Beau Azuma
Weekly 5
After reading the New York Times article, my heart rate spiked—I felt concerned and even a little fearful. I hated reading about this arms race toward a recursive, self-improving AI model. It was unsettling to see how companies like Meta are scrambling to stay competitive in the AI race and how others, like OpenAI, are scraping data from YouTube videos, public Google Docs, Slides, and more. While part of me is appalled by these data collection methods, I can also see why OpenAI might have felt justified in using publicly available data. I'm not trying to play devil's advocate—I still think it’s terrible that people’s countless hours of work are being used to train an AI model that others are paying to use.
I’m not sure why, but right after reading the article, I wanted to know what ChatGPT thought about AI’s rapid progression. More specifically, I asked, “Why are we trying to advance AI tech as fast as possible?” It basically told me that corporations feel pressured to keep up with the latest technology and that one potential benefit of rapid AI development is solving major problems that humans can’t.
Then I wanted to know what it had to say about training itself without relying on humans. Its response was pretty vague:
So even AI acknowledges that rapid advancement in AI technology can be dangerous. But my reasons for thinking it's dangerous differ from the AI's. The AI frames the risks as internal, like the possibility that it evolves unpredictably or goes rogue. I believe the bigger issue is how humans choose to use it. Either way, I think we, as a society, need to establish more regulations around what we're allowed to do with this technology and what corporations are allowed to do with our data.
Should we be more worried about AI, or about us, the people who use it?