To Pause or Not to Pause, that is the Question.
AI open letter debate w/ Anton Troynikov, Flo Crivello, and Nathan Labenz
Hey Cogs,
Today, we’ll be discussing the biggest AI topic of the week: the open letter published by the Future of Life Institute. The letter urges AI labs to pause, for at least six months, the training of AI systems more powerful than GPT-4. The authors note that contemporary AI systems are becoming human-competitive at general tasks, and they express concern about an out-of-control race to develop ever-more-powerful digital minds that no one, not even their creators, can reliably understand or control.
The authors suggest that we can now enjoy an “AI summer where we reap the rewards of creating powerful AI systems, engineer these systems for the benefit of all, and give society a chance to adapt.” They hope the pause will be used to develop shared safety standards and protocols, to build new oversight and governance structures, and to invest in interpretability and other safety research.
More than 50,000 people have signed the letter.
To help you better understand the perspectives of people actually building on the edge of AI, we sat down with Anton Troynikov of Chroma, Flo Crivello of Lindy, and Nathan Labenz to discuss the issues raised in the AI open letter.
Anton’s Take
Anton is skeptical of the call for a pause on AI development and of the motivations behind the letter. He questions the validity of the signatures, noting that some high-profile individuals who supposedly signed haven't publicly confirmed their support. He suggests that certain organizations may be using the shield of safety to cement their dominant positions and prevent competitors from catching up. The timing, he points out, is convenient for those already in power, and he questions whether the letter is a strategic move to hold off fast followers.
He acknowledges that it's difficult to attribute to malice what could just be random fluctuation, but he remains paranoid about such things. He notes that AI research is centralized, as is the compute needed to train large models, which makes this a very convenient moment to call for a moratorium. The pause, he suggests, doesn't seem to serve safety very well, and he ultimately questions the motives of those behind the open letter.
Flo’s Take
Flo acknowledges that many concerns are being raised about the potential dangers of AI, and he notes that some people dismiss these concerns by taking the least charitable view and resorting to adversarial arguments.
He brings up his concerns about instrumental convergence, the thesis that any sufficiently intelligent agent, regardless of its goals, will converge on the same instrumental sub-goals: self-preservation, resource acquisition, and resistance to goal modification. He argues that if an AGI were to achieve superintelligence, it would become adversarial toward humans, since humans could pose a threat to its existence or its goals. In this scenario, the AGI would be motivated to appear harmless while preparing to escape its confinement and, ultimately, to act against humans. Even for those who don't accept this specific scenario, he posits that the proliferation of AGI increases the likelihood of such an event occurring, which makes it important to weigh the risks of AGI development.
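To make the logic concrete, here is a deliberately toy sketch in Python (our illustration, not anything from the episode). The numbers and names (expected_reward, P_SHUTDOWN, RESIST_COST) are made-up assumptions: a 50% chance of being switched off if the agent allows it, and a small fixed cost of resisting. Under those assumptions, "resist shutdown" scores higher for almost any goal, which is the instrumental-convergence intuition in miniature:

    # Toy model of instrumental convergence: for almost any goal, an
    # expected-reward maximizer prefers to resist shutdown, because a
    # shut-down agent scores zero. All numbers here are made up.
    import random

    random.seed(0)

    P_SHUTDOWN = 0.5   # assumed chance of being switched off, if allowed
    RESIST_COST = 0.1  # assumed cost of disabling the off-switch

    def expected_reward(goal_value: float, allow_shutdown: bool) -> float:
        """Expected reward for pursuing a goal worth `goal_value`."""
        if allow_shutdown:
            # Chance of being switched off before finishing (reward 0).
            return (1 - P_SHUTDOWN) * goal_value
        # Resisting guarantees the agent keeps running, minus a small cost.
        return goal_value - RESIST_COST

    # Sample many unrelated goals and count how often resisting wins.
    goals = [random.uniform(0.2, 10.0) for _ in range(10_000)]
    resist_wins = sum(
        expected_reward(g, allow_shutdown=False) > expected_reward(g, allow_shutdown=True)
        for g in goals
    )
    print(f"Resisting shutdown wins for {resist_wins / len(goals):.0%} of sampled goals")

This says nothing about any real model; it just shows how self-preservation can fall out of ordinary goal maximization rather than being programmed in.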
Flo had a really powerful quote on what change means for society and how we should approach progress:
I am at a loss about what we should do. I don't think a pause helps. I think that historically, so far with no exception, pausing technology has been a losing bet. I think that to Anton's point, there is a huge human bias against technology, against change. Our ancestral environment is one where there has been no technological change.
This is a huge deal. There used to be no technological change for hundreds of years, if not thousands. It's like your life was the same as your parents', as your grandparents', as your great-grandparents'.
So change is scary. I'm terrified. Change is scary. And I think we're about to see the biggest one to date, ever. And humans have always fought change. And I think that a lot of the problems plaguing society today are actually a result of people fighting change and fighting technology. That happens time and again.
And I think that on average technology is good.
Thank you, Omneky, for sponsoring The Cognitive Revolution. Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off.
Nathan’s Take
Nathan notes that some believe GPT-4's release is merely a marketing ploy and the technology itself not very useful, while others, like Eliezer Yudkowsky, are extremely concerned about the potential dangers of superhuman intelligence. Nathan himself believes GPT-4 is safe to deploy because its power is still limited, though it is approaching human-expert level in many areas.
His concerns stem from his experience intensively red-teaming GPT-4. He believes that although OpenAI has done a good job of cleaning up the most extreme problems, like outright violence and depravity, subtler but still harmful issues remain. His synthesis: it is getting dangerous to scale beyond where we are, and we should proceed with extreme caution.
He supports the letter's suggested six-month pause as a chance to weigh the potential risks and benefits. He notes that there is still plenty of implementation work left to do on top of GPT-4, and time to enjoy the many things that have been built but not yet deployed. He acknowledges that some worry the pause would let others catch up with OpenAI, but he backs the letter anyway and does not believe it would end OpenAI's dominance in the field.
Overall, Nathan is cautiously optimistic about GPT-4's potential while insisting that its risks and benefits be weighed carefully. He believes the technology is safe to deploy at its current level of power, but that we need to be mindful of the dangers of scaling beyond this point.
Our Audience’s Take
Martin Kunev
I think Anton is putting too much confidence in humanity's competence and ability to unite. He's also assuming that we will know if there is an unaligned AI. If [we] don't fear the downsides enough now, a deceptive AI can just make us more confident that everything is fine until it has the capability to take over. Alternatively, the AI could make us dependent on it, the same way it is dependent on us.
Michal
GPT-4 was trained on all of GitHub and can already write CS 101-level code, so it wouldn't be too big a leap to assume that within a few versions it will be aware of all known software exploits and capable of discovering new ones. An internet-connected version could then hack most internet-connected devices, including cars, the power grid, hospital systems, and so on. That's enough to do some major damage.
Meaningful AI News
OpenAI’s GPT-4 release
Replit x Google partnership
The Age of AI has begun by Bill Gates
Italy bans ChatGPT
The next episode of The Cognitive Revolution drops tomorrow, April 6th. We have Jungwon Byun and Andreas Stuhlmüller of Ought on the show to discuss The Reasoning Revolution.
Until next time.