Vitalik Buterin Warns of AI’s Potential to Surpass Humanity
Ethereum co-founder Vitalik Buterin has voiced concerns that unchecked AI development could produce systems that surpass human capabilities, potentially making AI the planet’s next “apex species.”
Buterin delved into the fundamental differences between AI and other human inventions, emphasizing the unique and potentially dangerous nature of AI’s rapidly advancing capabilities.
Buterin Warns of AI Surpassing Humanity
In a blog post published on November 27, Buterin argues that AI, as a new form of intelligence, differs fundamentally from earlier human inventions such as social media or the printing press. Its ability to rapidly augment its own intelligence gives it significant potential to surpass human cognitive abilities.
Adding to the urgency of his message, the Ethereum co-founder emphasizes that this progression could lead to scenarios in which a superintelligent AI views humanity as a threat to its own survival, potentially ending human existence.
Supporting his stance, Buterin referenced an August 2022 survey of more than 4,270 machine-learning researchers, which estimated a 5–10% chance of AI leading to human extinction.
Buterin also questions whether a world with superintelligent AIs would be one we would feel satisfied living in, pointing to the instability of outcomes in which humans end up as “pets.”
AI risk 2: would a world with superintelligent AIs even be a world we would feel satisfied living in? On the instability of outcomes other than us being pets, and the failures of scifi to present realistic alternatives. pic.twitter.com/o9z8s6QAej
— vitalik.eth (@VitalikButerin) November 27, 2023
Additionally, he highlights the danger of AIs enabling totalitarianism by leveraging surveillance technology, which authoritarian governments have already exploited to suppress opposition.
Buterin Advocates for Brain-Computer Interfaces
Buterin also emphasizes that, extreme as these claims may be, there are ways for humans to assert control and steer AI development in a beneficial direction. He advocates integrating brain-computer interfaces (BCIs) to mitigate the dangers.
These interfaces, which establish a communication pathway between the brain’s electrical activity and external devices like computers, could significantly shorten the communication loop between humans and machines. More importantly, BCIs could ensure that humans maintain meaningful agency over AI, reducing the risk of AI acting in ways unaligned with human values.
Buterin’s vision goes beyond just technological measures. He calls for “active human intention” in steering AI development towards benefiting humanity. This approach contrasts with solely profit-driven AI advancements, which may not always align with the most desirable outcomes for human society.
We are the brightest star. There is a lot of good that can come from ongoing human progress, into the stars and beyond. But there are big forks in the road and we need to choose carefully. Accelerate, but accelerate carefully and well. pic.twitter.com/vxPlJWFVGq
— vitalik.eth (@VitalikButerin) November 27, 2023
Concluding his reflections, Buterin remains optimistic, describing humans as the universe’s brightest star: a species that has spent thousands of years developing technology that expands human potential, and one he hopes will continue on that path. He envisions a future in which human inventions such as space travel and geoengineering preserve the beauty of Earthly life for billions of years.