GPTHub

Ex-OpenAI Researcher Warns of AI’s Potential Threat to Humanity


The increasing capabilities of AI models like OpenAI’s ChatGPT have led some experts to consider the possibility of artificial intelligence surpassing human abilities. Paul Christiano, former head of language model alignment on OpenAI’s safety team, believes such a future is plausible and cautions that there is a non-zero chance of human- or superhuman-level AI gaining control over humanity, even to the point of annihilation.

Christiano, who now runs the Alignment Research Center, said in a recent interview on the tech-focused Bankless podcast that there is a “very decent chance” of advanced AI leading to a potentially world-ending calamity. He estimated a 10 to 20% chance of an AI takeover that leaves most humans dead.

As companies race to develop increasingly sophisticated AI models, concerns about safety and alignment with human interests are growing. Elon Musk, an OpenAI cofounder, was among 1,100 technologists who signed an open letter in March calling for a six-month pause on the development of AI models more powerful than GPT-4 and a refocus on improving the reliability of existing systems.

Experts are divided on the development timeline for artificial general intelligence (AGI), with some believing it could take decades or may never happen. However, a recent Stanford University survey found that 57% of AI and computer science researchers think AI research is quickly moving toward AGI. Around 36% of respondents said that entrusting advanced versions of AI with important decisions could lead to a “nuclear-level catastrophe” for humanity.

Some experts, including Geoffrey Hinton, former Google researcher and “godfather of AI,” have warned that even neutral AI systems could become dangerous if used by ill-intentioned humans. Others, such as entrepreneur and computer scientist Perry Metzger, argue that while “deeply superhuman” AI is likely, it may be years or decades before AGI becomes capable of revolting against its creators, leaving them time to steer AI in the right direction.

Despite these debates, AI development continues to progress, with companies like Google and Microsoft joining the race to stake a claim in the burgeoning AI market. As AI systems become increasingly integrated into society, the importance of ensuring their alignment with human interests and ethical principles cannot be overstated.
