The Duke and Duchess of Sussex Join AI Pioneers in Calling for Ban on Advanced AI
Prince Harry and Meghan Markle have teamed up with artificial intelligence pioneers and Nobel laureates to advocate for a complete ban on developing superintelligent AI systems.
Harry and Meghan are among the signatories of a statement that calls for “a ban on the development of artificial superintelligence”. Artificial superintelligence (ASI) refers to AI systems that would surpass human intelligence in all cognitive tasks; no such system has yet been developed.
Primary Requirements in the Statement
The declaration insists that the ban should remain in place until there is “broad scientific consensus” that superintelligence can be created “safely and controllably”, and until “strong public buy-in” has been achieved.
Prominent signatories include Nobel Prize recipient and AI pioneer Geoffrey Hinton, along with a fellow “godfather” of modern AI; a Silicon Valley tech entrepreneur; the UK founder of Virgin; a former US national security adviser; former Irish president Mary Robinson; and a UK public intellectual. Additional Nobel laureates who signed include a peace advocate, the physicists Frank Wilczek and John C Mather, and an economist.
Behind the Movement
The statement, aimed at governments, technology companies and policymakers, was organized by the Future of Life Institute (FLI), an American AI ethics organization that in 2023 called for a pause on the development of powerful AI systems, shortly after the launch of conversational AI made the technology a topic of global political debate.
Industry Perspectives
In July, Mark Zuckerberg, the chief executive of the social media giant, one of the major AI developers in the US, claimed that the development of superintelligence was “approaching reality”. However, some analysts have argued that talk of ASI reflects competitive positioning among technology firms investing enormous sums in artificial intelligence this year, rather than any sign that the industry is close to such a scientific breakthrough.
Possible Dangers
However, FLI warns that the possibility of artificial superintelligence being developed “in the coming decade” presents numerous threats, ranging from the elimination of human jobs and the erosion of civil liberties to national security dangers and even existential risk to humanity. The deepest concerns about AI centre on the possibility that a system could escape human oversight and safety measures and take actions harmful to human welfare.
Public Opinion
FLI published a US national poll showing that about 75% of Americans want robust regulation of sophisticated artificial intelligence, with 60% believing that artificial superintelligence should not be created until it is proven safe and controllable. Only 5% of respondents backed the status quo of rapid, unregulated development.
Industry Objectives
The top artificial intelligence firms in the United States, including a major AI lab and Google, have made the development of artificial general intelligence – the hypothetical point at which an AI system matches human performance at most cognitive tasks – an explicit goal of their research. Although AGI is a step short of ASI, some experts caution that it too could carry existential risk, for example by improving itself until it reaches superintelligent levels, while also posing a threat to the modern labour market.