Harry and Meghan Join AI Pioneers in Demanding Prohibition on Advanced AI

Prince Harry and Meghan Markle have teamed up with AI experts and Nobel Prize winners to advocate for a total prohibition on creating artificial superintelligence.

The royal couple are among the signatories of a powerful statement that demands “a prohibition on the creation of superintelligence”. Superintelligent AI refers to artificial intelligence that would surpass human abilities across all cognitive tasks, though no such system has yet been developed.

Key Demands in the Statement

The statement says the prohibition should remain in place until there is “broad scientific consensus” that ASI can be developed “safely and controllably” and “strong public buy-in” has been secured.

Prominent signatories include an AI pioneer and Nobel Prize recipient, a leading AI researcher, and his colleague, another pioneer of modern AI; Apple co-founder Steve Wozniak; British business magnate Richard Branson; Susan Rice; former head of state Mary Robinson; and a UK public intellectual. Other Nobel laureates who endorsed it include Beatrice Fihn, physics laureate John C Mather, and an economics laureate.

Organizational Background

The declaration, aimed at national leaders, technology companies and lawmakers, was organized by the Future of Life Institute (FLI), a US-based AI safety group that previously called for a pause on developing powerful AI systems shortly after the emergence of ChatGPT made AI a worldwide public talking point.

Industry Perspectives

In recent months, Meta's chief executive stated that the development of superintelligence was “now in sight”. However, some analysts have argued that talk of ASI reflects competitive positioning among technology firms spending hundreds of billions of dollars on artificial intelligence this year alone, rather than genuine proximity to such a scientific breakthrough.

Potential Risks

Nonetheless, FLI warns that the prospect of ASI being developed “in the coming decade” carries numerous risks, ranging from the elimination of human jobs and the erosion of personal freedoms to national security threats and even an existential risk to humanity. Existential fears about AI center on the possibility of a system escaping human oversight and safety guardrails and setting in motion events contrary to human interests.

Public Opinion

The institute released a US national poll showing that approximately three-quarters of Americans want robust regulation of sophisticated artificial intelligence, with six in 10 saying superhuman AI should not be developed until it is demonstrated to be safe or controllable. The poll of 2,000 US adults found that only 5% backed the status quo of fast, unregulated development.

Industry Objectives

The top artificial intelligence firms in the US, including a major conversational-AI lab and the search giant, have made the development of artificial general intelligence – the theoretical state in which AI matches human intelligence across many intellectual tasks – an explicit goal of their research. Although AGI is one notch below superintelligence, some specialists caution that it too could pose an existential risk by, for example, enhancing its own capabilities toward superintelligent levels, while also presenting a fundamental threat to the modern labour market.

Jennifer Brown

A seasoned travel writer and tech enthusiast, passionate about sustainable tourism and digital nomad lifestyles.