Leverage Factor

Calculations on this page were performed with the uncertain.py Python script to tighten up otherwise very broad ranges.

For estimates of when advanced AI might come to exist, see, for example, Forecasting AI.

(existential threat probability) = (chance of advanced AI 20-100 years out) x (chance it is hostile or neutral) x (chance this represents an existential risk to humanity) = 60-80% x 50-80% x 80-100% = 29-53%
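
Note that 29-53% is narrower than the 24-64% obtained by multiplying the endpoints directly; this is the tightening uncertain.py provides, by sampling the inputs as distributions rather than propagating worst cases. Below is a minimal sketch of that idea, assuming each factor is uniform and a central 90% interval is reported (uncertain.py's actual method may differ):

```python
import random

def product_interval(ranges, n=100_000, ci=0.90):
    """Monte Carlo estimate of a central interval for the product of
    independent factors, each sampled uniformly from its range."""
    samples = []
    for _ in range(n):
        p = 1.0
        for lo, hi in ranges:
            p *= random.uniform(lo, hi)
        samples.append(p)
    samples.sort()
    return (samples[int(n * (1 - ci) / 2)],
            samples[int(n * (1 + ci) / 2) - 1])

# advanced AI 20-100 years out x hostile or neutral x existential risk
lo, hi = product_interval([(0.60, 0.80), (0.50, 0.80), (0.80, 1.00)])
print(f"existential threat probability: {lo:.0%} - {hi:.0%}")  # ~29% - 52%
```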

We guesstimate that the world has 10,000-20,000 AI researchers. The Association for the Advancement of Artificial Intelligence's AI Magazine has approximately 5,000 subscribers (AI Magazine - Magazine Advertising Costs).

We guesstimate the cost of successfully sponsoring legislation focused on the threat of hostile AI at $10m-50m/year for 5 years. This figure draws on the estimates on our Lobbying and Advocacy page; it is about a quarter of the figures for some of the broad areas surveyed there, since this is a narrowly focused issue. We would consider reducing it further were the issue not so novel.

What might such legislation contain? It is early to say, and a lot of policy work is required first, but the best course of action might be to establish a body similar to the Federal Communications Commission or the Nuclear Regulatory Commission, responsible for promulgating regulations relating to AI research. Such regulations might:

- outlaw certain kinds of research outright;
- prevent certain kinds of research without appropriate containment (e.g. prohibiting direct connection to the Internet);
- prohibit certain kinds of goals (such as increasing the size of, or the resources controlled by, the intelligence);
- restrict certain methods (at first glance, evolutionary approaches appear particularly dangerous);
- require other goals (such as using a Friendly AI control system or a failsafe goal such as machine suicide);
- declare advanced AI a dual military/civilian use technology, with the associated regulatory restrictions;
- require ethics committees to provide oversight; or
- require reporting similar to the drug approval process.

Much of this seems unnecessary now but will become necessary as AI matures. Ultimately, such regulation would need to be backed by treaty.

We will consider four possible approaches to dealing with the threat posed by hostile AI.

Approach 1: Prevent hostile AI through research to build a friendly AI control system.
Cost: $200m.
Real-world outcome: a chance of eliminating a 29-53% chance of an existential threat.
Outcome estimates: Guess that developing an open-source friendly AI control system costs 100 people x 20 years x $100k/person/year (the cost is treated as a point estimate; uncertainty enters through the other factors). Guess the chance of success at 5-30% (the problems appear hard), the chance that AI researchers will want to adopt the system at 20-80%, and the chance that they will technically be able to do so at 20-60%. Save 5-30% x 20-80% x 20-60% x 29-53% x 7b lives, with a value of $2m placed on a life.
Economic value in Western terms: $2.6t-340t (leverage factor 130,000 - 1,500,000).

Approach 2: Prevent hostile AI through co-opting researchers to build a friendly AI control system.
Cost: $40m-80m.
Real-world outcome: a chance of eliminating a 29-53% chance of an existential threat.
Outcome estimates: Reduce the cost of the previous approach to 20-40% of its original value, if it is possible to co-opt universities and other institutions into performing much of the research.
Economic value in Western terms: $2.6t-340t (leverage factor 400,000 - 6,000,000).

Approach 3: Prevent hostile AI through influencing general AI researchers.
Cost: $100m-200m/year x 40 years.
Real-world outcome: a chance of eliminating a 29-53% chance of an existential threat that occurs 40 years out.
Outcome estimates: Influence researchers and research institutions to focus on friendly AI through conferences, prizes, awards, advertising, and grants, at a cost of $5,000 per researcher per year; total cost $50-100m/year. Guess the odds that this influence makes a difference at 10-30%. Save 10-30% x 29-53% x 7b lives, with a value of $2m placed on a life.
Economic value in Western terms: $500t-2000t (leverage factor 150,000 - 4,000,000).

Approach 4: Delay hostile AI through legislation.
Cost: $10m-50m/year x 5 years.
Real-world outcome: delay by 2-10 years a 29-53% chance of an existential threat.
Outcome estimates: Guess that legislation delays the possible emergence of hostile AI by 2-10 years. Save 2-10 x 29-53% x 7b life-years. Assume the problem occurs 40 years out and choose not to discount life, so the value placed on a life-year is $2,000,000 / (69 / 2) ≈ $60k (roughly half a 69-year life expectancy remaining). Assume any loss from delaying beneficial AI technologies is small compared to the existential threat.
Economic value in Western terms: $300t-1680t (leverage factor 2,000,000 - 20,000,000).
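
To make the arithmetic in Approach 4 concrete, the same sampling approach can be applied end to end. The uniform distributions and 90% interval below are our assumptions about what uncertain.py does, not its actual code, and the 29-53% threat probability is approximated as a uniform range rather than the product distribution it came from; under those assumptions the output lands near the $300t-1680t and 2,000,000 - 20,000,000 figures above.

```python
import random

N, CI = 100_000, 0.90

def central_interval(samples, ci=CI):
    """Central confidence interval of a list of Monte Carlo samples."""
    s = sorted(samples)
    return s[int(len(s) * (1 - ci) / 2)], s[int(len(s) * (1 + ci) / 2) - 1]

# $2m per life, with roughly half a 69-year life expectancy remaining
life_year_value = 2_000_000 / (69 / 2)  # ~ $58k per life-year

values, leverages = [], []
for _ in range(N):
    delay = random.uniform(2, 10)            # years hostile AI is delayed
    p_threat = random.uniform(0.29, 0.53)    # existential threat probability
    cost = random.uniform(10e6, 50e6) * 5    # $10m-50m/year for 5 years
    value = delay * p_threat * 7e9 * life_year_value  # 7b life-years at stake
    values.append(value)
    leverages.append(value / cost)

v_lo, v_hi = central_interval(values)
l_lo, l_hi = central_interval(leverages)
print(f"economic value:  ${v_lo / 1e12:,.0f}t - ${v_hi / 1e12:,.0f}t")
print(f"leverage factor: {l_lo:,.0f} - {l_hi:,.0f}")
```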

Leverage Factor

2,000,000 - 20,000,000 (the highest-leverage approach above: delaying hostile AI through legislation)