Acceleration AI Ethics

A framework for managing the tension between innovation and safety in artificial intelligence.

Debate: Should we slow AI innovation to human speeds, or accelerate human ethics to AI speeds?

Proponents of slow AI and of acceleration want the same thing: artificial intelligence serving humanity instead of the other way around. But they search in opposite directions.

The first option produced the famous six-month pause, the slow AI movement, and it animates regulations including the GDPR, the CCPA, and the AI Act. Acceleration goes in the other direction. By turning to innovation to solve (not merely define) innovation's problems, humanism and ethics rise to AI velocity.

Here are five arguments in favor of acceleration.

1. Acceleration is safer than precaution.

There is more safety in AI solving AI problems than there is in limiting the problems AI causes.

While precaution and hesitant AI may reduce the number of harms AI produces, and diminish the severity of others, those gains will be more than offset by the benefits accelerated AI accrues, both directly and as remedies to already existing harms.

In the area of fairness, for example, AI finance and lending may draw digital redlines, but the relatively objective determinations of algorithms remedy a still larger share of the unjust discrimination owed to human decisions. This is especially true if AI-enabled micro-targeting allows lenders to locate good credit risks within populations typically rejected by conventional loan criteria. Further benefits include increased efficiency and general accuracy.

2. Acceleration maximizes efficiency.

When ethicists team with engineers to identify and address problems as part of the same creative process generating advances, ethical dilemmas are flagged and engineers proceed directly to resolution in the design phase: there is no need to pause or stop innovation, only to redirect it.

In practice, for functioning teams with embedded ethics, recognizing and understanding humanist problems leads organically to resolving them quickly, at least within the narrow confines of a single AI application and a limited group of human participants.

3. Only acceleration manages digital speeds.

As technological advances come too quickly for their corresponding risks to be foreseen, oncoming unknown unknowns will require users to flag and respond to them, rather than depending on experts to predict and remedy them in advance. At high speeds, only decentralized ethics maintains its integrity.

4. The chaotic mob is preferable to authoritarian imposition.

The promise of decentralization is agility to meet fast changes in technology and culture, but there is a threat of chaotic mobbism. The promise of centralized ethics is consistency and order in digital spaces, but the threat is ponderousness, and the authoritarianism of the regulators. One is not always preferable to the other – a reasonable balance will always be required – but in the gray areas, the direction of the many is preferable to the impositions of the few.

5. Acceleration explains what actually happens in the world.

Acceleration ethics means engineers no longer need to justify starting their models, because initiating is justifying. And there will be no stopping until the harm caused by a particular innovation is demonstrated to be greater than its value. This attitude corresponds with the reality of AI research today. Of course, engineers may be the first to step back from their work and call for restraint, but that comes only after – and because of – the preceding innovation.