If AI gets too powerful, it could start making decisions in military battles

A MARTÍNEZ, HOST:

Artificial intelligence technology is advancing faster than anyone, even its developers, expected it to. That's why experts are asking everyone to hit the pause button while governments catch up and impose guardrails on AI systems before - worst-case scenario - the potentially dangerous tech drives humans to extinction.

Max Tegmark is a professor of physics at the Massachusetts Institute of Technology and one of the experts who signed the open letter on the dangers of AI. He says one example of the potential dangers of AI is how it's being used as a weapon.

MAX TEGMARK: What's new with AI in the military is this shift towards actually letting AI make decisions about who gets killed and who doesn't, which are traditionally reserved for people.

MARTÍNEZ: Now, letting AI make these kinds of decisions - are they made with enough guardrails that humans can still be in charge?

TEGMARK: No. It's all up to whoever owns the weapons, right? So, for example, the United Nations had a report about how, fairly recently, Turkish drones were used in a fully autonomous mode to hunt down and kill fleeing people. The guardrails there were whatever the militia, or whoever controlled those Turkish-made drones, wanted them to be.

MARTÍNEZ: In those situations, I mean, is there nothing that can be done other than flat-out war against those kinds of countries that would have that kind of artificial intelligence? I mean, is that what we're looking at? There is no negotiation, it seems like, with that kind of AI.

TEGMARK: Well, there's plenty that can be done. It's just that political will has been sort of lacking so far. With biological weapons, for example, we came together - the major military powers of the world - and decided these are disgusting weapons. We banned biological weapons, and it's a great success. And we could do the same with lethal autonomous weapons, too, if we just agreed that it's a disgusting idea to delegate to machines decisions about who should get killed, who's the good guy and who's the bad guy, and that that responsibility should always remain with humans.

MARTÍNEZ: Let's hear from General Mark Milley, chairman of the Joint Chiefs of Staff. He talked about the military implementation of the technology.

MARK MILLEY: The United States policy right now, actually, with respect to artificial intelligence and its application to military operations is to ensure that humans remain in the decision-making loop. That is not the policy, necessarily, of adversarial countries that are also developing artificial intelligence.

MARTÍNEZ: OK. So, Professor, on our end, if we're putting these kinds of guardrails and these kinds of controls on our artificial intelligence, how wise is that if we see the rest of the world not necessarily agreeing with what we do?

TEGMARK: It's ridiculous, as you point out, 'cause what Mark Milley is saying here is that, A, this stuff is going to be decisive on the battlefield, and, B, we are going to hold ourselves to high moral standards, where it's a human making the decision. If we're going to have these high standards, we should insist that everybody else has those high standards also. We just need the U.S. to push hard internationally for a ban on a narrow class of weapons.

MARTÍNEZ: So where does that put us right now, then? Because here's the thing - I think a lot of people hear these stories of artificial intelligence in the military, artificial intelligence development, and automatically think of their favorite movie or TV show, where it's inevitable that we will eventually be replaced, if not used for energy. So at what point do we take that fantasy out of our heads and deal with the reality of what we have in front of us?

TEGMARK: The reality is it's pretty likely to happen the way we're going, but it's not inevitable. You know, if you go to Ukraine and tell people, hey, it's inevitable that you're going to lose to the Russians, so just give up, people would be pissed at you. And I get similarly upset when people say it's inevitable that we're going to get replaced by AI because it's not, unless you convince yourself that it's inevitable and make it a self-fulfilling prophecy. We're building this stuff.

So, you know, I've been working for many years here at MIT with my AI research group on how we can get machines to understand our goals, adopt them and keep them as they get smarter. And it looks like we will be able to solve these problems, but we haven't solved them yet. We need more time. And unfortunately, it's turned out to be easier to just build super-powerful AI and make tons of money on it than it has been to solve this so-called alignment problem. That's why many of us have called for a pause to put some safety standards in place - to give the community enough time to get to this good future, and not rush so fast that we wipe ourselves out in the process.

MARTÍNEZ: That's Max Tegmark. He's a professor of physics at the Massachusetts Institute of Technology and one of the experts who signed an open letter on the dangers of AI. Professor, thanks a lot.

TEGMARK: Thank you very much.

Transcript provided by NPR, Copyright NPR.

NPR transcripts are created on a rush deadline by an NPR contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.