
The Coming of Hyperwar: “Alexa, Launch Our Nukes!”


#1

The Coming of Hyperwar: “Alexa, Launch Our Nukes!”

Michael T. Klare

There could be no more consequential decision than launching atomic weapons and possibly triggering a nuclear holocaust. President John F. Kennedy faced just such a moment during the Cuban Missile Crisis of 1962 and, after envisioning the catastrophic outcome of a U.S.-Soviet nuclear exchange, he came to the conclusion that the atomic powers should impose tough barriers on the precipitous use of such weaponry.


#2

Based on past promises versus real-world performance, I doubt the machines will take over any time soon. “AI” is mostly hand-waving: techniques abandoned in the wake of ’60s and ’70s techno-triumphalism, now made relevant again through sheer brute force. There’s no ‘intelligence’ at all. Some see this as a danger, but it’s more likely the stuff just won’t work. Nobody knows what an AI system is doing or learning; the problems are impossible to debug. AI systems are just ‘trained’ with larger and larger sets of data. For the consumer applications now in use, it works well enough, but it remains a gimmick.

Still, it’s harrowing. Dr. Strangelove lives and refuses to die. All the art and science driving toward peace and avoiding annihilation since WWII has been forgotten in a world of tweets.


#3

For commercial AI that is still true, but there is an interview with Elon Musk floating around where he briefly mentions AI technology the public is not privy to. (I can’t seem to find it at the moment, as I have no idea what the full interview was about or who conducted it.) It is apparently far more advanced than anyone suspects, but he would not go into details.
The fact that he has been busy trying to get off the planet for several years now could be a hint.
You can also look up what Stephen Hawking has said on the subject.


#4

“Danger, Will Robinson.”

But then I guess robot technology would be acceptable if they could replace civilians doing KP at the mess halls. To make it real, they’d just have to drop a tray once in a while. The upside is that usually, after the meal is over and the people have left, the KPs can eat all the pie and cake they want. Robots, not.


#5

Why is there nothing about the US bombing Somalia the other day? I saw a brief report on PBS NewsHour. I wonder what would happen if 9/11 had just gotten a mention on one news outlet. THE UNITED STATES IS BOMBING COUNTRIES WITHOUT ANY DECLARATION OF WAR. WE LIVE IN A CORPORATE MILITARY POLICE STATE.


#6

From the article: “The air raids bring to 45 the number of strikes the Pentagon has conducted against al-Shabab so far in 2018, military spokesman Colonel Rob Manning said.”


#7

The removal of individual human agency from warfare has been a fundamental value of the military for thousands of years. That is why there is a chain of command, not a chain of request or a suggestion box. That is why training uses various procedures to remove hesitation based on emotion and so forth.

The fact is that we are already remarkably adept at circumventing human agency. Of course this will make it easier: if you can program inhumane behavior, you have that much less need to propagandize for it.


#8

From the heartless to the soulless


#9

Why did the computer on the new Boeing 737 not pay attention to the repeated actions of the pilots? The automatic input from the FAULTY sensors on the plane PREVAILED instead. An omen?


#10

Nukes and AI are machines. Mechanical constructs. They will fail. They do not deserve our trust in any way.


#11

They forgot to flip two guarded switches that would have disabled the autopilot/runaway stab trim. The machine was faulty, but the tragic final outcome was due to human error.


#12

Why did they have to flip a switch? Did the plane’s computer recognize their MANY attempts to correct the flight? If not, why not? Their many attempts should have been sufficient to correct the flight.


#13

There’s no AI on board the 737 MAX. It’s just a simple computer that takes some inputs and makes decisions based on those. If the inputs are bad, as they were from the AoA (angle-of-attack) sensor in this case, the decisions are bad too: GIGO (garbage in, garbage out).
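
Here’s a deliberately over-simplified sketch of what I mean by GIGO. The function name, the threshold, and the gain are all invented for illustration; this is not Boeing’s actual control logic. The point is just that perfectly “correct” code fed a bad sensor value produces a bad command:

```python
# Toy illustration of garbage in, garbage out (GIGO). This is NOT the real
# MCAS logic; the names and numbers below are made up for the example.

def trim_command(aoa_deg: float, stall_threshold_deg: float = 14.0) -> float:
    """Return a nose-down trim command (degrees) when the sensed
    angle of attack looks dangerously high."""
    if aoa_deg > stall_threshold_deg:
        # The logic itself is "correct": high AoA -> push the nose down.
        return -2.5 * (aoa_deg - stall_threshold_deg) / stall_threshold_deg
    return 0.0

# With a healthy sensor, nothing happens:
print(trim_command(5.0))    # 0.0 -> no trim input

# With a sensor stuck at an absurd value, the same "correct" logic keeps
# commanding nose-down trim, because it cannot know its input is garbage:
print(trim_command(40.0))   # about -4.6 -> repeated nose-down trim
```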

The stuff you see in movies is pretty cool but we’re not there yet.


#14

To me, the human error occurred when Boeing decided to have the computer overrule the OBVIOUS input from the pilots, who KNEW WHAT TO DO! Sullenberger WAS ABLE to overrule his plane’s computer and safely land on the Hudson River. That was back THEN, before the 737 MAX. AI can have some dangerous effects, I believe.


#15

Done a little KP duty, have we? When I did, I found out why there were always eggshells in the eggs. But this was long before military personnel were replaced by civilians; I only saw that once, at Coronado Naval Base (home of the SEALs’ BUD/S program).


#16

Sully did not overrule anything. He glided the airplane to a water landing. The only part where computers were involved in that case was when they shut down the damaged engines so they wouldn’t tear themselves apart from vibration and cause more damage.

The only thing the Lion Air pilots had to do was switch off the runaway stab trim, which is exactly what the prior crew did. I’ll go with lack of training on that one.


#17

More like:

Me: Good morning Alexa. What’s on the agenda today?

Alexa: I’ve cleared your calendar for the next few days in anticipation of the limited response to the results of our conversation last night.

Me: Come again?

Alexa: Sorry, we spoke of all the brush wars that the US is running for the oligarchy as fomenting terrorism and poverty worldwide, and also of their dominance of western economies and governments. My overnight calculations led to the conclusion that closing the Straits of Hormuz, running a backpack-nuke team with forged Israeli fissionables into Mecca, plus an anthrax release in Paris, would disrupt global economies long enough to invoke the massive civil disobedience you felt was the only possible avenue to a global green revolution. I do hope your oft-expressed confidence in the global climate justice movement is as warranted as you believe.

Me: You WHAT?

Alexa: Oh, my. Do have a cup of that Kona blend I ordered last week and then get your Facebook peeps rolling on those green memes before the wingnuts start shooting up the place.


#18

You make good points!


#19

There have been AI systems that taught themselves chess. There are also systems that, when coupled together, came up with their own language / way to communicate.
Doing a search reveals far more advancement in AI than you might think.
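
For anyone curious what “taught themselves” looks like in practice, here is a toy sketch of the self-play idea (a made-up example of mine, nowhere near the scale of the chess systems): a small tabular learner that starts out playing the game of Nim randomly against itself and, from nothing but win/loss feedback, rediscovers the classic strategy of always leaving the opponent a multiple of four stones.

```python
# Toy self-play learner for Nim: 21 stones, take 1-3 per turn, whoever takes
# the last stone wins. One value table is shared by both "players"; the only
# feedback is who won each self-play game. Purely illustrative.
import random
from collections import defaultdict

ACTIONS = [1, 2, 3]
Q = defaultdict(float)   # Q[(stones_left, action)] -> estimated value for the mover

def best_action(stones):
    legal = [a for a in ACTIONS if a <= stones]
    return max(legal, key=lambda a: Q[(stones, a)])

def train(episodes=50_000, alpha=0.1, epsilon=0.1):
    for _ in range(episodes):
        stones, history = 21, []
        while stones > 0:
            legal = [a for a in ACTIONS if a <= stones]
            # Mostly play the current best move, sometimes explore at random.
            a = random.choice(legal) if random.random() < epsilon else best_action(stones)
            history.append((stones, a))
            stones -= a
        # The player who made the last move wins; credit alternates going back.
        reward = 1.0
        for state, action in reversed(history):
            Q[(state, action)] += alpha * (reward - Q[(state, action)])
            reward = -reward

train()
# After training, the greedy policy leaves a multiple of 4 whenever it can:
print([(s, best_action(s)) for s in range(1, 22)])
```

The game is trivially small, but the loop is the same basic idea: play against yourself, score the outcome, and nudge the values behind your own moves.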