News Update: WILL WE NEVER LEARN?


NEW UNITED NewsOrg

>>
Will We Never Learn?
by Parker Terrell
Staff Writer

We aren’t all historians.

Maybe that’s part of the reason that we seem to forget the sins of our fathers, and their fathers, and on back to the dawn of civilization. Or perhaps it’s merely the fact that for the Human race, “can” and “should” are frequently treated as interchangeable.

Before you close comm channels on me for being some modern-day Luddite, let me make it clear that I am neither anti-progress nor anti-technology. Innovation is what makes Humanity so great – and what got us out into this grand universe in the first place. It is a given that progress generally requires risk.

The problem is that we often approach technology with a set of blinders on as to what has gone before, whether because the bottom line is more important than cautionary tales or because we’re prideful enough to think that we’ll do better this time. With few exceptions, each generation is more advanced than the last, and so we believe that we cannot possibly err in the same fashion as our parents did. So here we are again. ArcCorp’s new AI initiative, which has been roundly debated since its announcement two weeks ago, seems to be running toward the same pitfalls that swallowed our ancestors.

We can’t possibly make the same mistakes.

Sometimes, it isn’t even our fault that we forget. It took nearly ten years of media investigation to wheedle the truth out of the government that maybe, possibly (sarcasm mine) the AI in the terraforming project had been at the heart of the Mars catastrophe of 2125. By then, nobody wanted to hear about it.

Of course, even that wasn’t the first AI disaster. The so-called “Lemming Car” incident in 2044 Tokyo holds that honor. But less than 100 years later, the government decided that it had worked out the kinks in AI systems and shipped one off to Mars. Of course, they didn’t tell us about it, just in case they were wrong.

They were.

And then just over 100 years after the Mars catastrophe (are we seeing a trend here, people?) came the Artemis and its AI, Janus. Even with the vast spectacle of the launch, somebody somewhere had to be nervous that we were still treading the dangerous road of artificial intelligence. But this time, they were sure they had gotten it right.

Admittedly, we’re not certain that the disappearance of 5000 people was entirely the AI’s fault, but it certainly had to have played a part.

With a failure that public, the genie went back into the bottle for a while. But we weren’t done. The UEEN merely went back to being covert when they launched their latest failed AI project. Recently declassified documents show just how close we came to war during what became known as the Horus Incident. Working in conjunction with Aegis, the Navy deployed a prototype AI-piloted Overlord bomber wing along the Xi’An front. Their hearts were in the right place: it was an attempt to bring home pilots who had been stationed along the Perry Line for their entire careers. But when Aegis’ billion-credit babies decided that their comms were compromised and shut them off – just in time to miss their recall order – we had to chase down and destroy our own mistake, the death knell for Aegis in the shipbuilding industry. Again, we aren’t sure that they would have plunged us into all-out war with the Xi’An by wandering around unsupervised in the neutral zone, but what might they have done?

Of course, the Imperator tried to keep the whole embarrassing sequence out of the feeds. Still, at least it seemed that we had finally learned a healthy enough fear of ever using AI in ships again.

Now it’s the corporate sector’s turn to believe that it won’t repeat the mistakes of prior generations.

ArcCorp tells us that its AI will be able to learn, which will enable it to succeed where others have failed. (Where have we heard that one before?) But here’s the thing: Humans do that, too. They have to have years of experience, specific training, and the proper licenses to ever venture out into space. But ArcCorp wants to once again send the artificial equivalent of children out in massive starships to navigate the vastness of our interstellar empire.

Hey, it’s progress. It protects Human lives. It must happen.

Why can’t we learn?

END FEED


Metadata

Series: News Update
CIG ID: 12977
Published: 2013-04-30