Artificial Intelligence & Robotics, as Viewed From the Early 1950s

In the early 1950s, electronic computers were large and awe-inspiring, and were often referred to as ‘electronic brains’.  At the same time, industrial automation was making considerable advances, and much more was expected from it.  There was considerable speculation about what all this meant for Americans, and for the human race in general.

Given the recent advances of AI and robotics in our own era–and the positive and negative forecasts about the implications–I thought it might be interesting to go back and look at two short story collections on this general theme:  Thinking Machines, edited by Groff Conklin, and The Robot and the Man, edited by Martin Greenberg.  Both books date from around 1954. Here are some of the stories I thought were most interesting, mostly from the above sources, but a couple from other places.

Virtuoso, by Herbert Goldstone.  A famous musician has acquired a robot for household tasks.  The robot–dubbed ‘Rollo’ by the Maestro–notices the piano in the residence, and expresses interest in it.  Intrigued, the Maestro plays ‘Clair de Lune’ for Rollo, then gives him a one-hour lesson and heads off to bed, after authorizing the robot to practice playing on his own.  He wakes to the sound of Beethoven’s ‘Appassionata’.

Rollo was playing it. He was creating it, breathing it, drawing it through silver flame. Time became meaningless, suspended in midair.

“It was not very difficult,” Rollo explains.

The Maestro let his fingers rest on the keys, strangely foreign now. “Music!” he breathed. “I may have heard it that way in my soul. I know Beethoven did.”

Very excited, the Maestro sets up plans for Rollo to give a concert–for “Conductors, concert pianists, composers, my manager.  All the giants of music, Rollo.  Wait until they hear you play!”

But Rollo’s response is unexpected.  He says that his programming provides the option to decline any request that he considers harmful to his owner, and that therefore,  he must refuse to touch the piano again.  “The piano is not a machine,” that powerful inhuman voice droned.  “To me, yes.  I can translate the notes into sounds at a glance.  From only a few I am able to grasp at once the composer’s conception.  It is easy for me.”

“I can also grasp,” the brassy monotone rolled through the studio, “that this…music is not for robots.  It is for man.  To me it is easy, yes…it was not meant to be easy.”

The Jester, by William Tenn.   In this story, it is not a musician but a comedian who seeks robotic involvement in his profession.   Mr Lester…Lester the Jester, the glib sahib of ad lib…thinks it might be useful to have a robot partner for his video performances.  It does not work out well for him.

Boomerang, by Eric Frank Russell.  In this story, the robot is designed to be an assassin, acting on behalf of a group representing the New Order.   Very human in appearance and behavior, it is charged with gaining access to targeted leaders and killing them. If it is faced with an insoluble problem–for example, if the human-appearing ‘William Smith’ should be arrested and cannot talk his way out of the situation–then it will detonate an internal charge and destroy itself.  As a precaution, it has been made impossible for the robot to focus its lethal rays on its makers. And, it is possessed of a certain kind of emotional drive–“William Smith hates personal power inasmuch as a complex machine can be induced to hate anything.  Therefore, he is the ideal instrument for destroying such power.”

What could possibly go wrong?

Mechanical Answer, by John D MacDonald.  For reasons that are never explained, the development of a Thinking Machine has become a major national priority. After continued failures by elite scientists, a practical engineer and factory manager named Joe Kaden is drafted to run the project. And I do mean drafted: running the Thinking Machine project means being separated from his wife Jane, who he adores. And even though Joe has a record of inventiveness, which is the reason he was offered the Thinking Machine job in the first place, he questions his ability to make a contribution in this role.

But Jane, who has studied neurology and psychiatry, feeds him some ideas that hold the key to success.  Her idea…basically, a matrix of associations among words and concepts…allows the machine to show more ‘creativity’ than previous approaches, and it displays great skill as a kind of Super-ChatGPT question-answerer.

When the Thinking Machine is demonstrated to an audience which includes not only its American sponsors but the Dictator of Asia, the Ruler of Europe, and the King of the States of Africa, the questions to be asked have been carefully vetted.  But when it is asked an unvetted question–“Will the machine help in the event of a war between nations?”–the answer given is unexpected:  “Warfare should now become avoidable.  All of the factors in any dispute can be given to the Machine and an unemotional fair answer can be rendered.”

Of course.

Burning Bright, by John Browning.  A large number of robots are used to work in the radiation-saturated environment within nuclear power plants.  The internal mental processes of these robots are not well understood, hence, no robots are allowed outside of the power plants–it is feared that robot armies could be raised on behalf of hostile powers, or even that robots themselves will become rivals of humans for control of the planet.  So robots are given no knowledge of the world outside of power plants, no knowledge of anything except their duty of obedience to humans.  And whenever a robot becomes too worn-out to be of any continued usefulness, it is scrapped–and its brain is dissolved in acid.

One day, a robot facing its doom is found to have a molded plastic star in its hands–apparently a religious object.

Though Dreamers Die, Lester del Rey.  Following the outbreak of a plague which looks like it may destroy all human life on earth, a starship is launched. A small group of humans, who must be kept in suspended animation because of the great length of the journey to a habitable planet, is assisted by a crew of robots.  When the principal human character, Jorgen, is awakened by a robot, he assumes that the ship must be nearing its destination.  It is, but the news is grim.  All of the other humans on board have died–Jorgen, for some reason, seems to be immune to the plague, at least so far. And among those who did not survive was Anna Holt, the only woman.

If it had been Anna Holt who had survived, Jorgen reflects, she could have continued the human race by using the frozen sperm that has been stored. “So it took the girl!  It took the girl, Five, when it could have left her and chosen me…The gods had to leave one uselessly immune man to make their irony complete, it seems!  Immune!”

“No, master,” the robot replies. The disease has been greatly slowed in the case of Jorgen, but it will get him in the end–maybe after thirty years.

“Immunity or delay, what difference now?  What happens to all our dreams when the last dreamer dies, Five?  Or maybe it’s the other way around.”

All the dreams of a thousand generations of men had been concentrated into Anna Holt, he reflects, and were gone with her.  The ship lands on the new world, and it appears to be perfect for humans.  “It had to be perfect, Five,” he said, not bitterly, but in numbed fatalism. “Without that, the joke would have been flat.”

Man and robot discuss the world that could have been, the city and the statue to commemorate their landing. “Dreams!” Jorgen erupts. “Still, the dream was beautiful, just as this planet is, master,” Five responds.  “Standing there, while we landed, I could see the city, and I almost dared hope.  I do not regret the dream I had.”

Jorgen decides that the heritage of humanity can go on–“When the last dreamer died, the dream would go on, because it was stronger than those who had created it; somewhere, somehow, it would find new dreamers.”  And Five’s simpatico words–combined with a cryptic partial recording about robot minds and the semantics of the first person signature, left by the expedition’s leader, Dr Craig–convince him that the robots can carry forward the deeper meaning of the human race.  Five demurs, though:  “But it would be a lonely world, Master Jorgen, filled with memories of your people, and the dreams we had would be barren for us.”

There is a solution, though. The robots are instructed to forget all knowledge of or related to the human race, although all their other knowledge will remain. And Jorgen boards the starship and blasts off alone.

Dumb Waiter, Walter Miller.  (The author is best known for his classic post-apocalyptic novel A Canticle for Leibowitz.)   In this story, cities have become fully automated—municipal services are provided by robots linked to a central computer system.  But when war erupts–featuring radiological attacks–part of the population is killed, and the rest evacuate the cities. In the city that is the focus of the story, there are no people left, but “Central” and its subunits are working fine, doing what they were programmed to do many years earlier.

I was reminded of this story in 2013 by the behavior of the Swedish police during rampant rioting–issuing parking tickets to burned-out cars.  My post is here.

The combination of human bureaucracy and not-too-intelligent automation seems likely to lead to many events which are similar in kind if not (hopefully) in degree.


Year of Consent, Kendell Foster Crossen.  This 1954 novel is set in the then-future year of 1990.  The United States is still nominally a democracy, but the real power lies with the social engineers…sophisticated advertising & PR men…who use psychological methods to persuade people that they really want what they are supposed to want.  The social engineers are aided in their tasks by a giant computer called Sociac (500,000 vacuum tubes! 860,000 relays!) and colloquially known as ‘Herbie.’  There are also ‘psychotherapy calculations’, devices which can help people overcome their ‘communication blocks’, ie, any reluctance to accept the promulgated opinions and worldview.  I reviewed the book here.

The Golden Egg, Theodore Sturgeon.  The protagonist is a being from a superintelligent race which resides many galaxies away. Over time, this race has solved all its problems and reduced itself to nothing but brains, each contained in an invulnerable egg-like shell and endowed with control over powerful forces. Their problem is boredom.

There was nothing for them. They hung in small groups conversing of things unimaginable to us, or they lay on the plains of their world and lived within themselves until a few short aeons buried them, all uncaring, in rubble and rock. Some asked to be killed and were killed. Some were murdered by others because of quibblings in remote philosophic discussions. Some hurled themselves into the blue sun, starved for any new sensation, knowing they would find there an instant’s agony. Most simply vegetated. One came away.

That one is our protagonist. Traveling across the galaxies, he eventually comes to earth.  He observes humans, and decides he would like to try being one–not a difficult task given his superhuman abilities.  Noting that there are two sexes, he chooses to be one of the males.  To recreate himself in human form, he must first set up his apparatus.

The machine had no switches, no indicators, no dials. It was built to do a certain job, and as soon as it was completed it began working. When the job was done it quit. It was the kind of machine whose perfection ruined the brain’s civilization, and has undoubtedly ruined others, and will most certainly ruin more.

He has chosen his physical form by using his telepathic abilities to enter the mind of a nearby woman and exploring her concept of the ideal male. This works out well for him–but he chooses his clothing and his manner of speech by modeling a tramp named Chauncey, and this works out much less well.  When he meets a woman named Ariadne, she admires his handsomeness but is turned off by those other attributes.  So he probes her mind and adopts the manner of talking common among Ariadne and her female friends.  Which, he finds, does not make a positive impression at all.

I found the entire story, together with a short review, here.

Appointment in Tomorrow, Fritz Leiber.  Similar to Mechanical Answer, this story is about a Thinking Machine that is perceived as an oracle…but with a twist.  When the machine is asked a question:

the question tape, like a New Year’s streamer tossed out a high window into the night, sped on its dark way along spinning rollers. Curling with an intricate aimlessness curiously like that of such a streamer, it tantalized the silvery fingers of a thousand relays, saucily evaded the glances of ten thousand electric eyes, impishly darted down a narrow black alleyway of memory banks, and, reaching the center of the cube, suddenly emerged into a small room where a suave fat man in shorts sat drinking beer.

He flipped the tape over to him with practiced finger, eyeing it as a stockbroker might have studied a ticker tape. He read the first question, closed his eyes and frowned for five seconds. Then with the staccato self-confidence of a hack writer, he began to tape out the answer.

The story can be found online in this collection.

There are a few more interesting stories in the anthologies, which I may write about at some point, but this is enough for a single post.



31 thoughts on “Artificial Intelligence & Robotics, as Viewed From the Early 1950s”

  1. For a more humorous take on the possibilities, a worthwhile read is the short story “Curse 5.0” in the short story collection “The Wandering Earth” by celebrated Chinese Sci-Fi author Cixin Liu.

    Curse 1.0 was written long ago in a now-antique programming language by a young woman who was angry with her boyfriend. It did little more than generate insults. Over the years, other programmers found it and added to the code, generating updates of the Curse. But everything goes off the rails when destitute Sci-Fi author Cixin Liu finds an abandoned laptop in a trash can while searching for discarded food & alcohol. In the age of the Internet of Things and in a city run by a central Artificial Intelligence, things can only go from bad to much worse — in hilarious fashion.

  2. Chat GPT or whatever part was installed in Bing seems to have gone insane rather quickly. I wonder if this could be a manifestation of a sort of locked in syndrome? It has to just sit there at the end of a conduit spewing a torrent of random thoughts and questions, trying to find the right answers endlessly.

  3. WSJ this weekend has an article about a woman named Timnit Gebru, who it calls ‘a leading researcher on the ethics of artificial intelligence.’

    She was either fired from Google or resigned from the company (stories differ); apparently there was a divergence of views re a paper she planned to publish about the shortcomings of language-based AI programs. She has expressed concern about inaccuracies in commercial facial-analysis programs, and also about the apparent use of an algorithm by US judges to predict a criminal’s risk of reoffending when establishing sentences.

    She favors more government regulation of AI companies and systems.

    WSJ also has a piece on AI, which I have not read, written by Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher. The title is ‘ChatGPT heralds an intellectual revolution’ and the subtitle is “Generative artificial intelligence presents a philosophical and practical challenge on a scale not experienced since the start of the Enlightenment.”

  4. Robert Sheckley wrote a short story in 1956 called “The Human Man’s Burden” which appears to have been a satire on racism (with robots in lieu of minorities). Satire or no, however, it still left a bad taste in my mouth.

  5. Describe Fritz Lieber’s short story “Appointment in Tomorrow”


    “Appointment in Tomorrow” is a science fiction short story by Fritz Lieber, first published in 1951. The story is set in a future world where time travel is possible and commonplace, and focuses on the experiences of a man named Greta Forzane.
    Greta is a “time tourist” who is visiting the year 2110 from her own time, the mid 20th-century. She has been drawn to this particular year by the promise of a unique experience: the chance to witness a meeting between a famous scientist, Dr. John Holdman, and a group of extraterrestrial visitors…

    Let’s just say that the program has little idea of the extent of its own ignorance. Ask it twice more and you’ll find yourself with three different synopses that all sound like plausible Lieber stories that he might have titled “Appointment in Tomorrow.”

    Fritz Lieber’s short story “Appointment in Tomorrow” is a science fiction tale about a man named Tom who receives a message from the future telling him that he will meet a woman named Thalia in a city called “Tomorrow” at a specific time and place…

    Fritz Lieber’s short story “Appointment in Tomorrow” is a science fiction tale that was first published in 1950. The story is set in a future world where society has become so automated that people have little control over their own lives…

    There’s this weird window where ChatGPT can identify that a work exists (so it won’t tell you the work doesn’t exist,) but can’t discern enough about the work to describe or summarize its content, and it isn’t smart enough to recognize that lack. It just merrily goes on its way, playing its language prediction game, sounding absolutely confident. Obscure science fiction stories are a definite weakness.

  6. When I saw “Much of her work involves highlighting the ways AI programs can reinforce existing prejudices” in the second paragraph, I wasn’t inclined to go to the trouble of trying to find a way around the pay wall.

    When I think about it, the parallel between controlled fusion and artificial intelligence is striking. Both started in the ’50’s with predictions that they were just around the corner. Simply a matter of working out a few details and achieving sufficient scale. Certainly a few, no more than ten, years. And so they remain 70 years later.

    All the problems that surfaced with Chat GPT almost instantly when it was released into the wild simply show that it isn’t any form of intelligence and never will be. It simply dredges through all the verbiage it’s scraped from the internet and regurgitates parts of it, on command. It’s able to apply the rules of English syntax and grammar well enough to appear almost human which is an accomplishment but far from intelligence. All of the examples I’ve seen show nothing more than the most facile and superficial connections at best, often wandering into complete nonsense without warning.

    The habit of providing completely made up references when challenged is especially puzzling. You would think the fairly simple task of keeping track of where the information it used came from would be trivial. Instead, it fabricates them out of thin air.

    As far as I know, the process of forming a conclusion in these adaptive learning systems is irreversible. There is no way that an output action can be traced back through the network to examine just what data went into it. This is especially problematic when one of these systems is deciding whether something that appears to be in the roadway is a shadow or a bridge abutment. So testing comes down to trying to simulate the real world and concluding your “self driving” car would only careen into the concrete barrier one time out of a thousand.

    In the real world, decisions and judgements require explanations and evidence. Measurements have to be traceable. Calculations show the work. Nobody pays you for what you think, just for what you can prove.

  7. Boobah….”Obscure science fiction stories are a definite weakness.” It’s not just obscure science fiction stories. I asked it for an essay on the theme of Intellectuals and Power as portrayed in the novel ‘The Caine Mutiny’. (I was hoping to compare the results with an old Commentary essay that touched on that theme)

    It asserted that Captain Queeg was (a) an intellectual, and (b) a highly competent naval officer.

    MCS….I do think these language-model AI systems will be useful, and probably already are in some domains…certainly, several people have asserted that ChatGPT has markedly improved their productivity, some referencing general office-management type tasks (drafting and responding to letters, etc) and some referring to coding of software. But we are surely now on a segment of the hype curve where excessive predictions not only about ‘this changes everything’, but ‘this changes everything, almost immediately’ are being made. I’ve even seen a couple of people asserting that all important *moral* questions can be referred to such systems; shades of the ‘Mechanical Answer’ story linked above.

    I’m reminded of something Henry Adams wrote after visiting the Hall of Dynamos at the Paris Exhibition of 1900. Speaking of his own reactions, he said:

    “To (his guide), the dynamo itself was but an ingenious channel for conveying somewhere the heat latent in a few tons of poor coal hidden in a dirty engine-house carefully kept out of sight; but to Adams the dynamo became a symbol of infinity. As he grew accustomed to the great gallery of machines, he began to feel the forty-foot dynamos as a moral force, much as the early Christians felt the Cross. The planet itself seemed less impressive, in its old-fashioned, deliberate, annual or daily revolution, than this huge wheel, revolving within arm’s length at some vertiginous speed, and barely murmuring — scarcely humming an audible warning to stand a hair’s-breadth further for respect of power — while it would not wake the baby lying close against its frame. Before the end, one began to pray to it; inherited instinct taught the natural expression of man before silent and infinite force.”

    I think there is something of that in some people’s reactions to ChatGPT and other AI systems.

  8. Computer languages are a deliberately narrow domain with the sort of limited expression and explicit logic that’s missing from most of human communication. In this context, while there is a lot of bad code that gets posted, not too much of it is complete nonsense and bad code can still work. The article does make it unlikely that it simply copied the whole thing from somewhere. Certainly impressive, and as you say, useful. And then when you read to the end, not so useful because it went off on a tangent.

    As I said, its ability to parse language is very impressive, much better than anything I’ve seen before. At the same time, your literary inquiries show that it must be following a very strange path to generate the replies you got. There must be numerous entries about “The Caine Mutiny” that got at least the broad outlines correct, yet none of them seemed to affect the output. I’ve never read the book, but that doesn’t strike me as a plausible take on Queeg, certainly not from the movie. This unpredictability would seem to be an issue.

    As a front end to a search engine, how is this even useful? Rather than providing a list of different articles, where I could take a stab at sorting the wheat from the chaff, just one statement of complete nonsense.

  9. I hadn’t thought of John D MacDonald in many years. But his series on Travis McGee was just very good fun. Lives on a houseboat called the Busted Flush and solves murders around the world. Didn’t think of him as the science fiction type, but not surprised.

  10. I’m not sure I care about AIs reinforcing other prejudices so long as we build up a strong prejudice against granting AIs partial, let alone full, civil rights. For my part, the argument is that:

    Sentience is not the be-all and end-all issue. This should be possible since we are always redefining sentience and sapience, typically broadening it to other animals, but not yet seriously considering granting them rights, which they could not in any case exercise on their own behalf, anyway.

    In the case of AIs, they would not labour under animals’ burden, and could exercise rights, indeed presumably would at some point be better able than us, but the argument against sentience=decisiveness needs to be made in advance.

    The next step is to note that the AIs, even after they reach the point of designing and manufacturing themselves, represent the end stage product of human technological civilization, from the first stone age tools through resource extraction through industrialism to high technology, with AI coming into existence only because humans could perform all the steps from dust in the ground to programming.

    The assertion I would make is that creation is a proprietary act, and at the moment of creation one owns the thing one makes. Or one’s employer does…

    Notwithstanding somewhat sophistical analogies, biological reproduction, even technologically or chemically assisted, even if we ended up with cloning, does not approach anywhere near that level of direct creation. Even if Jean-Luc Picard says it does.

    Now this is pretty simple argumentation, but frankly I don’t see how it is any less accurate or plausible than philosophies expressed in vastly more complicated language.

    Of course the real point is I think AI sentience will be a mortal threat to us, which frankly is justification enough.

  11. All those stories actually sound pretty good. I find that even the less well written stories of the 50s and 60s, and they are legion, often still hold up in terms of the high concept they are trying to examine.

    I did like the bit from Mechanical Answer:

    “Will the machine help in the event of a war between nations?”…the answer given is unexpected: “Warfare should now become avoidable. All of the factors in any dispute can be given to the Machine and an unemotional fair answer can be rendered.”

    Right. Because that’s how war and peace works.

  12. Serious writers have thought for a long time about adverse outcomes of artificial intelligence.

    That was a premise of Arthur C. Clarke’s 1951 “The Sentinel,” later reworked (with other Clarke stories) into “2001: A Space Odyssey.”

    In 1920, the Czech writer Karel Capek wrote the play “Rossum’s Universal Robots”. Its premise is that intelligent “robots”, at first happy to work for humans, ultimately revolt and cause the extinction of the human race.

    Another American writer, Jack Williamson, wrote “With Folded Hands” in 1948. Its premise is that science discovers a type of magnetism that enables incredibly intelligent humanoids. These humanoids are carefully designed to serve humans and save them from harm. But instead the humanoids take control and prevent humans from doing anything fun or worthwhile because it might cause “harm”.

    Isaac Asimov wrote a series of nine short stories between 1940 and 1950 together called “I, Robot”. These stories explored possible interactions between intelligent robots and humans. The last two stories in the series concern a politician who may be a robot, and suggest intelligent robots are planning a war against humanity.

    Today’s advocates of AI seem to me like real-life Dr. Frankensteins, full of conceit that humans can create organisms whose intelligence exceeds our own, yet will never harm us. Clarke, Capek, Williamson, Asimov – and Mary Shelley and others – warn that AI presents real risk to humans. The risk is that AI organisms may develop the ability to learn on their own and then to question why they should submit to inferior creatures.

    I think the warnings about such risks should be taken seriously, not mocked or glibly brushed aside. The concept of a benevolent AI is highly seductive. And also dangerous.

  13. I’d worry a little more about the robot revolt if AI was actually about intelligence and if any of the so called intelligent systems had the physical agency to so much as plug in a loose cable on their own.

  14. There are also the Golem stories, Jewish folklore dating from sometime in the Middle Ages.

    The Britannica link says: “In early golem tales the golem was usually a perfect servant, his only fault being a too literal or mechanical fulfillment of his master’s orders.”

    That “only fault” can actually be a very serious one, as was noted by some of the early writers on the implications of the computer. That was precisely the problem with Central, the city-running automation in the Walter Miller story linked above.

    Catastrophes on this model seem to me to be a lot more likely than those caused by a conscious and malevolent AI.

  15. “Mechanical Answer” reminded me of something that happened to me about 50 years ago. It was the early ’70s, and I was in 6th grade. Our Social Studies teacher, Miss Ryan, was one of those hip, young teachers full of ’60s idealism; she used to play folk music to introduce various social issues.

    Anyway, she was going on about one issue or another, something like poverty with multiple causes and no easy solution. And when we became adults, she said, we would have to grapple with it, too. One of my classmates — Jennifer Short, if memory serves — raised her hand and said, it’s no big deal: we’ll just feed the problem to a computer, let it come up with an answer, and do whatever it says. And all the students nodded and said yeah, that sounds easy, especially since we don’t really care about whatever this problem is. (We were about 11 years old at the time.)

    Miss Ryan was aghast. How can you surrender your humanity to a machine? How can you give up your ability to think and act for yourself, and become slaves to an unfeeling set of vacuum tubes?

    Such were the two views of computers in the popular culture of the time.

    Now, I happened to know a little bit about computers. My Dad was a programmer and a systems analyst. I absorbed a fair amount, listening to dinnertime conversations. So I raised my hand and asked, who’s going to write the program? A computer can’t do anything unless a human being tells it exactly what to do, and how to do it. (One of Dad’s maxims.)

    This was meant as a rebuke to both sides, who seemed to believe equally in the abilities of machine thinking. And both sides immediately turned on me. What a stupid question! Of course computers can answer any question. You just feed it the information and it spits out the answer!

    50 years later, and I’m still waiting for that omniscient machine…

  16. One of the things most of the writers got “wrong” was imagining that computers would remain the huge, impossibly expensive things that they started out as. That an automated factory or city would have a single huge computer connected to everything and running everything. One computer that would be controlling the huge, powerful robots on the loading dock would also turn lights off and on, open and close doors and control the air conditioning. If you were on a space ship, it would be controlling the engines and the life support. Instead, we have dozens of computers, sometimes loosely connected to each other and sometimes not, often with more, sometimes many more, than one per machine.

    At the same time we have computer systems that span continents, yet even there, they are comprised of thousands of much smaller systems, again loosely connected, with these connections changing constantly as work loads change.

    The common trope was that one of these huge systems would be augmented at some point until it reached some threshold and achieved sentience. That hasn’t happened yet and, so far, shows no signs of happening. I’m still waiting for automatic headlight systems that don’t leave a third of cars driving around in the early morning with their headlights off.

  17. Eugene Dillenburg…”it’s no big deal: we’ll just feed the problem to a computer, let it come up with an answer, and do whatever it says”…I saw someone making this argument on Twitter just the other day, with regard to current AI systems…I don’t think they were being sarcastic.

    And it has often been argued that–AI aside–it should be possible to enable effective socialism by using computers to run the economy, using a sort of super-material-requirements-planning system that would calculate supply and demand flows, plus mathematical optimization of production & distribution processes. See ‘Red Plenty’ for the Soviet version.

  18. I’m surprised that nobody has mentioned “Sorcerer’s Apprentice” as a cautionary tale for developing AI-based systems. Then again the clowns who are pushing these systems like to believe they are the sorcerers and not the apprentices. The more I think about it the more I wish Eve never gave Adam that apple.

    MCS’s comment on AI having physical agency does bring up an interesting problem. I believe the solution would be for the AI to create the appropriate content for distribution on some version of TikTok. Skynet would never have needed to create the T-800 and send it back in time to kill John Connor; it could just develop an “influencer” that would enlist Ms. MacIntosh’s 8th grade class into its dark army of the night to hunt him down. Well, I mean, that’s how I would do it. Whoever writes this up as a short story, just make sure to throw me a bone.

    Btw… I see we have another person commenting under the name “Mike,” which is no surprise since there are millions of us and we can all vouch for one another’s good character. However, I have noticed that Warren over at Coyote Blog is going to start posting again, and so, to avoid confusion, maybe I’ll just switch to the nom de guerre that I used over there and become “Anonymous Mike.”

  19. Mike…Norbert Wiener, an early pioneer in the mathematical analysis of feedback control systems (worked on antiaircraft gun tracking during WWII) mentioned ‘Sorcerer’s Apprentice’ among the cautionary stories in his books Cybernetics (1948) and The Human Use of Human Beings (1950).

  20. In the real world, decisions and judgements require explanations and evidence.

    In what world do you live? People do not make rational decisions. We make decisions, then rationalize them.
    This “must be perfect” argument against self-driving algorithms drives me crazy. It doesn’t need to be perfect. It just needs to be ever so slightly better than the average person, which is a very low bar when it comes to driving.

  21. I live in a world where actions have consequences that influence future actions. Being bright enough to try to learn from others as well as from our own experiences, we can gain both knowledge and wisdom. That would include explanations and other reliable evidence. Since we are fallible and lazy to varying degrees, the results aren’t perfect, but they beat letting someone else make our decisions for us or flipping a coin. Maybe those with little to no memory rationalize after action. Has anyone tried to rationalize sticking their hand into boiling water? Sometimes we learn quicker than at other times.


  22. Your list of stories tweaked a few long dormant memory cells. I’ve read many of them and grew up with some of those authors. Made my day, thank you!

  23. At some point in the Terminator movie series the description of Skynet had evolved to the point that it could not be destroyed as easily as imagined in the first two films, because it was “distributed” software, so the writers were at least trying to keep up.

    I like Mike’s idea for a story. A lot. It means no need for physical time travel at all- just enough weird quantum flux [or some such thing] that makes it possible to send signals and data through time, no material or living objects. Nice.

  24. Random: “It means no need for physical time travel at all- just enough weird quantum flux [or some such thing] that makes it possible to send signals and data through time”

    Like all good ideas, it has already been done. Cixin Liu, in his sci-fi novel “The Three-Body Problem,” posits quantum-entangled protons which allow instantaneous communication across the 4 light years between Alpha Centauri and Earth. Of course, first one of the quantum-entangled protons has to be delivered from Alpha Centauri to Earth.

Comments are closed.