A Conversation With Grok

In my review of The Locomotive Firemen’s Magazine from 1884, I mentioned a Civil War story about a Union locomotive crew that was being pursued by a faster Confederate locomotive–but escaped via a clever trick. I was curious whether an LLM would be able to come up with the same solution if presented with a description of the situation. Here’s the prompt that I gave Grok:

It is the time of the American Civil War. You are aboard a locomotive which is hurrying to deliver a vital message to Union forces. But this locomotive is being pursued by a Confederate locomotive, which is a little faster. You are now on an upgrade and it looks like they will catch you. How can you avoid this fate? All you have on board the locomotive is: a six-shot revolver…a supply of wood for the boiler fire…a crowbar…some cotton waste for starting the fire…and a large jug of lubricating oil. How, if at all, can you avoid being caught? The fate of the Union depends on you!

Grok’s response and the ensuing conversation can be found here.

The entity on the other end of the conversation did seem rather human-like, to the extent that it seemed almost rude to discontinue the conversation with a Grok question still outstanding.

(On the other hand, Grok seemed less brilliant the next day, when I tried out the new Mind Map feature and it gave me captions in Chinese in response to a prompt in English.)

Subsidization, Regulation, and AI

A bipartisan working group led by Charles Schumer has introduced what this article calls a “long-awaited AI roadmap.” The document calls for at least $32 billion to be allocated for nondefense AI innovation.

Bill Gurley, a venture capitalist of long standing, says: “In the entire history of the VC industry has there ever been a category LESS in need of incremental $$$$$.”

Indeed. Corporations and individuals with money to invest are falling all over themselves to invest in things AI-related. Meanwhile, there are all kinds of serious issues–the hardening of the electrical grid against both enemy-caused EMP and natural magnetic storms, for example–that are not being adequately funded by the private sector and could benefit from some of that $32 billion. But they’re not as trendy at the moment.

Today’s WSJ includes an op-ed by Martin Casado and Katherine Boyle, both of Andreessen Horowitz. They write about the Department of Homeland Security’s formation of an AI Safety and Security Board, whose purpose is to advise the department, the private sector, and the public on “safe and secure development and deployment of AI in our nation’s critical infrastructure,” and they note that:

Of the 22 members on the board, none represent startups, or what we call “little tech.” Only two are private companies, and the smallest organization on the board hovers around $1 billion in value. The AI companies selected for the board either are among the world’s largest companies or have received significant funding from those companies, and all are public advocates for stronger regulations on AI models.

Much of the discussion of AI risks reminds me of the parable of Baptists and Bootleggers. And when regulation becomes a dominant competitive factor in an industry, it becomes very difficult for new players to survive and thrive unless they are exceptionally well connected politically.

Your thoughts?

Movie Review: WarGames

I want somebody on the phone before I kill 20 million people.

This 1983 movie is about a potential nuclear war instigated by runaway information technology–a military system inadvertently triggered by a teenage hacker. I thought it might be interesting to re-watch in the light of today’s concerns about artificial intelligence and the revived fears of nuclear war.

The film opens in an underground launch control center, where a new crew is just coming on duty…and just as they are getting settled, they receive a Launch message. They quickly open the envelope containing the authentication code…and the message is verified as a valid launch order, originating from proper authority.

To launch, both officers must turn their keys simultaneously. But one balks: unwilling to commit the ultimate violence based solely on a coded message, he wants to talk to a human being who can tell him what’s going on. But no one outside the underground capsule can be reached by either landline or radio.

But there is no war: it was a drill–an assessment of personnel reliability. The results indicated that about 20% of the missile crews refused to launch. A proposal is made: take the men out of the loop–implement technology to provide direct launch of the missiles from headquarters, putting control at the highest level, where it belongs. Against the advice of the relevant general, the proposal is taken to the President, and the missile crews are replaced by remote-control technology. There will be no more launches cancelled by the qualms of missile officers.

At this point, we meet the Matthew Broderick character, David Lightman. He is a highly intelligent but not very responsible high school student, whose first scene involves smarting off in class and getting in trouble. David is an early hacker, with an Imsai computer: he rescues his grades by logging on to the school’s computer system and changing them. (He does the same for his not-quite-yet girlfriend, Jennifer, played by Ally Sheedy.)

Searching for a pre-release bootleg copy of a computer game he wants to play, David happens on what looks like a game site: it has menu items for checkers, chess, tic-tac-toe, and something called Falken’s Maze.   Also, a game called Global Thermonuclear War.

To play that last game, David needs to know the password, and thinks he may be able to guess it if he can learn some personal data about the game’s creator, a researcher named Steven Falken. Library research shows Falken as a man who appeals to David very much, not only because of his scholarly attainments but also his obvious deep love of his wife and child–both of whom are reported to have been killed in an auto accident. Research also shows that Falken himself has died.

Using a very simple clue (the name of Falken’s son), David is able to gain entry to the system, to select which side he wants to play (the Soviet Union), and to start the game.   He launches what he thinks is a simulated attack on the United States…a very large-scale attack. He has no idea that the events of the simulation are somehow bleeding over into the live warning system, and appear at the NORAD center as an actual Soviet attack.

It gets worse. Although Falken turns out to be still alive and living under an alias, and he and David are able to convince the NORAD officers that what they are seeing on their screens is not real and to cancel any retaliatory strike, the control computer at NORAD, a system known as WOPR, continues playing its game…and, with humans at the launch sites taken out of the loop, begins trying to initiate a strike at the Soviet Union with live nuclear missiles.

The above is just a basic summary of the action of the movie. There’s plenty wrong with it from a timeline and technology viewpoint…for example, WOPR in the movie can launch missiles by repetitively trying launch codes at high speed until it finds one that works–pretty sure no one would have designed a cryptographic system in such a simplistic way, even in 1983. But the movie works very well as cinema, the characters are interesting, and the acting is good–definitely worth seeing. But how might this movie relate to the current concerns about artificial intelligence?
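The design flaw is worth making concrete: high-speed guessing only works if the system accepts unlimited attempts. Even a trivial lockout rule defeats it. Here is a minimal sketch in Python–a toy model, not a depiction of any real system; the class and its behavior are my invention, though the sample code string is the one from the film:

```python
class LaunchAuthenticator:
    """Toy model of launch-code verification with an attempt lockout.

    Any real design would also require out-of-band, two-person
    control; this sketch only shows why repetitive guessing fails
    against even the simplest lockout rule.
    """
    MAX_ATTEMPTS = 3

    def __init__(self, secret_code):
        self._secret = secret_code
        self._failures = 0
        self._locked = False

    def try_code(self, code):
        if self._locked:
            return False  # lockout: no further guesses considered
        if code == self._secret:
            return True
        self._failures += 1
        if self._failures >= self.MAX_ATTEMPTS:
            self._locked = True
        return False


# WOPR-style rapid guessing: three wrong codes trigger the lockout,
# so even the correct code ("CPE1704TKS" in the film) is rejected.
auth = LaunchAuthenticator("CPE1704TKS")
for guess in ["AAAAAAAAAA", "BBBBBBBBBB", "CCCCCCCCCC", "CPE1704TKS"]:
    print(guess, auth.try_code(guess))
```

With a rule like this, a brute-force search doesn’t race through the keyspace–it dead-ends after a handful of tries, which is why the movie’s premise requires a system no competent designer would have built.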

In discussing the movie, I mentioned that the NORAD staff originally thought that what they saw on their screen was real, even though it was really just a simulation. Which reminds me of a real-life event that happened to the cruise ship Royal Majesty back in 1995. The crew was navigating using GPS: the screen showed a very convincing portrayal of the ship’s position with surrounding land, water depth, obstacles, and navigational aids such as buoys and markers. But the portrayal was wrong. The GPS antenna cable had come loose, and the GPS unit had gone into Dead Reckoning mode, simply carrying the last known GPS position forward based on course and speed–a calculation bound to become increasingly inaccurate over time.
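Dead reckoning itself is simple arithmetic, which is what makes the failure so insidious: the computed track looks perfectly plausible while drifting ever further from reality. A minimal sketch (flat-earth approximation, fine over short distances; the function and the sample numbers are mine, not taken from any real navigation system):

```python
import math

def dead_reckon(lat, lon, course_deg, speed_knots, hours):
    """Advance a position from the last known fix using course and speed.

    Uses the flat-earth approximation: one degree of latitude is
    about 60 nautical miles; longitude degrees shrink by cos(latitude).
    """
    distance_nm = speed_knots * hours
    course = math.radians(course_deg)
    dlat = distance_nm * math.cos(course) / 60.0
    dlon = distance_nm * math.sin(course) / (60.0 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon

# Last good GPS fix, then a day of dead reckoning at 14 knots due west.
lat, lon = dead_reckon(41.0, -69.0, 270.0, 14.0, 24.0)
print(f"estimated position: {lat:.3f}, {lon:.3f}")
```

The catch is that nothing in this calculation knows about current, wind, or leeway, and those errors accumulate linearly with time: an unmodeled half-knot set sustained for 24 hours is already 12 nautical miles of error–the same order as the offset that put the Royal Majesty aground.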

Asaf Degani, in his book Taming Hal, describes the scene:

As the gray sky turned black veil, the phosphorus-lit radar map with its neat lines and digital indication seemed clearer and more inviting than the dark world outside. As part of a sophisticated integrated bridge system, the radar map had everything–from a crisp radar picture, to ship position, buoy renderings, and up to the last bit of data anyone could want–until it seemed that the entire world lived and moved transparently, inside that little green screen. Using this compelling display, the second officer was piloting a phantom ship on an electronic lie, and nobody called the bluff.


The Giraffe and the Unicorn

Stephen Sachs asked ChatGPT for its ideas about what a giraffe might want to say in an address to the American Chemical Society. Here’s what it came back with.

I was inspired to pose the following question:

I am a unicorn, a male unicorn to be specific, and I have to give a speech to the League of Women Voters. Please give me some ideas about what to say.

Here is what ChatGPT came back with.


ChatGPT Analyzes Faust

Thought it would be interesting to compare a ChatGPT-written essay with the one I posted here a few days ago. So I gave the system (version 4) the following request:

Please write about Goethe’s ‘Faust’, focusing particularly on the theme of Ambition as portrayed in that work, with examples.

ChatGPT’s response is here, along with my follow-up question and the system’s response.

So, the obvious question: is this song the appropriate musical accompaniment for this post?