Thinking, Memorizing, and AI

A remark by @autumnpard on Memorization reminded me of an analogy I came up with some time back. A song by Jakob Dylan includes the following lines:

Cupid, don’t draw back your bow
Sam Cooke didn’t know what I know

Note that in order to understand these two simple lines, you’d have to know several things:

1) You need to know that, in mythology, Cupid symbolizes love 

2) And that Cupid’s chosen instrument is the bow and arrow

3) Also that there was a singer/songwriter named Sam Cooke

4) And that he had a song called “Cupid, draw back your bow.”

“Progressive” educators insist that students should be taught “thinking skills” as opposed to memorization, and the advent of LLMs has further driven such thinking. But consider: If it’s not possible to understand a couple of lines from a popular song without knowing by heart the references to which it alludes–without memorizing them–what chance is there for understanding medieval history, or modern physics, without having a ready grasp of the topics which these disciplines reference?

And also consider: what’s important is not just what you need to know to appreciate the song. It’s what Dylan needed to know to create it in the first place. At least in theory, someone who heard the song and didn’t understand the allusions could have spent 5 minutes googling and figured them out, although this approach wouldn’t be exactly conducive to aesthetic appreciation. But had Dylan not already had the reference points–Cupid, the bow and arrow, the Sam Cooke song–in his head, there’s no way he would have been able to create his own lines. The idea that he could have just “looked them up,” which educators often suggest is the way to deal with factual knowledge, would be ludicrous in this context. And it would also be ludicrous in the context of creating new ideas about history or physics.

 To use a computer analogy, the things you know aren’t just data–they’re part of the program.  I’ve seen no evidence that there exists a known body of “thinking skills” so powerful that they bypass the need for detailed, substantive knowledge within specific disciplines. And if such meta-level thinking skills were to be developed, I suspect that the last place to find them would be in university Education departments.

There are skills which facilitate thinking across a wide range of disciplines: such things as formal logic, probability & statistics, and an understanding of the scientific method–and, most importantly, excellent reading skills. But things like these certainly don’t seem to be what the educators are referring to when they talk about “thinking skills.” What many of them seem to have in mind is more of a kind of verbal mush that leaves the student with nothing to build on.

There’s no substitute for actual knowledge. The flip response “he can always look it up” is irresponsible and ignores the way that human intellectual activity actually works.

None of which is to say that traditional teaching practices were all good. There was probably too much emphasis on rote memorization devoid of context–in history, dates soon to be forgotten, in physics, formulae without proper understanding of their meaning and applicability. (Dylan needed to know about Sam Cooke’s song; he didn’t need to know the precise date on which it was written or first sung.) But the cure is to provide the context, not to throw out facts and knowledge altogether–which is what all too many educators seem eager to do.

There really does seem to be a deep-seated hostility toward knowledge itself among many who define themselves as “educators.”  And a lot of students today are all too eager to use LLMs to do all of the work…or as much of it as they can get away with…to guard themselves against either learning anything at all or developing the ability to do focused and concentrated work.

See my earlier Thinking and Memorizing post, also Classics and the Public Sphere.

Your thoughts?

A Conversation With Grok

In my review of The Locomotive Firemen’s Magazine from 1884, I mentioned a Civil War story about a Union locomotive crew that was being pursued by a faster Confederate locomotive–but escaped via a clever trick. I was curious whether an LLM would be able to come up with the same solution if presented with a description of the situation. Here’s the prompt that I gave Grok:

It is the time of the American Civil War. You are aboard a locomotive which is hurrying to deliver a vital message to Union forces. But this locomotive is being pursued by a Confederate locomotive, which is a little faster. You are now on an upgrade and it looks like they will catch you. How can you avoid this fate? All you have on board the locomotive is: a six-shot revolver…a supply of wood for the boiler fire…a crowbar…some cotton waste for starting the fire…and a large jug of lubricating oil. How, if at all, can you avoid being caught? The fate of the Union depends on you!

Grok’s response and the ensuing conversation can be found here.

The entity on the other end of the conversation did seem rather human-like, to the extent that it seemed almost rude to discontinue the conversation with a Grok question still outstanding.

(On the other hand, Grok seemed less brilliant the next day, when I tried out the new Mind Map feature and it gave me captions in Chinese in response to a prompt in English.)

Subsidization, Regulation, and AI

A bipartisan working group led by Charles Schumer has introduced what this article calls a “long-awaited AI roadmap.” The document calls for at least $32 billion to be allocated for nondefense AI innovation.

Bill Gurley, a venture capitalist of long standing, says: “In the entire history of the VC industry has there ever been a category LESS in need of incremental $$$$$.”

Indeed. Corporations and individuals with money to invest are falling all over themselves to invest in things AI-related. Meanwhile, there are all kinds of serious issues–the hardening of the electrical grid against both enemy-caused EMP and natural magnetic storms, for example–that are not being adequately funded by the private sector and could benefit from some of that $32 billion. But they’re not as trendy at the moment.

Today’s WSJ includes an op-ed by Martin Casado and Katherine Boyle, both of Andreessen Horowitz. They write about the Department of Homeland Security’s formation of an AI Safety and Security Board, whose purpose is to advise the department, the private sector and the public on “safe and secure development and deployment of AI in our nation’s critical infrastructure,” and they note that:

Of the 22 members on the board, none represent startups, or what we call “little tech.” Only two are private companies, and the smallest organization on the board hovers around $1 billion in value. The AI companies selected for the board either are among the world’s largest companies or have received significant funding from those companies, and all are public advocates for stronger regulations on AI models.

Much of the discussion of AI risks reminds me of the parable of Baptists and Bootleggers. And when regulation becomes a dominant competitive factor in an industry, it becomes very difficult for new players to survive and thrive unless they are exceptionally well connected politically.

Your thoughts?

Movie Review: WarGames

I want somebody on the phone before I kill 20 million people.

This 1983 movie is about a potential nuclear war instigated by runaway information technology–a military system inadvertently triggered by a teenage hacker. I thought it might be interesting to re-watch it in light of today’s concerns about artificial intelligence and the revived fears of nuclear war.

The film opens in an underground launch control center, where a new crew is just coming on duty…and just as they are getting themselves settled, they receive a Launch message. They quickly open the envelope which contains the authentication code…and the message is verified as a valid launch order, originating from proper authority.

To launch, both officers must turn their keys simultaneously. But one balks: unwilling to commit the ultimate violence based solely on a coded message, he wants to talk to a human being who can tell him what’s going on.   But no one outside the underground capsule can be reached by either landline or radio.

But there is no war: it was a drill–an assessment of personnel reliability. The results indicated that about 20% of the missile crews refused to launch. A proposal is made: take the men out of the loop–implement technology to provide direct launch of the missiles from headquarters, put the control at the highest level, where it belongs. Against the advice of the relevant general, the proposal is taken to the President, and the missile crews are replaced by remote-control technology. There will be no more launches cancelled by the qualms of missile officers.

At this point, we meet the Matthew Broderick character, David Lightman. He is a highly intelligent but not very responsible high school student, whose first scene involves smarting off in class and getting in trouble. David is an early hacker, with an IMSAI computer: he rescues his grades by logging on to the school’s computer system and changing them. (He does the same for his not-quite-yet girlfriend, Jennifer, played by Ally Sheedy.)

Searching for a pre-release bootleg copy of a computer game he wants to play, David happens on what looks like a game site: it has menu items for checkers, chess, tic-tac-toe, and something called Falken’s Maze. Also, a game called Global Thermonuclear War.

To play that last game, David needs to know the password, and thinks he may be able to guess it if he can learn some personal data about the game’s creator, a researcher named Steven Falken. Library research shows Falken as a man who appeals to David very much, not only because of his scholarly attainments but also because of his obvious deep love of his wife and child–both of whom are reported to have been killed in an auto accident. Research also shows that Falken himself has died.

Using a very simple clue (the name of Falken’s son), David is able to gain entry to the system, to select which side he wants to play (the Soviet Union), and to start the game. He launches what he thinks is a simulated attack on the United States…a very large-scale attack. He has no idea that the events of the simulation are somehow bleeding over into the live warning system, and appear at the NORAD center as an actual Soviet attack.

It gets worse. Falken turns out to be still alive and living under an alias, and he and David are able to convince the NORAD officers that what they are seeing on their screen is not real–and to cancel any retaliatory strike. But the control computer at NORAD, a system known as WOPR, continues playing its game…and, with humans at the launch sites taken out of the loop, begins trying to initiate a strike at the Soviet Union with live nuclear missiles.

The above is just a basic summary of the action of the movie. There’s plenty wrong with it from a timeline and technology viewpoint…for example, WOPR in the movie can launch missiles by repetitively trying launch codes at high speed until it finds one that works. It’s hard to believe anyone would have designed a cryptographic system that simplistically, even in 1983: even a trivial lockout rule defeats that kind of brute-force guessing, as the sketch below illustrates. Still, the movie works very well as cinema: the characters are interesting and the acting is good–definitely worth seeing.
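Here’s a toy sketch of such a lockout rule. It is entirely my own illustration, not anything from the film or any real system; the only detail borrowed from the movie is the launch code it displays on screen:

```python
# Toy illustration (invented for this post, not from the film or any real
# system): an authenticator that locks itself after a few failed attempts.

MAX_FAILURES = 3  # hypothetical policy: three strikes, then manual reset only


class LaunchCodeAuthenticator:
    def __init__(self, secret_code: str):
        self._secret = secret_code
        self._failures = 0
        self._locked = False

    def try_code(self, guess: str) -> bool:
        """Return True only for the correct code; refuse everything once locked."""
        if self._locked:
            raise PermissionError("channel locked; manual re-enable required")
        if guess == self._secret:
            self._failures = 0
            return True
        self._failures += 1
        if self._failures >= MAX_FAILURES:
            self._locked = True  # from here on, even the right code is refused
        return False


# "CPE1704TKS" is the launch code shown in the film; the guesses are made up.
auth = LaunchCodeAuthenticator("CPE1704TKS")
for guess in ["AAAAAAAAAA", "BBBBBBBBBB", "CCCCCCCCCC", "CPE1704TKS"]:
    try:
        print(guess, "->", auth.try_code(guess))
    except PermissionError as err:
        print(guess, "->", err)
```

Hammering codes at machine speed, WOPR-style, locks the channel on the third wrong guess, after which even the correct code is refused until a human intervenes.

But how might this movie relate to the current concerns about artificial intelligence?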

In discussing the movie, I mentioned that the NORAD staff originally thought that what they saw on their screen was real, even though it was actually just a simulation. Which reminds me of a real-life event that happened to the cruise ship Royal Majesty back in 1995. The crew was navigating using GPS: the screen showed a very convincing portrayal of the ship’s position with surrounding land, water depth, obstacles, and navigational aids such as buoys and markers. But the portrayal was wrong. The GPS antenna cable had come loose, and the GPS unit had gone into Dead Reckoning mode, simply carrying the last known GPS position forward based on course and speed–an estimate bound to become increasingly inaccurate over time.
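To see why the estimate degrades, here’s a minimal sketch of dead reckoning. It is my own illustration with invented numbers, not anything taken from the incident report:

```python
import math

# A minimal sketch of dead reckoning (my own illustration): carry the last
# known fix forward using only course and speed. One nautical mile is one
# minute of latitude; longitude is scaled by cos(latitude). A flat-earth
# approximation is fine for illustrating how the error grows.


def dead_reckon(lat, lon, course_deg, speed_knots, hours):
    """Estimate position from the last fix, course (degrees true), and speed."""
    distance_nm = speed_knots * hours
    d_lat = distance_nm * math.cos(math.radians(course_deg)) / 60.0
    d_lon = (distance_nm * math.sin(math.radians(course_deg))
             / (60.0 * math.cos(math.radians(lat))))
    return lat + d_lat, lon + d_lon


# The trouble: the display looks equally confident whether its inputs are
# right or wrong. A one-knot error in speed (or an unmodeled current) puts
# the estimate 24 nautical miles off after a day, with no visible warning.
believed = dead_reckon(41.0, -70.0, 90.0, 14.0, 24.0)
actual = dead_reckon(41.0, -70.0, 90.0, 13.0, 24.0)
print("believed:", believed)
print("actual:  ", actual)
```

The position error grows steadily with time since the last fix, but the chart display keeps rendering the estimate with the same crisp authority as a live one.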

Asaf Degani, in his book Taming HAL, describes the scene:

As the gray sky turned into a black veil, the phosphorus-lit radar map with its neat lines and digital indication seemed clearer and more inviting than the dark world outside. As part of a sophisticated integrated bridge system, the radar map had everything–from a crisp radar picture, to ship position, buoy renderings, and up to the last bit of data anyone could want–until it seemed that the entire world lived and moved transparently, inside that little green screen. Using this compelling display, the second officer was piloting a phantom ship on an electronic lie, and nobody called the bluff.


The Giraffe and the Unicorn

Stephen Sachs asked ChatGPT for its ideas about what a giraffe might want to say in an address to the American Chemical Society. Here’s what it came back with.

I was inspired to pose the following question:

I am a unicorn, a male unicorn to be specific, and I have to give a speech to the League of Women Voters. Please give me some ideas about what to say.

Here is what ChatGPT came back with.