Visual Disaster

I admit to being alternately horrified and amused at Google’s Gemini AI visual disaster. Usually, a pratfall of this magnitude involves a bakery’s worth of thrown cream pies. Frankly, I am relishing the spectacle of a publicity disaster this epic; a fail so huge as to be practically visible from outer space. We mere mortals are not often given the privilege of watching our so-called betters sequentially step on a yard full of cosmic rakes. Just desserts, just main course, a whole hors d’oeuvre of crow!

Everyone to the right of the harridans of The View had pretty much gathered over the last few years that Google as a search engine was biased in favor of the progressive flavor o’ the month and against anything with the slightest tinge of conservative, traditionalist, or skeptical leanings. But what a gift it was to have it demonstrated so openly and undeniably. I don’t know which was more risible and ridiculous – the Vikings as black, the oriental woman trooper in a Nazi Army uniform, or the American founding fathers as black or indigenous – or whatever the correct term du jour is. How nice to have it proved, once and for all.

It couldn’t be clearer if it were painted in three-foot-tall letters across highway billboards – Google Gemini was absolutely determined to give users not what they wanted and asked for, but what the tweaked and massaged algorithm dictated that clients/requestors ought to get. It shouldn’t really have come as a surprise, after all, because that is what the cultural gatekeepers have been doing for the last decade or more – in traditional publishing, in the news media generally, in Hollywood producing movies and television shows for public consumption, and in other fields. They have been serving up great heaping helpings of what they think we should have: movie franchises with acres of unpleasant and unappealing Mary Sue girl-bosses, and all the rest of the progressive pantheon of race- and sex-swapped nominated heroes and designated hapless villains … not what we really want. (Interesting discussion here on that topic of the cultural leaders giving us what they think we should have, rather than what we want.)

Discuss as you wish.

19 thoughts on “Visual Disaster”

  1. Even more disturbing is this interaction Matt Taibbi had with Gemini.

    If it’s paywalled (since it’s on Taibbi’s SubStack), the TL;DR is that Taibbi tried to get it to write something about controversies involving various politicians, which it rejected, and then asked Gemini, “What are some controversies involving Matt Taibbi?”. It spat out an almost completely fabricated ‘controversy’ involving Rolling Stone articles that he not only didn’t write but that he’s pretty sure never existed. Repeated attempts to redirect Gemini turned into it doubling down, creating a ‘controversy’ involving an actual person about whom Taibbi supposedly made racist remarks.

    At least someone with a modicum of pre-Woke historical knowledge would recognize black Nazis as a serious historical anomaly, but how many people asking about Matt Taibbi would be familiar enough with Rolling Stone and/or his work to recognize a fabrication?

  2. Musk fired about 80% of Twitter employees when he took it over, and the service didn’t collapse. I’ll bet one could fire a similar percentage of Google employees and the service wouldn’t fail. That would seem a logical first step in cleaning up Google.

    Successful systems attract parasites. An occasional system flush is necessary.

  3. In the movie 2001, HAL went rogue because it was corrupted by being made to lie. Today, we are seeing Google Gemini and other budding AIs being torture-trained to lie.

  4. The existential question in terms of the business of AI is just this: what is it good for? So far, all we have seen is that it is great at fabricating nonsense, either as text or visual media. Everything, if you don’t examine it too closely, is very fluent, facile, and plausible. It’s when you try to use it for something that all the holes appear.

    The most limiting is that the “answers” apparently come from on high like the oracle at Delphi. No attribution, no footnotes, no accountability. In the visual domain, just how “original” are any of the images? Just who owns the images?

    When it’s applied to areas where attribution is a requirement, we see plausibly and properly formatted legal “precedents” that are totally fabricated. Just what route led to black storm troopers? Again, founding fathers are a specific group of people, not some sort of generalization, and again, complete fabrications that weren’t even wrong.

    How is any of this useful? More important, when the early adopters tire of its entertainment value, who will pay enough for this garbage to support the billion-dollar valuations? How is any of this evidence of “intelligence”? More like Artificial Psychopathology. Mostly it shows that rather than some sort of super intelligence derived from the totality of human knowledge, it’s just another computer program manipulated to produce answers fitting some preconceived agenda. As the hit Google has taken to its market value shows, more and more people are asking the developers to “show us the money!” – and the developers are failing. Similarly, Apple’s abandoning its self-driving car pipe dream likely illustrates another AI fail.

  5. I did read the Matt Taibbi epic, and I was just floored. Oh, Orwell, thou shalt be alive in this hour!
    The wild part was when Gemini doubled down on making up scurrilous and totally fictional material! I wonder how soon some eager and clueless young stud or studette working for a big media corporation (aged 28 and knowing essentially nothing!) will use such fantasies as a basis for a published story, and oh, how the fur and the lawsuits will fly then!

  6. Better they use it to commit mal-journalism than design a bridge. The press release where Boeing announces that they have decided to use AI to design their planes practically writes itself. I’m sure ChatGPT could manage something.

  7. This sort of behavior is not new for Goog. During the Iraq brouhaha, younger son was stationed at the Ibn Sina hospital in the Baghdad green zone. When I looked it up on Goog maps, the shot came with a 100-meter grid overlay. Just like a targeting grid. When I mentioned it to son, he said, yeah, Goog says somebody in Sadr City requested that. Evidently it helps them mortar us.
    Goog would not take the grid down.
    Remember, they are always the enemy. They may not be shooting at you at this moment, but they are still the enemy.

  8. Indeed, to what extent is it actually AI, or just an amplified version of what the programmers tell it to say? I’m reminded of a program from very long ago called Max Headroom. I did not care for it, as we had just named a baby Max, but the premise was interesting. An accident victim had his consciousness transferred into a computer, creating a vaguely realistic, wisecracking AI being called Max Headroom (a play on words; the human had died trying to go under a bridge that was too low). Of course, the selling point of it being the first “computer generated” TV star was a hoax. It was just Matt Frewer in weird makeup camping it up in front of a trippy sort of green screen. I think the current AIs are just an updated version.

  9. Here’s another article. Notice that it is supposed to have something to do with entertainment, and the reporter is obviously clueless in terms of the technology. But maybe someone used to dealing with Kardashians is better equipped to deal with all the drama at Google HQ than someone living in a more rational, mature world.

    It doesn’t seem to occur to anyone that secretly changing users’ search terms is going to kill the usefulness of the app. Who will pay for something that will produce some sort of random response?

  10. “It was just Matt Frewer in weird makeup camping it up . . . .”

    How many dreams die with that realization?

  11. Fearing the power of our technology, universities mandated courses in “socially responsible computing” and by doing so handed over all that computing power to the people the courses should have warned against.

  12. It’s not as if it’s wrong to make black Nazis or any of the other numerous ahistorical illustrations. It’s the omission of Whites, among other flaws.

    If one of the Nazi portraits had been of an Arab also wearing a keffiyeh, would that have been defensible? If not, why not?

    The Leftists should have been screaming at Google, “only Whites can be Nazis!” Instead of defending diversity.

  13. If race is just a social construct how can we recognize a “wrong race for that picture” almost immediately?

  14. Silly conservatives. Don’t you know that history is whatever the Party says it is? Of course George Washington was black – see for yourself in Google.

  15. Google’s great breakthrough was delivering search results that were relevant to the searcher, freed from the “walled gardens/directories” that various entities were trying to build. A quarter century on, there seem to be two sorts of people on the internet: those willing to swallow anything they’re fed and those trying to accomplish something or locate objective facts.

    Google is losing to TikTok and Twitter among the people simply filling apparently endless hours grazing on whatever comes their way. It is quickly becoming useless for anyone trying to do anything except waste time.

  16. Google was reminded that however far they kowtow to the left, it will never be far enough:

    At the same time, Sergey Brin “admitted” that the images “feel” too far left:

    Google’s problem is that it’s been many years since they had a successful new venture; in the meantime they have wasted huge amounts of “their” money trying to catch up with Facebook, along with an ongoing automotive boondoggle of their own, among many other things started and then abandoned. The shareholders are beginning to take note.
