In my previous post of this series, I remarked that most discussion of the employment effects of robotics/artificial intelligence/etc seems to be lacking in historical perspective…quite a few people seem to believe that the replacement of human labor by machinery is a new thing.
This post will attempt to provide some historical perspective on today’s automation technologies by sketching out some past innovations in the mechanization of work, focusing on “robots,” broadly defined…ie, on technologies that to some degree replace or augment the human mind, eye, and hand, rather than those primarily concerned with replacing human and animal muscular energy…and will discuss some of the political debate over mechanization & jobs that took place from the 1920s through the 1940s.
Throughout most of history, the production of yarn for cloth was an extremely labor-intensive process, done with a device called a distaff, almost always employed by women, and requiring many hours per day to generate a little bit of product. (There even exists a medieval miniature of a woman spinning with the distaff while having sex…whether this is a comment on the burdensomeness of the yarn-making process, or a slam at the love-making skills of medieval men, I’m not sure–probably both.) Eventually, probably around 1400-1500 in most places in Europe, the spinning wheel came into use, improving the productivity of yarn-making by a factor estimated at anywhere from 3:1 to 10:1 or more.
Gutenberg’s printing press was invented somewhere around 1440. I haven’t seen any estimates of its effect on labor productivity, compared with the then-prevailing method of hand copying of manuscripts, but surely it was at least 1000 to 1 or more.
The era from 1700-1850 was marked by tremendous increases in the productivity of the textile trades. The flying shuttle and other advances greatly improved the weaving process; this created a bottleneck in the supply of yarn, which was partly addressed by the invention of the Spinning Jenny–a foot-powered device that could improve the yarn production of one person by 5:1 or better. Power spinning and power looms yielded considerable additional productivity improvements.
An especially interesting device was the Jacquard Loom (1802), which used punched cards to direct the weaving of patterned fabrics. In its initial incarnation, the Jacquard was a hand loom: its productivity did not come from the application of mechanical power but rather from the automation of the complex thread-selection operations previously carried out by a “Draw Boy.”
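The card-driven mechanism translates naturally into modern terms: each card is a row of bits, and each hole selects a warp thread to lift for that pass of the shuttle. A minimal illustrative sketch (the representation is my own, not a description of any actual loom):

```python
# Illustrative only: a Jacquard card chain modeled as rows of bits.
# A hole (1) lifts the corresponding warp thread; no hole (0) leaves it down.
def weave(cards, width):
    """Yield, for each weft pass, the set of lifted warp-thread indices."""
    for card in cards:
        assert len(card) == width, "each card must cover every warp thread"
        yield {i for i, hole in enumerate(card) if hole}

# A trivial four-thread pattern: alternating pairs of lifted threads.
cards = [[1, 0, 0, 1], [0, 1, 1, 0]]
rows = list(weave(cards, 4))  # the thread selections a Draw Boy once made
```

The chain of cards is, in effect, a stored program for the fabric pattern–which is why Hollerith (and, much later, computing generally) is often traced back to Jacquard.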
Turning now to woodworking: in 1818, Blanchard’s Copying Lathe automated the production of complex shapes–a prototype was automatically traced and copied. It was originally intended for making gunstocks, but also served in producing lasts for shoemakers, and I believe also chair and table legs.
Another major advancement in the clothing field was the sewing machine. French inventor Barthelemy Thimonnier invented a machine in 1830, but was driven out of the country by enraged tailors and political instability. The first commercially-successful machines were invented/marketed by Americans Walter Hunt, Elias Howe, and Isaac Singer, and were in common use by the 1850s.
By the late Victorian period the sewing machine had been hailed as the most useful invention of the century, releasing women from the drudgery of endless hours of sewing by hand. Factories sprang up in almost every country in the world to feed the insatiable demand for sewing machines. Germany had over 300 factories, some working 24 hours a day, producing countless numbers of sewing machines.
The beginnings of data communications could be seen in the gold ticker and stock ticker systems created by Edison and others (circa 1870), which relayed prices almost instantaneously and eliminated the jobs of the messenger boys who had previously been the distribution channel for this information. Practical calculating machines also appeared in the 1870s. But the big step forward in mechanized calculation was Hollerith’s punched card system (quite likely inspired in part by the Jacquard), introduced in 1890 and used for the tabulation of that year’s census. These systems were quickly adopted for accounting and record-keeping purposes in a whole range of industries and government functions.
Professor Amy Sue Bix, in her book Inventing Ourselves out of Jobs?, describes the fear of technological unemployment as silent movies were replaced by the ‘talkies’. “Through the early 1920s…local theaters had employed live musicians to provide accompaniment for silent pictures. Small houses featured only a pianist or violinist, but glamorous ‘movie places’ engaged full orchestras.” All these jobs were threatened when Warner Brothers introduced its Vitaphone technology, with prerecorded disks synchronized to projectors. “Unlike other big studios, Warner did not operate its own theater chains and so had to convince local owners to screen their productions. Theater managers would be eager to show sound movies, Harry Warner hoped, since they could save the expense of hiring musicians.”
The American Federation of Musicians mounted a major PR campaign in an attempt to convince the public that ‘living music’ was better than ‘canned sound.’ A Music Defense League was established, with membership reaching 3 million…but the ‘talkies’ remained popular, and the AFM had to admit defeat. A lot of musicians did lose their jobs.
Here’s a new factory for making automobile frames, specifically designed to minimize the need for human labor. The CEO of the company that built it actually said, “We set out to build automobile frames without people.”
At the start of the process, rough steel plates are inspected by electronic sensors, which automatically push aside any that deviate from tolerances. Conveyors take the plates through punching, pressing, assembling, and riveting machines, including a machine that can insert 60 rivets simultaneously in each frame. A set of finishing machines then rinses, dries, spray-paints, and cools the frames. Aside from a few men moving frames between conveyor belts, the floor routine of the plant requires almost no hand labor.
And today’s robotics and artificial-intelligence advances go far beyond automating routine manufacturing labor and take over the kind of cognitive functions once thought to be exclusive to human beings. Here, for example, is a new AI-based system that displaces much of the thought-work which has been required of the people operating railway switch and signal installations:
The NX control machine is in effect the “brain” of the system. It automatically selects the best optional route if the preferred route is occupied. It will allow no conflicting routes to be set up. It eliminates individual lever control of each switch and signal.
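The behavior described–try the preferred route, fall back to the best alternative, and refuse any conflicting setup–is straightforward to sketch in code. This is a hedged illustration, not the actual NX relay logic; all of the data structures and names below are hypothetical:

```python
# Hedged sketch (not actual NX logic): choose the preferred route if clear,
# else the best available alternative, and never set up a conflicting route.
def select_route(preferred, alternatives, occupied, locked_switches):
    """Return the first route whose track sections are clear and whose
    switch positions don't conflict with routes already locked in."""
    for route in [preferred] + alternatives:
        if any(section in occupied for section in route["sections"]):
            continue  # a section of this route is occupied -- try the next
        if any(locked_switches.get(sw, pos) != pos
               for sw, pos in route["switches"].items()):
            continue  # would move a switch already locked by another route
        return route
    return None  # no safe route currently available

r1 = {"sections": ["A", "B"], "switches": {"S1": "normal"}}
r2 = {"sections": ["A", "C"], "switches": {"S1": "reverse"}}
# Preferred route r1 is blocked at section B, so the machine picks r2.
chosen = select_route(r1, [r2], occupied={"B"}, locked_switches={})
```

The point of the sketch is that the “brain” is a constraint check plus a priority list: exactly the kind of thought-work the human operator used to do lever by lever.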
Pretty scary from the standpoint of maintaining anything like full employment, don’t you think?
As smartphones become more powerful and more connected, subtle but powerful phenomena can go unnoticed. For years I either walked to work or took public transit, but now in the Pacific Northwest I commute by car. Since the surroundings are new, I pay much more attention to what is going on than I used to in Chicago.
In Chicago, there aren’t a lot of opportunities to optimize your travel if you are driving on major roads such as I-290 or the Dan Ryan. Unless you really, really know what you are doing, it is not advisable to get off the highway in many Chicago neighborhoods and follow your mobile navigation blindly. Thus in Chicago, bad traffic pretty much looked like this: a speed of zero, stuck crawling ahead.
The first generation of car navigation tools told you how to get somewhere by the most efficient route, taking standard traffic into account. The new generation of navigation apps, however, has real-time information and continuously re-adjusts the “recommended” route based on traffic, accidents, and construction.
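Conceptually, that continuous re-adjustment is just a shortest-path search re-run whenever live data changes the travel times on each road segment. A minimal sketch (the road network and times below are made up for illustration):

```python
import heapq

def fastest_route(graph, start, goal):
    """Dijkstra's algorithm over travel times; graph[node] = {neighbor: minutes}."""
    queue, seen = [(0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, minutes in graph.get(node, {}).items():
            if nbr not in seen:
                heapq.heappush(queue, (cost + minutes, nbr, path + [nbr]))
    return None

roads = {"home": {"highway": 5, "side": 8},
         "highway": {"work": 10}, "side": {"work": 12}}
first = fastest_route(roads, "home", "work")   # (15, via the highway)
roads["highway"]["work"] = 30                  # live data: highway now jammed
second = fastest_route(roads, "home", "work")  # (20, re-routed via side streets)
```

The app’s advantage over the first generation is simply that the edge weights are live rather than static, so the answer can change mid-trip.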
Writing in today’s WSJ, Peggy Noonan says: “This year I am seeing something, especially among the young of politics and journalism. They have received most of what they know about political history through screens. They’re college graduates…they’re bright and ambitious, but they have seen the movie and not read the book….They learned through sensation, not through books, which demand something deeper from your brain. Reading forces you to imagine, question, ponder, reflect…Watching a movie about the Cuban Missile Crisis shows you a drama. Reading about it shows you a dilemma.”
The article reminded me of Neal Stephenson’s book and of this post, which I originally ran in late 2007.
My post today is inspired by In the Beginning was the Command Line, by Neal Stephenson, a strange little book that will probably be found in the “computers” section of your local bookstore. While the book does deal with human interfaces to computer systems, its deeper subject is the impact of media and metaphors on thought processes and on work.
Stephenson contrasts the explicit word-based interface with the graphical or sensorial interface. The first (which I’ll call the textual interface) can be found in a basic UNIX system or in an old-style PC DOS system or timesharing terminal. The second (the sensorial interface) can be found in Windows and Mac systems and in their respective application programs.
As a very different example of a sensorial interface, Stephenson uses something he saw at Disney World–a hypothetical stone-by-stone reconstruction of a ruin in the jungles of India, supposedly built by a local rajah in the sixteenth century but since fallen into disrepair.
The place looks more like what I have just described than any actual building you might find in India. All the stones in the broken walls are weathered as if monsoon rains had been trickling down them for centuries, the paint on the gorgeous murals is flaked and faded just so, and Bengal tigers loll among stumps of broken columns. Where modern repairs have been made to the ancient structure, they’ve been done, not as Disney’s engineers would do them, but as thrifty Indian janitors would–with hunks of bamboo and rust-spotted hunks of rebar.
In one place, you walk along a stone wall and view some panels of art that tell a story.
…a broad jagged crack runs across a panel or two, but the story is still readable: first, primordial chaos leads to a flourishing of many animal species. Next, we see the Tree of Life surrounded by diverse animals…an obvious allusion (or, in showbiz lingo, a tie-in) to the gigantic Tree of Life that dominates the center of Disney’s Animal Kingdom…But it’s rendered in historically correct style and could probably fool anyone who didn’t have a PhD in Indian art history.
The next panel shows a mustachioed H. sapiens chopping down the Tree of Life with a scimitar, and the animals fleeing every which way. The one after that shows the misguided human getting walloped by a tidal wave, part of a latter-day Deluge presumably brought on by his stupidity.
The final panel, then, portrays the Sapling of Life beginning to grow back, but now man has ditched the edged weapon and joined the other animals in standing around to adore and praise it.
Clearly, this exhibit communicates a specific worldview, and it strongly implies that this worldview is consistent with traditional Indian religion and culture. Most viewers will assume the connection without doing further research as to its correctness or lack thereof.
I’d observe that as a general matter, the sensorial interface is less open to challenge than the textual interface. It doesn’t argue–doesn’t present you with a chain of facts and logic that let you sit back and say, “Hey, wait a minute–I’m not so sure about that.” It just sucks you into its own point of view.
I started out as a Windows user and was actually a Windows programmer (using MS Access) for quite a long time. I resisted the siren call of Apple products and stuck with Windows for years and years, for work and for personal use.
Finally, I gave in and bought a MacBook Pro in 2011, which turned out to be a great purchase (and got rid of my Windows desktop PC). I always had an iPhone for my personal cell phone, and when I turned in my work Blackberry (a sad day at the time) for an iPhone, that meant I had two iPhones. For a while I also used a Mac at work, although I ended up switching back to a Windows laptop because password resets, system upgrades, and a lack of compatibility with applications built for Windows made it too much of a pain in the rear. Mac laptops still struggle in the corporate world.
Then over the years I of course bought an iPad and then upgraded that iPad, and an Apple Watch, which I really like (although the jury is mixed on that one). Here is an Apple Watch article and review that I wrote.
Thus I now have five (5) Apple products – a MacBook Pro, an iPad, an Apple Watch, and two iPhones. And now it is time for all the updates… iOS 10 is out now which means I need to update my iPad and both iPhones. Apple Watch OS 3 is also out and I am downloading that right now (downloading the operating system into the watch, from the iPhone, seems to take a long time). My MacBook Pro will get updated to the new Sierra OS when it comes out on Tuesday, September 20th.
There probably aren’t too many TV series centered around a CNC machine shop…but there’s at least one, and it’s called Titans of CNC. The producer and central figure, Titan Gilroy–yes, that’s his real name–grew up in rough circumstances, spent some time in prison, and eventually learned machine-tool operation and CNC programming. With these skills in hand, he built a pretty substantial business, Titan America, which is focused on precision machining, mainly producing components of products being made by larger companies.
The program is about the challenges involved in the operation of Titan America and a portrait of some of its employees and customers. It is also a passionate argument for the importance of manufacturing in America. Sponsors include Autodesk, IMCO Carbide Tools, Haas Automation and GoEngineer.
The series was made for a cable channel called MATV, which is owned by Lucas Oil Products and is targeted towards car people. It’s available on Amazon streaming, which is where I’ve been watching it.
Because computing power continues to increase exponentially, devices that once were out of reach for the general population are now becoming mainstream. I wrote about Netatmo, a device that measures temperature, humidity, and sound (indoor and outdoor), here. Via the internet, these devices can also be connected together to build a real-time picture of conditions across the country, without having to look at a weather forecast.
Recently I saw an article in an MIT journal about indoor air quality, which described how cooking eggs aggravated the authors’ asthma, and how they were able to take specific actions because they could pinpoint the source of the spike in unclean air. The company that created the monitor is called Speck, and the device sold for approximately $200, which seemed a decent price point for me to join the air-quality-monitoring revolution. I am specifically most interested in INDOOR air quality, but I will explain the broader context and then come back to the specific items I am reviewing (basically, you can get official measurements of air quality in the US from public sources).
If you want to slay the mistaken talk about the end of human employment, hold a contest. Come up with labor demand boosting ideas that we do not engage in today because we either don’t have enough people or don’t have enough money to do it. Weigh jobs that don’t require much intelligence or education as more valuable than those requiring high education/intelligence. Within a year I predict enough entries to be submitted to put the entire world to work multiple times over.
It is a bit embarrassing to think about things we are too poor to do. This makes these jobs invisible to us today. By creating a contest and an artificial market for these ideas, they become visible and we turn from despair at the jobless future to wondering how we can become efficient enough to afford to do all these wonderful things.
Let’s prototype the contest here, among friends (and a few special adversaries and maybe even some enemies), and maybe we can roll it out later on a larger scale. The winner will receive a microscopic amount of fame, and also a virtual certificate, not suitable for framing.
What are the things that we collectively and individually can’t afford–but might be able to afford given higher levels of productivity and national income–that would meaningfully affect well-being and human satisfaction? Define “things” as broadly as you like. Consider both things that could become more affordable due to productivity improvements in a specific industry, and things whose creation might not by itself be meaningfully improvable from a productivity standpoint but which people could better afford given an upward trend in overall productivity and income.
Every day, there are articles and blog posts about how quickly robots are replacing jobs, particularly in manufacturing. These often include assertions along the lines of “robots are replacing human labor so rapidly and so completely that it doesn’t really matter whether the factories are in the US or somewhere else.” There are also many assertions that robotics and artificial intelligence will triumph so completely that we must accept that we will permanently have a huge unemployed population who will need to be paid a “basic income” of some sort from the government.
This May, there were breathless headlines about how Foxconn, which is Apple’s primary contract manufacturer, was replacing 60,000 workers with robots–indeed, in some tellings, had already replaced them. If you google “foxconn 60000 workers”, you will get about 130,000 hits.
But the story is false; indeed, it did not even originate with Foxconn, but rather with some local Chinese government officials who wanted to promote their area as “innovative.”
There has also been a lot of coverage of robotics at Adidas, which is trying to use automation to improve the labor productivity of shoe-making to the point that it can be done economically in high-wage countries such as Germany. This article on Adidas also cites the Foxconn “60,000 jobs” assertion.
One key pair of numbers is missing from the stories I’ve seen on the Adidas project: the ratio of human workers to shoes produced, with and without the addition of the robotics. You can’t really judge the labor-reducing impact of the project without these numbers. In this Financial Times article, Adidas is quoted as saying, entirely reasonably, that they will need to get further into production with their new factory before developing meaningful productivity numbers. The article also cites Boston Consulting Group as estimating that by “2025 advanced robots will boost productivity by as much as 30 per cent in many industries.” Thirty percent is a very significant number, but it’s a long, long way from a productivity increase that would imply that factory jobs don’t matter, or that we’re going to inevitably have a very large permanently-unemployed population.
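A quick back-of-the-envelope calculation shows why: a 30 per cent productivity boost reduces the labor needed for a fixed output by only about 23 per cent. The figures below are purely illustrative, not Adidas data:

```python
# Illustrative arithmetic only -- made-up numbers, not from Adidas.
workers, shoes_per_day = 100, 1000
productivity = shoes_per_day / workers      # 10 shoes per worker-day today

boosted = productivity * 1.30               # apply BCG's "up to 30%" estimate
workers_needed = shoes_per_day / boosted    # same output with fewer workers
labor_saving = 1 - workers_needed / workers # fraction of jobs displaced
# workers_needed is about 76.9, i.e. roughly a 23% labor saving --
# significant, but nothing like the elimination of factory work.
```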
There are a lot of very significant innovations taking place in robotics and AI, but the hype level is getting a little out of hand. And it’s important to remember that automation is not a new phenomenon. For example, a CNC (computer numerically controlled) machine tool is a robot, though it may not look like the popular conception of one, and these machines, together with their predecessor NC (numerically controlled) machines, have been common in industry since the 1970s. One thing that articles and blog posts on the topic of robotics/AI/jobs could benefit from is a little historical perspective: do today’s innovations really represent a sharp break upwards in labor productivity, or are they more of a continuation of a long-term trend? And how, if at all, is the effect of these technologies appearing in the productivity statistics?
Automated systems need to be supervised by humans, and not just any humans, as Stanislav Petrov’s story makes clear. Individuals and bureaucracies that themselves behave in a totally robotic fashion cannot be adequate supervisors of the automation. See also my post Blood on the tracks for an additional example.
Posted by Trent Telenko on 10th June 2016
It is amazing the things you find out while writing a book review. In this case, a review of Phillips Payson O’Brien’s How the War Was Won: Air-Sea Power and Allied Victory in World War II. The book is thoroughly revisionist in that it posits that there were no real decisive land battles in WW2. The human and material attrition in those “decisive battles” was so small relative to major combatants’ production rates that losses from them were easily replaced until Anglo-American air-sea superiority — starting in the latter half of 1944 — strangled Germany and Japan. Coming from the conservative side of the historical ledger, I had a lot of objections to O’Brien’s book starting with some really horrid mistakes on WW2 airpower in the Pacific.
However, my independent research on General MacArthur’s Section 22 radar hunters in the Philippines validated one of the corollaries of O’Brien’s thesis — namely, that Imperial Japan was a formidable WW2 high-tech foe, punching in a weight class above the Soviet Union. The validation came via a digitized microfilm from the Air Force Historical Research Agency (AFHRA) at Maxwell AFB, Alabama, detailing the size, scope, and effectiveness of the radar network Imperial Japan deployed in the Philippines.
The composite from microfilm reel A7237 below combines three scanned microfilm images of an early December 1944 radar coverage map of the Philippines. It shows 32 separate Imperial Japanese military radar sites, each of which usually had a pair of Japanese radars (at least 64 radars total), based upon the Japanese deployment patterns found and documented in Section 22 “Current Statements” from January through March 1945 elsewhere in the same reel.
This is an early December 1944 Japanese radar coverage map made by Section 22, GHQ, South West Pacific Area. It was part of the Section 22 monthly report series.
This Section 22 map — compiled from dozens of 5th Air Force and US Navy aerial electronic reconnaissance missions — showed Japanese radar coverage at various altitudes. It was used by Admiral Halsey’s carrier fleet (see route E-F in the northeastern Luzon area of the map) to strike both Clark Field and Manila Harbor, and by all American and Australian military services to avoid Japanese radar coverage when striking the final Japanese reinforcement convoys of the Leyte campaign, “Operation TA.”
Over at The Lexicans, Bill Brandt posted an item about an 8-part TV series titled ‘American Genius’…it is about a selection of inventors and entrepreneurs who have had a major impact on technology, society, and history. It sounded worthwhile and I’ve watched about half of the episodes–thanks, Bill!…definitely worth watching, but OTOH I think there are a few things in the series that should have been covered a little differently.
Edison vs Tesla is about the AC-vs-DC power wars, and correctly reports on the sleazy fearmongering tactics that Edison used in his unavailing attempt to maintain DC’s dominance. The show referred to George Westinghouse, who was Tesla’s sponsor in this battle, as “sort of a railroad baron,” completely ignoring the fact that Westinghouse was himself a major American inventor. Most people would think of a ‘railroad baron’ as someone who owns or manages railroads, not someone who invented the air brake.
Farnsworth vs Sarnoff is about the battle to dominate the emerging television industry. It was presented as a David-versus-Goliath story–though Goliath was in this case named David (Sarnoff)–individual inventor versus ruthless tycoon. Sarnoff was indeed ruthless, indeed could be fairly referred to as a prototypical crony capitalist…but it would have been interesting to point out that he wasn’t always a Goliath, wasn’t born to that position, but had in fact come to this country as an impoverished Russian Jewish immigrant and had encountered severe and career-threatening anti-Semitism on his path to Goliath-dom.
Space Race is focused on two individuals, the German-American Wernher von Braun and the Soviet rocket designer Sergei Korolev. Korolev was played by an actor who looked a little too young for the period portrayed; more importantly, it should have been mentioned that Korolev had been arrested and sent to the Gulag, where he lost most of his teeth due to the brutal labor-camp conditions. There were psychological scars as well–Boris Chertok, who worked closely with Korolev for years, said that there was only one single time that he saw the man really happy. In a series focused primarily on the leading characters and their conflicts rather than on technical details, these things deserved to be covered.
The program refers to a successful Soviet test in 1957 of a missile with intercontinental range, shortly before the launch of Sputnik. Actually, the test was a failure because the warhead disintegrated on reentry…and reentry, while a critical factor for ICBMs, is not important at all for one-way satellite launches. The American belief that Sputnik meant all of our cities were vulnerable to Soviet missiles was a little premature–but not by much.
I thought Wernher von Braun got off too easily in this program. The show did mention that the V-2 missile was assembled by slave labor in an underground factory adjacent to a concentration camp: the truly horrific nature of V-2 manufacturing (this was possibly the only weapons system ever made that killed more people in its making than in its employment) could have gotten more emphasis, and the evidence is that von Braun was fully aware of what was going on in this place.
I’m also not convinced that von Braun was as absolutely critical to US missile and space programs as the show implies. The program to build the Atlas missile, which was developed in roughly the same time period as Korolev’s R-7, was directed by USAF General Bernard Schriever, with technology expertise provided largely by the newly-formed Ramo-Wooldridge Corporation and by Convair. I see no reason why this team could not also have conducted a Moon program, had they been so chartered.
The show does point out that von Braun, in addition to his technical and management contributions, played an important role in popularizing the ideas of rocketry and space travel…I had been unaware of his work with Disney to this end. So, in addition to being a genuine rocket scientist (and, arguably, a war criminal in at least a moral sense), von Braun was also one of the great PR men of the century.
Even with the omissions and missed opportunities, the series is still very much worth watching.
In broken-windows policing the cops go after guys who jump subway turnstiles and commit other minor crimes, because the policing of low-level crimes tends to lead to reductions in serious crimes. Not only are minor criminals responsible for a disproportionate share of felonies compared to the general population, but the fact that the police are seen not to ignore the small stuff creates a virtuous cycle, deterring other crimes and increasing the public’s confidence in civic authority.
I thought of this issue when I noticed that a sophisticated Java program that I use on my PC has serious bugs that are never corrected. For example, opening an Excel tie-in in the Java program kills all of the open Excel processes on my PC. I’ve complained several times but nothing gets fixed. Meanwhile there are simple apps on my phone that get updated frequently so that annoying little problems disappear over time. The fancy Java software has many more features but which software would I rather use?
Another Chicagoboy adds: The problem is that many companies view software updates as a cost rather than a feature. Software upgrades in response to customer complaints should be a trumpeted feature, because they are a way of convincingly communicating that the company shares its customers’ values about what matters, and therefore that it’s safe for the customers to invest their time in the company’s products as opposed to competing products.
It’s steps like this that move the space program forward. Notice this wasn’t done by NASA or ULA or the ESA. It was done by a private company that didn’t exist 15 years ago. 37 minutes, including the launch, recovery of the 1st stage, and deployment of the Dragon capsule.
BTW, very cool to me that SpaceX did not require the help of a traditional media company for any of this. And it’s actually much better than anything they typically produce. In addition, the people in this video are in the Hawthorne, California, SpaceX facility where these rockets are designed and produced. They designed and built this rocket. And they’re watching it perform almost in real time. How amazing is that?
They are mostly Sanders supporters. And they feel oppressed by the industry that they are in, and especially by the VCs who fund the companies where they work. Here’s the complaint of a 26-year-old software engineer:
“They sell you a dream at startups – the ping-pong, the perks – so they can pull 80 hours out of you. But in reality the venture capitalists control all the capital, all the labor, and all the decisions, so yeah, it feels great protesting one.”
“Tech workers are workers, no matter how much money they make,” said another guy, this one a PhD student at Berkeley.
Now, one’s first instinct when reading this story–at least my first instinct–is to feel contempt for these whiners. Most of them are far better off financially than the average American, even after adjusting for the extremely high costs of living in the Bay area. And no one forced any of them to work at startups, where the pressures are well-known to be extreme. They could have chosen IT jobs at banks or retailers or manufacturing companies or government agencies in any of a considerable number of cities.
Looked at from a broader perspective, though, the story reminded me of something Peter Drucker wrote almost 50 years ago:
I’ve previously written about the failure of the “Advanced Automation System,” an FAA/IBM effort to create a new-generation system for air traffic control: the story of a software failure. (The post excerpts the thoughts of Robert Britcher, who was deeply involved in the effort and is an excellent writer–very much worth reading.) The AAS project has been called “the greatest debacle in the history of organized work”–there are a lot of contenders for that honor, though, and here’s another one…
I have been considering “disruption”, including what is hype and what is real. Here are posts on the cab industry, where disruption occurred; the electric and gas utility industry, which has proven resilient in its current business model; and retail, which is in the process of being disrupted.
My working theory in these posts is that increasing supply (broadly defined) has been the key to whether “disruption” actually occurs. I don’t know if it will play out that way in the end, but it is a starting point.
I have been interested in the airline industry for decades: in high school, for my statistics class, I built a model that correlated the profits of United Airlines with the price of oil. As an auditor and consultant I spent hours every week on a plane crossing the country serving utilities, and ever since I have traveled at least ten times a year for business or pleasure. So while I would not consider myself an expert on the airline industry, I am certainly an interested observer.
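That high-school exercise can be sketched in a few lines of Python. The figures below are invented for illustration, not actual United Airlines data:

```python
# Toy correlation of airline profit against oil price. All numbers are
# invented for the example; the real exercise would use historical data.
oil_price = [60, 75, 90, 105, 120]      # $/barrel
profit    = [900, 700, 450, 300, 100]   # $ millions

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

r = pearson_r(oil_price, profit)
print(round(r, 3))  # strongly negative: higher oil prices, lower profits
```

With numbers like these the coefficient comes out strongly negative, which is the relationship such a model captures: fuel is a dominant cost, so profits move inversely with oil.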
The airline industry was famously de-regulated in 1978. From 1978 to 2010 the industry added myriad new entrants and saw them fail along with much of the old guard. Wikipedia summarized this era here. In recent years, through bankruptcy and mergers, the US airline industry consolidated into four major carriers: American, United, Delta and Southwest. These four carriers control the vast majority of gates at major cities and effectively operate as an oligopoly. They are now in rude health, as you can see in the stock chart below; their stock prices have increased by between 135% and 355% over the last five years. As an investor I bought Southwest after 9/11 and held on to it for years as the price languished; unfortunately I exited the stock before the carriers became today’s oligopoly.
Another contributor to these gains is the collapse in oil prices. During the “peak oil” era, the airlines’ profits were strangled by the high cost of fuel; today they benefit immensely from the commodity price crash. This article describes how lower fuel costs saved them $4.3B in the third quarter of 2015 alone. These savings have generally not been passed through to end users as price decreases; the airlines have banked the money or used it for dividends and capital improvements.
D-Wave Systems, located in British Columbia, is a builder of commercial quantum computers. Its machines store quantum bits (qubits) as magnetic current directions that can be clockwise, counterclockwise, or, via superposition, both directions simultaneously. The math and physics are far beyond me, but they claim to solve certain sets of optimization problems up to 100,000,000 times faster than classical computers. Customers for their computers, which cost $10 million apiece, include Lockheed Martin, an unnamed intelligence agency (NSA?), Google, JPL and NASA Ames Research.
Applications appear to be computationally intensive problems with lots of variables, and the solution involves a process called quantum annealing, in which an optimal approach is found by exploring millions of candidate solutions simultaneously to find the most efficient solution path.

I’m reminded of the famous double slit experiment, a classic physics experiment demonstrating that photons display behaviors of both waves and particles, known as wave-particle duality. Most interesting is that quantum probabilistic behaviors are also observed: the experiment functions differently when the particle paths are observed and when they are not. When the photons in the experiment are observed, the probability function collapses and the photons behave like particles. If they are not observed, the photons take many paths through the slits and create a dispersed pattern on the target. That behavior has been described as “spooky”, because the particles seem to know when they are being observed. Weird, I know.

It’s been said that anyone who claims to understand quantum mechanics is lying, but that doesn’t mean we can’t describe its behavior. Richard Feynman explained that at the quantum level, every possible path a photon can take is considered, and the path chosen follows a probability distribution: the most likely path is taken most often, some photons take slightly less probable paths, still others even less probable paths, and so on. Quantum annealing seems to be a form of that, where many paths are simultaneously considered until a most probable path emerges, and then it is chosen.
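The flavor of annealing can be conveyed with its classical cousin, simulated annealing: candidate solutions are explored at random, and better (lower-“energy”) ones become ever more likely to survive as a “temperature” is lowered. This is a sketch only; it is not quantum annealing, and the function being minimized is invented for illustration:

```python
import math
import random

def anneal(energy, start, neighbor, steps=10_000, t0=10.0):
    """Classical simulated annealing: minimize energy() from start."""
    random.seed(0)  # deterministic for the example
    x, e = start, energy(start)
    best, best_e = x, e
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9  # cool linearly toward zero
        cand = neighbor(x)
        cand_e = energy(cand)
        # Always accept improvements; accept worse moves with a
        # probability that shrinks as the temperature falls.
        if cand_e < e or random.random() < math.exp((e - cand_e) / t):
            x, e = cand, cand_e
            if e < best_e:
                best, best_e = x, e
    return best, best_e

# Minimize a bumpy 1-D function: a parabola with ripples on top, so
# there are local minima that a pure downhill search could get stuck in.
bumpy = lambda x: (x - 2) ** 2 + math.sin(5 * x)
sol, val = anneal(bumpy, start=-8.0,
                  neighbor=lambda x: x + random.uniform(-0.5, 0.5))
print(sol, val)
```

The early high-temperature phase plays the role the many simultaneous paths play in the quantum version: it lets the search escape local minima before settling into the deepest basin it has found.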
Posted by Mrs. Davis on 27th February 2016 (All posts by Mrs. Davis)
The government is asking Apple to give it the password to Syed Rizwan Farook’s iPhone and iCloud account. Apple is refusing to do so based on its First Amendment rights. This seems to me to be a very weak argument. Just ask Judith Miller. And there really is very little difference. Apple will have to spend $100,000 to comply and all Judith Miller needed to do was name a source. But Apple’s case involves a national security threat to each and every American whereas Judith Miller’s involved only an implausible threat to Valerie Plame who chose to garner all kinds of media attention thereafter. If there were a safe deposit box the government wanted opened, it would go to a court and get an order for the bank to drill the locks out so that the box could be removed. The bank would comply. Apple will lose.
And if Apple does not lose, the matter will go, as its pleading requests and as it may, even if it loses, now that Apple has made such a ruckus, from the fairly rational precincts of the judiciary to the fully irrational floor of the Congress. Let’s suppose that before legislation is completed there is another domestic terror incident in the US and the terrorist used an Apple iPhone. What kind of legislation would Apple get after that? While not yet widely known, Apple has likely put a back door into every Chinese iPhone via a Chinese designed chip added to the iPhone at China’s insistence for phones sold in the PRC. If this is confirmed, Congress would go even more non-linear.
And what other things might the government do if Apple were to prevail? Well, in the extreme it could ask GCHQ or some other foreign service to crack the iPhone in general. No device is uncrackable. It could also signal the Chinese that it would not be aggressive in pursuing IP violations by China in the case of Apple products. Apple is refusing to cooperate with its government in the first responsibility of that government, to protect its citizens. There would be consequences. Is it really good legal advice to let your client take such risks?
Apple should have quietly cut a deal with the government that would offer its customers the maximum security and quietly complied with court orders until a truly offensive order was received. Barring that, Apple would have a far better argument saying that ordering it to break its phones would lower their value to customers, lowering Apple’s revenues, and lowering Apple’s market cap. This would constitute an uncompensated taking by the Federal government of enormous monetary value from every Apple shareholder for which Apple should be compensated.
With existing technology, you have no privacy. Products are in development that will let retailers know how long you look at an item on a shelf, whether you pick it up, whether you return it to the shelf, and whether you buy it. And if you wear an iWatch or other wearable, they will know how much your pulse and blood pressure increased at each step of engagement. If you use Gmail, as almost everyone seems to, Google knows the content of every email you send and receive. Who is more likely to release or resell your email, Google or the FBI? The Silicon Valley forces lining up against the government are the most probable threat to what you think is your privacy. It’s been almost 20 years since Scott McNealy said, “You’ve got no privacy. Get over it.”
Apple will be made out to be protecting the ability of terrorists to communicate in secret. We are at war with these terrorists. They will kill any of us wherever they can. Article III, section 3 of the Constitution states, “Treason against the United States, shall consist only in levying war against them, or in adhering to their enemies, giving them aid and comfort.” That sounds a lot like what Apple is seeking to do under protection of the First Amendment’s emanations and penumbras.
Tim Cook is engaging in the same kind of magical thinking that has dominated the boomer elite and led to so many tragedies for the last 24 years. Losing wars has consequences.
Posted by Michael Hiteshew on 24th February 2016 (All posts by Michael Hiteshew)
What if someone were to combine the computer-controlled logistics system of an Amazon.com-type business with robotic manufacturing? At Amazon, parts are stocked and retrieved robotically; inventories are updated, parts ordered, payments made and received, all with a minimum of human intervention. Humans manage the system; the system does the grunt work. Everything that can be automated is.
Combine that with robotic assembly, robotic inspection, robotic test, robotic packaging and shipping, and it seems one could easily compete with China for manufacturing a product like an iPhone. If something seems obvious yet does not occur, then one has not accounted for some key thing.
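The stock-and-reorder loop described above can be sketched in miniature. Everything here (the class name, part name, quantities, and the doubling reorder rule) is invented for illustration; a real system is vastly more complex:

```python
# Minimal sketch of an automated stock-and-reorder loop: when picking
# drops a part to its reorder point, a purchase order is raised with
# no human intervention. All names and quantities are invented.
class Inventory:
    def __init__(self):
        self.stock = {}          # part -> quantity on hand
        self.reorder_point = {}  # part -> reorder threshold
        self.orders = []         # purchase orders raised automatically

    def receive(self, part, qty, reorder_at):
        """Stock arriving from a supplier."""
        self.stock[part] = self.stock.get(part, 0) + qty
        self.reorder_point[part] = reorder_at

    def pick(self, part, qty=1):
        """Robotic retrieval for an outgoing order."""
        if self.stock.get(part, 0) < qty:
            raise ValueError(f"insufficient stock of {part}")
        self.stock[part] -= qty
        # The automated step: reorder at the threshold, once per shortage.
        pending = {p for p, _ in self.orders}
        if self.stock[part] <= self.reorder_point[part] and part not in pending:
            self.orders.append((part, self.reorder_point[part] * 2))

inv = Inventory()
inv.receive("screen-assembly", qty=100, reorder_at=20)
for _ in range(85):
    inv.pick("screen-assembly")
print(inv.stock["screen-assembly"], inv.orders)
```

The point of the sketch is the feedback loop: picking stock is itself the event that triggers replenishment, so no human ever has to notice that a bin is running low.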
From my perspective, the key engine of economic growth is manufacturing; taking raw or less valuable material, applying know-how and capability, and creating something with greater net worth than the sum of its raw material worth. It is the foundation of wealth creation. And wealth creation is the foundation of a healthy economy, a high standard of living, social stability and opportunity.
Are we so tangled up in taxes and EPA and OSHA regulations that we simply cannot manufacture anything competitively in the United States any longer, even with robots? If so, what is the solution, realistically? Is it possible to reform the regulatory state, or does it need to be discarded so we can start fresh? Can the tax system be fixed, or should it be burned down and rebuilt? What is required to get manufacturing back on track in the United States?
I have been considering “disruption”, including what is hype and what is real. Here are posts on the cab industry, where disruption occurred, and on the electric and gas utility industry, which has proven resilient in its current business model.
While “retail” is a nebulous category, it is one that touches virtually everyone in the USA. Let’s start with the definition of retail:
the sale of goods to the public in relatively small quantities for use or consumption rather than for resale.
My experience with retail has been that of a consumer, although I live in an area near Michigan Avenue which features a huge variety of stores of all types, from mass market to high end “showcase” stores. I also have a long history with e-commerce, having been involved in a variety of businesses helping them to go “online” and “digital” from the earliest days of the web. Since the primary threat to modern retail today is from e-commerce, this experience is relevant.
The chart above is from a recent Business Insider article on retail. It clearly shows shopping moving from physical retailers to online retailers, accelerated by the adoption of mobile technologies (which let you shop and research while on the move, not just when you are in front of your computer at a desk).
But I don’t have any confidence that the Fox panel would have been smarter if its members understood the issue better. The real problem was that they didn’t come down in principle on the side of privacy. They could have at least expressed regret, or been reluctant about siding with the FBI.
But they were slavering urgently for whatever measure the FBI demanded to get into Syed Farook’s iPhone – as if all our lives depended on giving law enforcement any privacy-busting capability it sees a need for.
Technology doesn’t change the fact that this perspective is the opposite of the perspective of the Fourth Amendment. If our highest priority should be opening the people’s lives up to law enforcement, in case there are terror links lurking in our coupon drawers, then we should throw the Fourth Amendment out and require the people to all give the police keys to our homes, so it will be less of a hassle for them to get in whenever they declare a need to.
Conservatives are supposed to be smarter than this. Let’s walk through it briefly to clarify why there is no need to bust the built-in security feature of the iPhone for the FBI’s general convenience.
The electric and gas utility industry is the “exact opposite” of the classic “disruption” thesis… although disruption and revolution have been promised many times over the years, they have failed to materialize. Let’s look at the characteristics of this industry and find the salient facts that either “enable” or “defeat” disruption.
I worked in the electric and gas utility industry throughout the 1990s. I traveled to over 100 public, private and municipally owned utilities (there aren’t that many left today because of the many mergers in the industry). Since then I have followed them through business publications and public sources of information.
The electric utility industry has 4 main components:
1. Generation – the generation of power through nuclear fuel, coal, natural gas, hydro or solar / renewable
2. Transmission – moving power via high voltage lines from where it is generated (remote) to the cities where people live
3. Distribution – the local city with overhead and underground wires and substations and physical trucks
4. Customer Service – who you call and how they dispatch crews and respond to incidents
The electric utility industry is also characterized by “real time” surges and by the fact that power can’t (yet) be stored on a large scale; peaks occur on the hottest or coldest days, and power is needed at exactly that moment at your particular location. These peaks can result in demand far higher than on a “typical” day.
The natural gas utility industry is conceptually similar to the electric utility industry, with two main differences. Generation isn’t handled by the utility (exploration companies find natural gas and get it to the system through their own processes and methods), and natural gas is much less “peak sensitive”: it can be stored near the point of demand and injected into the system.
Broadly speaking, there have been many attempts to “de-regulate” the electric and gas utility markets over the last three decades. Let’s start with natural gas.