"Restore(s) a little sanity into current political debate" - Kenneth Minogue, TLS "Projects a more expansive and optimistic future for Americans than (the analysis of) Huntington" - James R. Kurth, National Interest "One of (the) most important books I have read in recent years" - Lexington Green
Chicago Boyz is a member of the Amazon Associates, B&H Photo, Newsmax and other affiliate programs. Your Amazon and B&H purchases made after clicking those businesses' links, and your clicks on Newsmax links, help to support this blog.
Some Chicago Boyz advertisers may themselves be members of the Amazon Associates and/or other affiliate programs and benefit from any relevant purchases you make after you click on an Amazon or other link on their ad on Chicago Boyz or on their own web sites.
Chicago Boyz occasionally accepts direct paid advertising for goods or services that in the opinion of Chicago Boyz management would benefit the readers of this blog. Please direct any inquiries to
Chicago Boyz is a registered trademark of Chicago Boyz Media, LLC. All original content on the Chicago Boyz web site is copyright 2001-2017 by Chicago Boyz Media, LLC or the Chicago Boyz contributor who posted it. All rights reserved.
With all the current discussion about robotics and artificial intelligence, this seems like an anniversary worth noting: the ENIAC (Electronic Numerical Integrator and Computer) was formally announced on February 15, 1946. (Or maybe it was February 14.) Originally developed to compute artillery trajectories, it was sufficiently general in its design that it could be programmed to address other kinds of problems as well. The programming was originally done with patch cords, but soon a sort of stored-program approach was developed wherein the patch-cord layout remained the same and the program was entered via an array of rotary switches.
After discussing his concerns about automation-driven job losses, he goes on to say “I feel even worse when I hear misleading statements about the source of the problem. Blaming China and NAFTA is a convenient deflection, but denial will only make the wrenching employment dislocation for millions all the more painful.”
I’ve seen this assertion–“offshoring doesn’t matter because Robots”–and it doesn’t make a whole lot of sense to me. It should be obvious that both factors play a role; there’s no need for a single-variable explanation. (Actually, offshoring-driven job losses and automation-driven job losses are somewhat less than additive in their effect, since automation generally makes US-based production more relatively attractive.)
What if we regarded code not as a high-stakes, sexy affair, but the equivalent of skilled work at a Chrysler plant? Among other things, it would change training for programming jobs—and who gets encouraged to pursue them. As my friend Anil Dash, a technology thinker and entrepreneur, notes, teachers and businesses would spend less time urging kids to do expensive four-year computer-science degrees and instead introduce more code at the vocational level in high school….Across the country, people are seizing this opportunity, particularly in states hit hardest by deindustrialization. In Kentucky, mining veteran Rusty Justice decided that code could replace coal. He cofounded Bit Source, a code shop that builds its workforce by retraining coal miners as programmers. Enthusiasm is sky high: Justice got 950 applications for his first 11 positions. Miners, it turns out, are accustomed to deep focus, team play, and working with complex engineering tech. “Coal miners are really technology workers who get dirty,” Justice says.
I’m reminded of two things that Peter Drucker said in his 1969 book The Age of Discontinuity. In attacking what he called ‘the diploma curtain’, he wrote “By denying opportunity to those without higher education, we are denying access to contribution and performance to a large number of people of superior ability, intelligence, and capacity to achieve.”
But also, Drucker wrote, in his discussion of the Knowledge Economy:
The knowledge worker of today…is not the successor to the ‘free professional’ of 1750 or 1900. He is the successor to the employee of yesterday, the manual worker, skilled or unskilled…This hidden conflict between the knowledge worker’s view of himself as a ‘professional’ and the social reality in which he is the upgraded and well-paid successor to the skilled worker of yesterday, underlies the disenchantment of so many highly educated young people with the jobs available to them…They expect to be ‘intellectuals.’ And they find that they are just ‘staff.’
Indeed, many jobs that have been thought of as ‘professional’ and ‘white collar’…programming, financial analysis, even engineering…are increasingly subject to many of the stresses traditionally associated with ‘blue collar’ jobs, such as layoffs and cyclical unemployment. As a higher percentage of the corporate cost structure becomes concentrated in jobs which are not direct labor, it is almost inevitable that these jobs will be hit increasingly hard when financial problems make themselves felt.
Drucker’s second point, which I think is very astute, is somewhat orthogonal to the coal-miners-becoming-coders piece, and probably deserves its own post for discussion. Regarding the question of non-college-educated people becoming programmers (of which there has long been a fair amount), the degree to which this succeeds is to some degree coupled with the whole question of H-1B visa policy, and trade policy in general as it relates to offshoring of services.
(Hearing “In a Town This Size,” by John Prine and Dolores Keane, reminded me of this 2013 post–rerun here, with some edits and a special musical bonus added at the end.)
I’ve reviewed two books by German writer Hans Fallada: Little Man, What Now?, and Wolf Among Wolves (the links go to the reviews), both of which were excellent. I’ve also read his novel Every Man Dies Alone, which is centered on a couple who become anti-Nazi activists after their son Ottochen is killed in the war…it was inspired by, and is loosely based on, the true story of a real-life couple who distributed anti-Nazi postcards and were executed for it.
I thought this book was also excellent…the present post, though, is not a book review, but rather a development of some thoughts inspired by a particular passage in the story.
Trudel, who was Ottochen’s fiancee, is a sweet and intelligent girl who is strongly anti-Nazi…and unlike Ottochen’s parents, she became an activist prior to being struck by personal tragedy: she is a member of a resistance cell at the factory where she works. But she finds that she cannot stand the unending psychological strain of underground work–made even worse by the rigid and doctrinaire man (apparently a Communist) who is the leader of the cell–and she drops out. Another member of the cell, who has long been in love with her, also finds that he is not built for such work, and drops out as well.
After they marry and Trudel becomes pregnant, they decide to leave the politically hysterical environment of Berlin for a small town where–they believe–life will be freer and calmer.
Like many city dwellers, they’d had the mistaken belief that spying was only really bad in Berlin and that decency still prevailed in small towns. And like many city dwellers, they had made the painful discovery that recrimination, eavesdropping, and informing were ten times worse in small towns than in the big city. In a small town, everyone was fully exposed, you couldn’t ever disappear in the crowd. Personal circumstances were quickly ascertained, conversations with neighbors were practically unavoidable, and the way such conversations could be twisted was something they had already experienced in their own lives, to their chagrin.
Reading the above passage, I was struck by the thought that if we are now living in an “electronic village”…even a “global village,” as Marshall McLuhan put it several decades ago…then perhaps that also means we are facing some of the unpleasant characteristics that–as Fallada notes–can be a part of village life. And these characteristics aren’t something that appears only in eras of insane totalitarianism such as existed in Germany during the Nazi era. Peter Drucker, in Managing in the Next Society, wrote about the tension between liberty and community:
Rural society has been romanticized for millennia, especially in the West, where rural communities have usually been portrayed as idyllic. However, the community in rural society is actually both compulsory and coercive…And that explains why, for millennia, the dream of rural people was to escape into the city. Stadtluft macht frei (city air frees) says an old German proverb dating back to the eleventh or twelfth century.
I recently traded in my old Acura MDX for a new one. What a long, long way we have come in the 7 years since I purchased a new vehicle. I now have an air conditioned seat, something I am looking forward to using this Spring and Summer. I also have a heated steering wheel now, which is great during Winter. Quite the creature comfort.
It also has a feature called Auto-Idle Stop, which you can enable and disable, that shuts the car off at a stop to save gas. The Acura dealer says that it will save a mile per gallon. At first I didn’t like it, but now I am used to it. I remembered it from when I was in a Prius cab once. When you take your foot off the brake, the car fires up and off you go. While you are stopped, all of the climate control and audio/whatever else you have on is still functional. The engine automatically turns back on after around a minute sitting there if you haven’t moved. I have no clue how this actually saves you gas, but if they say it does, I guess they can’t really lie about it.
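Actually, the arithmetic behind the claim is fairly simple: fuel burned while idling at stops counts against your miles-per-gallon without adding any miles. Here is a rough back-of-envelope sketch (all of the numbers below are illustrative assumptions on my part, not Acura figures):

```python
def effective_mpg(miles, moving_gallons, idle_hours, idle_gal_per_hr):
    """MPG over a trip that includes time spent idling at stops."""
    total_gallons = moving_gallons + idle_hours * idle_gal_per_hr
    return miles / total_gallons

# Hypothetical daily commute: 30 miles at 20 mpg while actually moving,
# with 12 minutes total spent stopped at lights.
miles = 30.0
moving_gallons = miles / 20.0    # 1.5 gallons burned while moving
idle_hours = 12 / 60             # 0.2 hours stopped
idle_burn = 0.4                  # assumed gallons/hour burned at idle

with_idling = effective_mpg(miles, moving_gallons, idle_hours, idle_burn)
engine_off = effective_mpg(miles, moving_gallons, 0.0, idle_burn)
print(round(with_idling, 1), round(engine_off, 1))   # 19.0 20.0
```

With those assumed numbers, shutting the engine off at stops works out to roughly the one-mile-per-gallon improvement the dealer quoted.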
Outside of all of the comfort things, the new vehicle is a technological powerhouse. I have had it for almost a month now and am still figuring out all of the features and tech stuff. It has 16 gig of memory to store music onboard. I don’t use that much since I love my XM, but there it is if you want it.
Of the greatest interest to me are the next steps auto manufacturers have made to get everyone used to the idea of the inevitable autonomous vehicle. Three things work in concert on my vehicle. They are Adaptive Cruise Control (ACC), Lane Keeping Assist System (LKAS) and Lane Departure Warning (LDW). At first I turned all of this stuff off, but decided to one day read the manual (I know) to understand how it all works. It is interesting to say the least.
ACC is basically “smart” cruise control. You set your cruise and it will keep the speed, but will also compensate for cars in front of you. You can set the distance that you prefer between your car and the car in front of you (there are four distances to choose from). In the city, I choose the closest distance so as not to clog traffic. The car will actually go all the way down to zero, braking at a light, and will start moving again when the car in front moves forward. There is a bit of a delay when you re-start, so you may look like you have no idea what you are doing, but to heck with everyone else, you don’t have to accelerate or brake and they do. Oh yes, the Auto-Idle Stop feature works with this as well, but you have to hit the accelerator to resume again if you are Auto-Idle Stopped with the ACC in charge.
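The core logic of adaptive cruise is easy to sketch in a few lines. To be clear, this is a toy illustration of the concept, invented by me, not Acura's actual control algorithm:

```python
def acc_target_speed(set_speed, lead_speed, gap, desired_gap):
    """Speed (mph) the cruise control should target this control cycle,
    given the current gap (feet) to the car ahead and the driver's
    chosen following distance."""
    if gap >= desired_gap:
        return set_speed        # road clear: hold the set cruise speed
    # Too close: never exceed the lead car's speed, and slow further in
    # proportion to how much of the desired gap has been lost, reaching
    # a full stop as the gap closes entirely (e.g. braking at a light).
    return min(set_speed, lead_speed) * max(gap / desired_gap, 0.0)

print(acc_target_speed(65, 55, 200, 150))   # clear road: 65
print(acc_target_speed(65, 55, 75, 150))    # half the gap left: 27.5
print(acc_target_speed(65, 0, 50, 150))     # lead car stopped: 0.0
```

A real system layers radar filtering, smooth acceleration profiles, and brake control on top of this, but the set-speed-unless-too-close idea is the heart of it.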
LDW is, from what I have figured out, just a warning system. It wiggles the steering wheel and shows a display when it feels you are out of the lane.
LKAS is where the rubber really hits the road. When you enable this along with the ACC, the car literally drives itself. LKAS keeps you centered in the lane at whatever speed you are going. I have taken my hands off the wheel, but there are apparently sensors in the wheel, because after a few seconds the car says “you have to drive” and shuts down the auto systems. So just a light pressure on the wheel is all you need, and you can let the car do the work. Sometimes the correction lags a bit, and to the car behind you it might look like you are driving drunk, since you weave back and forth a bit in the lane. This typically happens when you are on a curved road. It isn’t perfect, but when the road is straight, it works very well.
The cameras for all of this are only as good as the ROAD MARKINGS. We had a snow storm recently and my car was caked with snow and ice, and the car just said on the display “cameras blocked”–and you are on your own. In addition, I live in rural Wisconsin, just outside of Madison. In the city, there are much better lane markings. In the country, the roads have NONE. No smart driving for you in the country, although the ACC always works wherever you are, as long as the camera isn’t blocked by snow. Even in the city, the lane markings deviate and/or are in bad shape in areas, and the car will beep and tell you, in effect, “tough stuff, you have to drive–we can’t see the lane.” This means that you have to pay attention, because at times you can see the lane markings but the cameras can’t. There is a part of the display that lets you know if the camera can see the lane markings. I haven’t been on the interstate with it yet, but will soon, and look forward to seeing what the car can do in that venue. I assume it will work great.
All in all, when I figure out everything, this new vehicle will make my hour plus a day in the car a much more pleasant experience. Without proper lane markings, however, or unless and until we have lightning speeds with GPS, I don’t see fully autonomous vehicles coming for a bit. Which gets me to thinking I should probably look into investing in companies that manufacture lane marking equipment and paint, but that is certainly grist for another post.
Chinese Premier Li Keqiang has lamented China’s inability to “make ballpoint pens with a smooth writing function.” After five years of research, a state-owned steel company now says it can.
WSJ notes that 80% of the world’s ballpoint pens are made in China…but that thus far, China has not been making all of the pen’s components. Specifically:
The tip of a high-quality ballpoint demands metal work involving high-precision machinery and very hard, ultrathin steel plates. So 90% of pens made in China have imported tips. China’s leaders want “self-sufficiency,” in pens as in semiconductors. Now they claim they’ll have it.
This little story is interesting from at least three angles.
First–as the WSJ story points out, China’s desire to control the entire ballpoint pen supply chain indicates that its leaders still value economic autarky, and that Chinese leadership denunciations of President Trump on grounds of his insufficient respect for free trade carry more than a whiff of hypocrisy.
Second–the ballpoint pen example makes the point that the apparent simplicity of a product does not necessarily reflect the complexity or lack thereof involved in manufacturing it. American economic commentators often fail to grasp this point when they assert that America’s future must lie in producing “advanced high-technology products.”
Third–the example should also clarify the point that the highest value in a product supply chain does not necessarily lie in the assembly of the final product. The final product assembly is usually the most visible part of the supply chain, but very often the creation of components that go into that chain involves more complexity and requires more skill than the final assembly process itself. It’s considerably more difficult to make integrated circuits, for example, than to assemble those chips onto circuit boards and to assemble the boards into a plastic or metal case.
Edward Porter Alexander, who was Lee’s artillery commander at Gettysburg, became a railroad president after the war. His experiences in running a major transportation system probably had something to do with the evolution of his thoughts regarding states’ rights:
Well that (states’ rights) was the issue of the war; & as we were defeated that right was surrendered & a limit put on state sovereignty. And the South is now entirely satisfied with that result. And the reason of it is very simple. State sovereignty was doubtless a wise political institution for the condition of this vast country in the last century. But the railroad, and the steamboat & the telegraph began to transform things early in this century & have gradually made what may almost be called a new planet of it… Our political institutions have had to change… Briefly we had the right to fight, but our fight was against what might be called a Darwinian development – or an adaptation to changed & changing conditions – so we need not greatly regret defeat.
I think a lot of the belief in unlimited globalization is implicitly driven by an extension of Alexander’s argument, with the jet plane, the container ship, and the Internet taking the place of the railroad, steamboat, and telegraph.
How far does this extension make sense? If the ability of locomotives to pull trains across the United States in three days meant that full sovereignty for individual states was obsolete, does the ability of jet airplanes to carry passengers and freight anywhere in the world in less than one day similarly imply that full sovereignty for nations is obsolete?
I suspect that most people at this site will not agree with a transportation-based argument for the elimination of national sovereignty. So, what is valid and what is invalid about Alexander’s analysis, and what are the limits for the extension of its geographical scope? Discuss.
In a recent post I discussed the spate of updates that have occurred in my Apple products, including a new iOS for my work and home phones, a new iOS for my iPad, a new watchOS for my Apple Watch, and a new operating system for my Mac.
Let’s start with the Apple Watch. The Apple Watch is an evolutionary product and the jury is out on whether or not it will be a giant part (“move the needle”) of the Apple portfolio. Personally, I find the Apple Watch to be very useful because I can get notifications when big events occur (for instance, I was the first to say “Prince is dead” in a big meeting) or just to be reminded when texts happen and I don’t have my phone on. It also is good for sports score notifications and tracking workouts. Finally, you can also always know if someone is calling you even if the ringer on your phone is off, and you can answer it “Dick Tracy Style” on your wrist (if you want to annoy everyone around you). Here is my review of the Apple Watch from 2015 when I bought it.
watchOS 3.0 is OK. The watch seems a bit faster. They made it easier to utilize some popular apps like the workout app and incorporated some other improvements here and there. I can’t take advantage of all the watchOS 3.0 features because my older Apple Watch doesn’t have some of the hardware, like the built-in GPS that comes with the new watch.
macOS Sierra
There has been a lot of noise in the press about Apple not updating their core computers, and even letting Microsoft steal their thunder with the new Surface tablet. However, Apple deserves immense credit for making their OS upgrades work effectively even on older model machines–for instance, the MacBook that I am writing this blog post on is from 2011 (my friend Brian installed an SSD and more memory, which I documented here).
The most important elements from my perspective are the continued integration of the Mac OS with the iPad and iPhone devices. With this upgrade I now can easily share a single photo stream (which will get its own post, since it is so complicated), use Apple Music easily across devices, and use key apps like Messages, Notes, iBooks, Contacts, and FaceTime (mostly) seamlessly. Siri also works on the Mac now, which is fine for most people, but I don’t use Siri much, so it is irrelevant to me.
Gone too is the iconic firm’s appliances business, which was sold to Chinese firm Haier. This is really a progression of the economic cycle. While folks like President-elect Donald Trump and financial provocateur Peter Schiff lament that Americans just don’t make stuff anymore, at a certain point, advanced economies should outsource physical work to less-advanced countries. It’s not so much a matter of ability as it is financial efficiency.
Does this writer believe that GE should also divest the jet engine business, the power generation business, and the transportation (locomotive) business? All of these businesses make physical things, and make substantial amounts of those physical things in the US.
The idea that manufacturing is devoid of intellectual content and hence unworthy of advanced economies is fallacious and has done serious harm–see my post Faux Manufacturing Nostalgia. Happily, this attitude has turned around substantially since I wrote the linked post…to the point that manufacturing is being practically over-romanticized…but islands of the “who needs it?” view still exist.
GE’s reasoning for divesting Appliances seems to have been centered on a desire to focus the company on business-to-business markets rather than consumer markets, and also, I think, on a perception that there was not sufficient room in the appliance world for product differentiation and a technology edge. “Technology edge,” rightly understood, includes the complexity/difficulty of manufacturing something, not just the intellectual property embedded in the product itself. It certainly did not reflect any conclusion that manufacturing is inherently a low-value function.
It would be silly to argue that a computer programmer in a bank is a “knowledge worker” and a programmer in manufacturing is not. It would be equally silly to argue that a bank branch manager is inherently performing a more highly-skilled job than a shift supervisor in a factory, or that a first-level customer service rep for Amazon is performing a more advanced kind of work than an assembly line worker, or that an operations research expert doing inventory studies for a manufacturing firm is less of a knowledge worker than his equivalent doing inventory studies for Target. But this is implicitly the argument that many of the ‘we don’t need manufacturing here’ crew have been making.
This dismissive attitude toward a vast and complex industry which supports millions of people represents one more example of the constellation of attitudes against which many people rebelled when choosing to vote for Donald Trump.
In my previous post of this series, I remarked that most discussion of the employment effects of robotics/artificial intelligence/etc seems to be lacking in historical perspective…quite a few people seem to believe that the replacement of human labor by machinery is a new thing.
This post will attempt to provide some historical perspective on today’s automation technologies by sketching out some of the past innovations in the mechanization of work, focusing on “robots,” broadly defined…i.e., on technologies which to some degree involve the replacement or augmentation of the human mind/eye/hand, rather than those that are primarily concerned with the replacement of human and animal muscular energy…and will discuss some of the political debate that took place on mechanization & jobs in the 1920s through 1940s.
Throughout most of history, the production of yarn for cloth was an extremely labor-intensive process, done with a device called a distaff, almost always employed by women, and requiring many hours per day to generate a little bit of product. (There even exists a medieval miniature of a woman spinning with the distaff while having sex…whether this is a comment on the burdensomeness of the yarn-making process, or a slam at the love-making skills of medieval men, I’m not sure–probably both.) Eventually, probably around 1400-1500 in most places in Europe, the spinning wheel came into use, improving the productivity of yarn-making by a factor estimated at anywhere from 3:1 to 10:1 or more.
Gutenberg’s printing press was invented somewhere around 1440. I haven’t seen any estimates of its effect on labor productivity, compared with the then-prevailing method of hand copying of manuscripts, but surely it was at least 1000 to 1 or more.
The era from 1700-1850 was marked by tremendous increases in the productivity of the textile trades. The flying shuttle and other advances greatly improved the weaving process; this created a bottleneck in the supply of yarn, which was partly addressed by the invention of the Spinning Jenny–a foot-powered device that could improve the yarn production of one person by 5:1 or better. Power spinning and power looms yielded considerable additional productivity improvements.
An especially interesting device was the Jacquard Loom (1802), which used punched cards to direct the weaving of patterned fabrics. In its initial incarnation, the Jacquard was a hand loom: its productivity did not come from the application of mechanical power but rather from the automation of the complex thread-selection operations previously carried out by a “Draw Boy.”
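The Jacquard principle is simple enough to mimic in a few lines of software: each card row encodes which warp threads to lift for one pass of the shuttle. A toy sketch (the card data here is invented for illustration):

```python
def weave(cards):
    """Render one line of fabric per card row: '#' where a punched
    hole lifts the corresponding warp thread, '.' where it stays down."""
    return ["".join("#" if hole else "." for hole in row) for row in cards]

cards = [           # one row of punched holes per shuttle pass
    [1, 0, 0, 1],
    [0, 1, 1, 0],
    [1, 0, 0, 1],
]
for line in weave(cards):
    print(line)
```

Chaining cards together gives an arbitrarily long, repeatable pattern, which is exactly what made the Jacquard a conceptual ancestor of the punched-card data processing that came later.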
Turning now to woodworking: in 1818, Blanchard’s Copying Lathe automated the production of complex shapes–a prototype was automatically traced and copied. It was originally intended for making gunstocks, but also served in producing lasts for shoemakers, and I believe also chair and table legs.
Another major advancement in the clothing field was the sewing machine. French inventor Barthélemy Thimonnier invented a machine in 1830, but was driven out of the country by enraged tailors and political instability. The first commercially successful machines were invented and marketed by Americans Walter Hunt, Elias Howe, and Isaac Singer, and were in common use by the 1850s.
By the late Victorian period the sewing machine had been hailed as the most useful invention of the century, releasing women from the drudgery of endless hours of sewing by hand. Factories sprang up in almost every country in the world to feed the insatiable demand for the sewing machine. Germany had over 300 factories, some working 24 hours a day, producing countless numbers of sewing machines.
The beginnings of data communications could be seen in the gold ticker and stock ticker systems created by Edison and others (circa 1870), which relayed prices almost instantaneously and eliminated the jobs of the messenger boys who had previously been the distribution channel for this information. Practical calculating machines also appeared in the 1870s. But the big step forward in mechanized calculation was Hollerith’s punched-card system (quite likely inspired in part by the Jacquard), introduced in 1890 and used for the tabulation of that year’s census. These systems were quickly adopted for accounting and record-keeping purposes in a whole range of industries and government functions.
Professor Amy Sue Bix, in her book Inventing Ourselves out of Jobs?, describes the fear of technological unemployment as silent movies were replaced by the ‘talkies’. “Through the early 1920s…local theaters had employed live musicians to provide accompaniment for silent pictures. Small houses featured only a pianist or violinist, but glamorous ‘movie places’ engaged full orchestras.” All these jobs were threatened when Warner Brothers introduced its Vitaphone technology, with prerecorded disks synchronized to projectors. “Unlike other big studios, Warner did not operate its own theater chains and so had to convince local owners to screen their productions. Theater managers would be eager to show sound movies, Harry Warner hoped, since they could save the expense of hiring musicians.”
The American Federation of Musicians mounted a major PR campaign in an attempt to convince the public that ‘living music’ was better than ‘canned sound.’ A Music Defense League was established, with membership reaching 3 million…but the ‘talkies’ remained popular, and the AFM had to admit defeat. A lot of musicians did lose their jobs.
Here’s a new factory for making automobile frames, specifically designed to minimize the need for human labor. The CEO of the company that built it actually said, “We set out to build automobile frames without people.”
At the start of the process, rough steel plates are inspected by electronic sensors, automatically pushing aside any that deviate from tolerances. Conveyors take the plates through punching, pressing, assembling, and nailing machines, as well as a machine that can insert 60 rivets simultaneously in each frame. A set of finishing machines then rinse, dry, spray-paint, and cool the frames. Aside from a few men moving frames between conveyor belts, the floor routine of the plant requires almost no hand labor.
And today’s robotics and artificial-intelligence advances go far beyond automating routine manufacturing labor and take over the kind of cognitive functions once thought to be exclusive to human beings. Here, for example, is a new AI-based system that displaces much of the thought-work which has been required of the people operating railway switch and signal installations:
The NX control machine is in effect the “brain” of the system. It automatically selects the best optional route if the preferred route is occupied. It will allow no conflicting routes to be set up. It eliminates individual lever control of each switch and signal.
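Stripped to its essentials, that route-selection logic can be sketched in software. The track layout and route tables below are invented for illustration (the real NX machine was, of course, built from relay hardware, not code):

```python
def select_route(routes, occupied, locked_routes):
    """Return the first route (in preference order) whose track
    sections are all unoccupied and disjoint from every route
    already locked in; return None if no safe route exists."""
    locked = {section for route in locked_routes for section in route}
    for route in routes:
        sections = set(route)
        if sections & occupied:
            continue    # a section of this route is occupied by a train
        if sections & locked:
            continue    # would conflict with a route already set up
        return route
    return None         # no conflict-free route available

routes = [["A", "B", "C"],    # preferred route
          ["A", "D", "C"]]    # best alternate
# Preferred route blocked at section B, so the alternate is chosen:
print(select_route(routes, occupied={"B"}, locked_routes=[]))
```

The point of the original machine was exactly this: the operator specified only the entrance and exit, and the interlocking logic did the thinking that individual lever control used to demand.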
Pretty scary from the standpoint of maintaining anything like full employment, don’t you think?
As smartphones become more powerful and more connected, there are subtle but very powerful phenomena that can go by unnoticed. For years I either walked to work or took public transit, but now in the Pacific Northwest I commute by car. Since the surroundings are new, I pay much more attention to what is going on than I used to in Chicago.
In Chicago, there aren’t a lot of opportunities to optimize your travel if you are driving along major roads such as I-290 or the Dan Ryan. Unless you really, really know what you are doing, it is not recommended to get off the highway in many Chicago neighborhoods just to follow your mobile navigation blindly. Thus in Chicago when I was in bad traffic it pretty much looked like this–a speed of zero, stuck crawling ahead.
The first generation of car navigation tools told you how to get somewhere with the most efficient route, taking standard traffic into account. The new generation of navigation apps, however, has real-time information and continuously re-adjusts the “recommended” route based on traffic, accidents, and construction.
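Under the hood, this kind of continuous re-routing is just shortest-path search over a road graph whose edge costs change as live traffic reports come in. A minimal sketch, using Dijkstra's algorithm over an invented road network (real apps add much more, but the re-run-the-search-on-new-data idea is the same):

```python
import heapq

def fastest_route(graph, start, end):
    """Dijkstra over {node: {neighbor: minutes}}; returns (minutes, path)."""
    pq = [(0, start, [start])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == end:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, minutes in graph[node].items():
            if nbr not in seen:
                heapq.heappush(pq, (cost + minutes, nbr, path + [nbr]))
    return float("inf"), []

roads = {
    "home":    {"highway": 5, "side_st": 9},
    "highway": {"office": 10},
    "side_st": {"office": 12},
    "office":  {},
}
print(fastest_route(roads, "home", "office"))   # (15, ['home', 'highway', 'office'])
roads["highway"]["office"] = 30                 # live update: accident on the highway
print(fastest_route(roads, "home", "office"))   # (21, ['home', 'side_st', 'office'])
```

The second search reflects the updated edge cost and sends you down the side street, which is exactly the behavior you see when the app suddenly reroutes you mid-commute.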
Writing in today’s WSJ, Peggy Noonan says: “This year I am seeing something, especially among the young of politics and journalism. They have received most of what they know about political history through screens. They’re college graduates…they’re bright and ambitious, but they have seen the movie and not read the book….They learned through sensation, not through books, which demand something deeper from your brain. Reading forces you to imagine, question, ponder, reflect…Watching a movie about the Cuban Missile Crisis shows you a drama. Reading about it shows you a dilemma.”
The article reminded me of Neal Stephenson’s book and of this post, which I originally ran in late 2007.
My post today is inspired by In the Beginning was the Command Line, by Neal Stephenson, a strange little book that will probably be found in the “computers” section of your local bookstore. While the book does deal with human interfaces to computer systems, its deeper subject is the impact of media and metaphors on thought processes and on work.
Stephenson contrasts the explicit word-based interface with the graphical or sensorial interface. The first (which I’ll call the textual interface) can be found in a basic UNIX system or in an old-style PC DOS system or timesharing terminal. The second (the sensorial interface) can be found in Windows and Mac systems and in their respective application programs.
As a very different example of a sensorial interface, Stephenson uses something he saw at Disney World–a hypothetical stone-by-stone reconstruction of a ruin in the jungles of India. It is supposed to have been built by a local rajah in the sixteenth century, but to have since fallen into disrepair.
The place looks more like what I have just described than any actual building you might find in India. All the stones in the broken walls are weathered as if monsoon rains had been trickling down them for centuries, the paint on the gorgeous murals is flaked and faded just so, and Bengal tigers loll among stumps of broken columns. Where modern repairs have been made to the ancient structure, they’ve been done, not as Disney’s engineers would do them, but as thrifty Indian janitors would–with hunks of bamboo and rust-spotted hunks of rebar.
In one place, you walk along a stone wall and view some panels of art that tell a story.
…a broad jagged crack runs across a panel or two, but the story is still readable: first, primordial chaos leads to a flourishing of many animal species. Next, we see the Tree of Life surrounded by diverse animals…an obvious allusion (or, in showbiz lingo, a tie-in) to the gigantic Tree of Life that dominates the center of Disney’s Animal Kingdom…But it’s rendered in historically correct style and could probably fool anyone who didn’t have a PhD in Indian art history.
The next panel shows a mustachioed H. sapiens chopping down the Tree of Life with a scimitar, and the animals fleeing every which way. The one after that shows the misguided human getting walloped by a tidal wave, part of a latter-day Deluge presumably brought on by his stupidity.
The final panel, then, portrays the Sapling of Life beginning to grow back, but now man has ditched the edged weapon and joined the other animals in standing around to adore and praise it.
Clearly, this exhibit communicates a specific worldview, and it strongly implies that this worldview is consistent with traditional Indian religion and culture. Most viewers will assume the connection without doing further research as to its correctness or lack thereof.
I’d observe that as a general matter, the sensorial interface is less open to challenge than the textual interface. It doesn’t argue–doesn’t present you with a chain of facts and logic that let you sit back and say, “Hey, wait a minute–I’m not so sure about that.” It just sucks you into its own point of view.
I started out as a Windows user and was actually a Windows programmer (using MS Access) for quite a long time. I resisted the siren call of Apple products and stuck with Windows for years and years, for work and for personal use.
Finally, I gave in and bought a MacBook Pro in 2011 which turned out to be a great purchase (and got rid of my Windows Desktop PC). I always had an iPhone for my personal cell phone and when I turned in my work Blackberry (a sad day at the time) for an iPhone, that meant that I had two iPhones. For a while I also used a Mac at work, although I ended up switching back to a Windows laptop because password resets, system upgrades and a lack of compatibility for applications built for Windows made it too much of a pain in the rear. Mac laptops still struggle in the corporate world.
Then over the years I of course bought an iPad and then upgraded that iPad, and an Apple Watch, which I really like (although the jury is still out on that one). Here is an Apple Watch article and review that I wrote.
Thus I now have five (5) Apple products – a MacBook Pro, an iPad, an Apple Watch, and two iPhones. And now it is time for all the updates… iOS 10 is out now which means I need to update my iPad and both iPhones. Apple Watch OS 3 is also out and I am downloading that right now (downloading the operating system into the watch, from the iPhone, seems to take a long time). My MacBook Pro will get updated to the new Sierra OS when it comes out on Tuesday, September 20th.
There probably aren’t too many TV series centered around a CNC machine shop…but there’s at least one, and it’s called Titans of CNC. The producer and central figure, Titan Gilroy–yes, that’s his real name–grew up in rough circumstances, spent some time in prison, and eventually learned machine-tool operation and CNC programming. With these skills in hand, he built a pretty substantial business, Titan America, which is focused on precision machining, mainly producing components of products being made by larger companies.
The program is about the challenges involved in the operation of Titan America and a portrait of some of its employees and customers. It is also a passionate argument for the importance of manufacturing in America. Sponsors include Autodesk, IMCO Carbide Tools, Haas Automation and GoEngineer.
The series was made for a cable channel called MATV, which is owned by Lucas Oil Products and is targeted towards car people. It’s available on Amazon streaming, which is where I’ve been watching it.
Because computing power continues to increase exponentially, devices that once were out of reach for the general population are now becoming mainstream. I wrote about Netatmo, a device that measures temperature, humidity and sound (indoor and outdoor), here. Thanks to the internet, these devices can also be networked together to provide a real-time picture of conditions across the country, without having to rely on a weather forecast.
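The aggregation these connected devices make possible is conceptually simple: each station reports its readings, and a server combines them into a live regional picture. A minimal sketch (the station IDs, regions, and readings are hypothetical, and this is not Netatmo’s actual service):

```python
from statistics import mean

# Hypothetical readings reported by internet-connected home weather stations:
# (station_id, region, temperature_F)
readings = [
    ("st-01", "Midwest", 31.5),
    ("st-02", "Midwest", 28.5),
    ("st-03", "Northeast", 24.5),
    ("st-04", "Northeast", 26.5),
]

def regional_averages(readings):
    """Group readings by region and average them into a live snapshot."""
    by_region = {}
    for _station, region, temp in readings:
        by_region.setdefault(region, []).append(temp)
    return {region: mean(temps) for region, temps in by_region.items()}

print(regional_averages(readings))  # {'Midwest': 30.0, 'Northeast': 25.5}
```

Re-running the aggregation as new readings arrive is what turns a scattering of individual gadgets into a real-time map.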
Recently I saw an article in an MIT journal about indoor air quality, which described how cooking eggs aggravated the authors’ asthma and how, by pinpointing the source of the spike in unclean air, they were able to take specific corrective actions. The monitor is made by a company called Speck and sold for approximately $200, which seemed like a decent price point for me to join the air-quality-monitoring revolution. I am most interested in INDOOR air quality, but I will explain the broader context and then come back to the specific items I am reviewing (official measurements of outdoor air quality in the US are available from public sources).
If you want to slay the mistaken talk about the end of human employment, hold a contest. Come up with labor demand boosting ideas that we do not engage in today because we either don’t have enough people or don’t have enough money to do it. Weigh jobs that don’t require much intelligence or education as more valuable than those requiring high education/intelligence. Within a year I predict enough entries to be submitted to put the entire world to work multiple times over.
It is a bit embarrassing to think about things we are too poor to do. This makes these jobs invisible to us today. By creating a contest and an artificial market for these ideas, they become visible and we turn from despair at the jobless future to wondering how we can become efficient enough to afford to do all these wonderful things.
Let’s prototype the contest here, among friends (and a few special adversaries and maybe even some enemies), and maybe we can roll it out later on a larger scale. The winner will receive a microscopic amount of fame, and also a virtual certificate, not suitable for framing.
What are the things that we collectively and individually can’t afford–but might be able to afford given higher levels of productivity and national income–that would meaningfully affect well-being and human satisfaction? Define “things” as broadly as you like. Consider both things that could become more affordable due to productivity improvements in a specific industry, and things whose creation might not by itself be meaningfully improvable from a productivity standpoint but which people could better afford given an upward trend in overall productivity and income.
Every day, there are articles and blog posts about how quickly robots are replacing jobs, particularly in manufacturing. These often include assertions along the lines of “robots are replacing human labor so rapidly and so completely that it doesn’t really matter whether the factories are in the US or somewhere else.” There are also many assertions that robotics and artificial intelligence will triumph so completely that we must accept that we will permanently have a huge unemployed population who will need to be paid a “basic income” of some sort from the government.
This May, there were breathless headlines about how Foxconn, which is Apple’s primary contract manufacturer, was replacing 60,000 workers with robots–indeed, in some tellings, had already replaced them. If you google “foxconn 60000 workers”, you will get about 130,000 hits.
But the story is false; indeed, it did not even originate with Foxconn but rather with some local Chinese government officials who wanted to promote their area as “innovative.”
There has also been a lot of coverage of robotics at Adidas, which is trying to use automation to improve the labor productivity of shoe-making to the point that it can be done economically in high-wage countries such as Germany. This article on Adidas also cites the Foxconn “60,000 jobs” assertion.
One key pair of numbers is missing from the stories I’ve seen on the Adidas project: the ratio of human workers to shoes produced, with and without the addition of the robotics. You can’t really judge the labor-reducing impact of the project without these numbers. In this Financial Times article, Adidas is quoted as saying, entirely reasonably, that they will need to get further into production with their new factory before developing meaningful productivity numbers. The article also cites Boston Consulting Group as estimating that by “2025 advanced robots will boost productivity by as much as 30 per cent in many industries.” Thirty percent is a very significant number, but it’s a long, long way from a productivity increase that would imply that factory jobs don’t matter, or that we’re going to inevitably have a very large permanently-unemployed population.
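To make the 30 per cent figure concrete, here is a back-of-envelope calculation (the headcount is an illustrative number, not Adidas data):

```python
# A 30% productivity boost means each worker produces 1.3x as much output,
# so the labor needed per unit of output falls by 1 - 1/1.3, about 23%.
baseline_workers = 1000          # hypothetical factory headcount
productivity_multiplier = 1.30   # BCG's "as much as 30 per cent"

workers_needed = baseline_workers / productivity_multiplier
labor_reduction = 1 - 1 / productivity_multiplier

print(f"Workers needed for the same output: {workers_needed:.0f}")   # ~769
print(f"Labor reduction per unit of output: {labor_reduction:.1%}")  # ~23.1%
```

A roughly 23 per cent cut in labor per unit is real money, but it is nowhere near the near-total elimination of factory work that the “jobs don’t matter” framing assumes.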
There are a lot of very significant innovations taking place in robotics and AI, but the hype level is getting a little out of hand. And it’s important to remember that automation is not a new phenomenon. For example, a CNC (computer numerically controlled) machine tool is a robot, although it might not look like the popular conception of one, and these machines, together with their predecessor NC (numerically controlled) machines, have been common in industry since the 1970s. One thing that articles and blog posts on the topic of robotics/AI/jobs could benefit from is a little historical perspective: do today’s innovations really represent a sharp break upwards in labor productivity, or are they more of a continuation of a long-term trend? And how, if at all, is the effect of these technologies appearing in the productivity statistics?
Automated systems need to be supervised by humans, and not just any humans, as Stanislav Petrov’s story makes clear. Individuals and bureaucracies that themselves behave in a totally robotic fashion cannot be adequate supervisors of the automation. See also my post Blood on the tracks for an additional example.
Posted by Trent Telenko on 10th June 2016 (All posts by Trent Telenko)
It is amazing the things you find out while writing a book review. In this case, a review of Phillips Payson O’Brien’s How the War Was Won: Air-Sea Power and Allied Victory in World War II. The book is thoroughly revisionist in that it posits that there were no real decisive land battles in WW2. The human and material attrition in those “decisive battles” was so small relative to major combatants’ production rates that losses from them were easily replaced until Anglo-American air-sea superiority — starting in the latter half of 1944 — strangled Germany and Japan. Coming from the conservative side of the historical ledger, I had a lot of objections to O’Brien’s book starting with some really horrid mistakes on WW2 airpower in the Pacific.
However, my independent research on General MacArthur’s Section 22 radar hunters in the Philippines validated one of the corollaries of O’Brien’s thesis — namely, that Imperial Japan was a fell WW2 high-tech foe, punching in a weight class above the Soviet Union. The validation came from a digitized microfilm from the Air Force Historical Research Agency (AFHRA) at Maxwell AFB, Alabama, detailing the size, scope and effectiveness of the radar network Imperial Japan deployed in the Philippines.
The microfilm reel A7237 photoshop below is a combination of three scanned microfilm images of an early December 1944 radar coverage map of the Philippines. It shows 32 separate Imperial Japanese military radar sites, each of which usually had a pair of Japanese radars (at least 64 radars total), based upon the Japanese deployment patterns found and documented in Section 22 “Current Statements” from January through March 1945 elsewhere in the same reel.
This is an early December 1944 Japanese radar coverage map made by Section 22, GHQ, South West Pacific Area. It was part of the Section 22 monthly report series.
This Section 22-created map — compiled from dozens of 5th Air Force and US Navy aerial electronic reconnaissance missions — showed Japanese radar coverage at various altitudes. It was used by Admiral Halsey’s carrier fleet (see route E–F in the northeastern Luzon area of the map) to strike both Clark Field and Manila Harbor, and by all American & Australian military services to evade Japanese radar coverage while striking the final Japanese military reinforcement convoys, “Operation TA,” of the Leyte campaign.
Over at The Lexicans, Bill Brandt posted an item about an 8-part TV series titled ‘American Genius’…it is about a selection of inventors and entrepreneurs who have had a major impact on technology, society, and history. It sounded worthwhile and I’ve watched about half of the episodes–thanks, Bill!…definitely worth watching, but OTOH I think there are a few things in the series that should have been covered a little differently.
Edison vs Tesla is about the AC-vs-DC power wars, and correctly reports on the sleazy fearmongering tactics that Edison used in his unavailing attempt to maintain DC’s dominance. The show referred to George Westinghouse, who was Tesla’s sponsor in this battle, as “sort of a railroad baron,” completely ignoring the fact that Westinghouse was himself a major American inventor. Most people would think of a ‘railroad baron’ as someone who owns or manages railroads, not someone who invented the air brake.
Farnsworth vs Sarnoff is about the battle to dominate the emerging television industry. It was presented as a David-versus-Goliath story–though Goliath was in this case named David (Sarnoff)–individual inventor versus ruthless tycoon. Sarnoff was indeed ruthless, indeed could be fairly referred to as a prototypical crony capitalist…but it would have been interesting to point out that he wasn’t always a Goliath, wasn’t born to that position, but had in fact come to this country as an impoverished Russian Jewish immigrant and had encountered severe and career-threatening anti-Semitism on his path to Goliath-dom.
Space Race is focused on two individuals, the German/American Wernher von Braun and the Soviet rocket designer Sergei Korolev. Korolev was played by an actor who looked a little too young for the role in the period depicted. More importantly, it should have been mentioned that Korolev had been arrested and sent to the Gulag, where he lost most of his teeth to the brutal labor-camp conditions. There were psychological scars as well–Boris Chertok, who worked closely with Korolev for years, said that there was only one single time that he saw the man really happy. In a series focused primarily on the leading characters and their conflicts rather than on technical details, these things deserved to be covered.
The program refers to a successful Soviet test in 1957 of a missile with intercontinental range, shortly before the launch of Sputnik. Actually, the test was a failure because the warhead disintegrated on reentry…and reentry, while a critical factor for ICBMs, is not important at all for one-way satellite launches. The American belief that Sputnik meant all of our cities were vulnerable to Soviet missiles was a little premature–but not by much.
I thought Wernher von Braun got off too easily in this program. The show did mention that the V-2 missile was assembled by slave labor in an underground factory adjacent to a concentration camp: the truly horrific nature of V-2 manufacturing (this was possibly the only weapons system ever made that killed more people in its making than in its employment) could have gotten more emphasis, and the evidence is that von Braun was fully aware of what was going on in this place.
I’m also not convinced that von Braun was as absolutely critical to US missile and space programs as the show implies. The program to build the Atlas missile, which was developed in roughly the same time period as Korolev’s R-7, was directed by USAF General Bernard Schriever, with technology expertise provided largely by the newly-formed Ramo-Wooldridge Corporation and by Convair. I see no reason why this team could not also have conducted a Moon program, had they been so chartered.
The show does point out that von Braun, in addition to his technical and management contributions, played an important role in popularizing the ideas of rocketry and space travel…I had been unaware of his work with Disney to this end. So, in addition to being a genuine rocket scientist (and, arguably, a war criminal in at least a moral sense), von Braun was also one of the great PR men of the century.
Even with the omissions and missed opportunities, the series is still very much worth watching.
In broken-windows policing the cops go after guys who jump subway turnstiles and commit other minor crimes. This is because the policing of low-level crimes tends to lead to reductions in serious crimes. Not only are minor criminals disproportionately responsible for felonies as compared to the general population, the fact that the police are seen not to ignore the small stuff creates a virtuous cycle by deterring other crimes and increasing the public’s confidence in civic authority.
I thought of this issue when I noticed that a sophisticated Java program that I use on my PC has serious bugs that are never corrected. For example, opening an Excel tie-in in the Java program kills all of the open Excel processes on my PC. I’ve complained several times but nothing gets fixed. Meanwhile there are simple apps on my phone that get updated frequently so that annoying little problems disappear over time. The fancy Java software has many more features but which software would I rather use?
Another Chicagoboy adds: The problem is that many companies view software updates as a cost rather than a feature. Software upgrades in response to customer complaints should be a trumpeted feature, because they are a way of convincingly communicating that the company shares its customers’ values about what matters, and therefore that it’s safe for the customers to invest their time in the company’s products as opposed to competing products.
It’s steps like this that move the space program forward. Notice this wasn’t done by NASA or ULA or the ESA. It was done by a private company that didn’t exist 15 years ago. 37 minutes, including the launch, recovery of the 1st stage, and deployment of the Dragon capsule.
BTW, very cool to me that Spacex did not require the help of a traditional media company for any of this. And it’s actually much better than anything they typically produce. In addition, the people in this video are in the Hawthorne, California, SpaceX facility where these rockets are designed and produced. They designed and built this rocket. And they’re watching it perform almost real time. How amazing is that?
They are mostly Sanders supporters. And they feel oppressed by the industry that they are in, and especially by the VCs who fund the companies where they work. Here’s the complaint of a 26-year-old software engineer:
“They sell you a dream at startups – the ping-pong, the perks – so they can pull 80 hours out of you. But in reality the venture capitalists control all the capital, all the labor, and all the decisions, so yeah, it feels great protesting one.”
“Tech workers are workers, no matter how much money they make,” said another guy, this one a PhD student at Berkeley.
Now, one’s first instinct when reading this story–at least my first instinct–is to feel contempt for these whiners. Most of them are far better off financially than the average American, even after adjusting for the extremely high costs of living in the Bay area. And no one forced any of them to work at startups, where the pressures are well-known to be extreme. They could have chosen IT jobs at banks or retailers or manufacturing companies or government agencies in any of a considerable number of cities.
Looked at from a broader perspective, though, the story reminded me of something Peter Drucker wrote almost 50 years ago: