                                   NETFUTURE
    
                       Technology and Human Responsibility
    
    --------------------------------------------------------------------------
    Issue #40      Copyright 1997 O'Reilly & Associates       February 5, 1997
    --------------------------------------------------------------------------
                Editor:  Stephen L. Talbott (stevet@netfuture.org)
    
                         On the Web: http://netfuture.org
         You may redistribute this newsletter for noncommercial purposes.
    
    CONTENTS:
    *** Editor's Note
    *** Quotes and Provocations
          Chinese Cookies
          Looking up to Government
          Businesses That Grow Unprincipled
          David Kline on SLT on David Kline
    *** Is Technological Improvement What We Want?  (Part 2) (Steve Talbott)
          The Worm Was Already in the APL
    *** About this newsletter
    

    *** Editor's Note

    You may be interested to know that you are now reading a newsletter that, after review by the editors of The Scout Report (scout-report@lists.internic.net), one of the longest-standing and more serious of the Net surfing guides, was apparently judged not worthy of listing.

    On a more positive note--if I may indulge myself--the January issue of the library journal Choice names the "Outstanding Academic Books" of 1996. My own The Future Does Not Compute: Transcending the Machines in Our Midst had the good fortune to be listed among the six "Information and Computer Science" titles selected.

    Maybe I should abandon the newsletter and write another book.



    *** Quotes and Provocations

    Chinese Cookies

    Remember Tiananmen Square and the abortive coup in Moscow? We heard rumors of faxes and email messages, and were convinced that the Net, like an alien fungus, would invade every dictatorial regime, turning its nervous system and apparatus of repression into spineless jello.

    But something seems to have gone wrong. According to Robin Munro, director of Human Rights Watch/Asia, that organization now recommends to Chinese activists that they avoid email: "It's too easy for the government to monitor, and if they were to use encryption technologies they would be immediately suspect."

    This item comes from Gary Chapman's Los Angeles Times column of January 27. Chapman goes on:

    The real human rights issue in China, argues Munro, is whether or not foreign companies, particularly American computer and software firms, are selling technology to the Chinese government that will enable it to tighten its grip on society.

    Chinese cookies, it seems, may prove fatal. It's not the MSG, but it just may be the MS Inside.

    Looking up to Government

    According to Microsoft's Bill Gates and Intel's Andy Grove, the proper way for government to support technological development is as a role model. Speaking to the current World Economic Forum in Davos, Switzerland, the two chief executives said that government's use of technology internally and in schools sets a standard for the rest of us.

    In absolutely unrelated news, the IRS's $4 billion computer modernization program has now been officially pronounced a failure by the IRS itself. IRS computers "do not work in the real world" and the agency admits that it doesn't have a clue how to get there from here. But it also acknowledges that it is now "wholly dependent" on its computers. The agency is proposing that it contract out the processing of paper tax returns, while attempting piecemeal improvement of its computer systems.

    Schools are scheduled next for role modeling.

    (News from Toronto Globe and Mail and N.Y. Times, via Edupage.)

    Businesses That Grow Unprincipled

    I spoke in the last issue about the reversal of values that occurs so easily in contemporary businesses: the pursuit of worthwhile aims under the discipline of financial controls eventually becomes a pursuit of financial ends for which products, services, and employees are considered the means.

    It needs adding how easily this reversal occurs under the influence of the computer. Computational logic crunches the numbers and displays the results coherently and precisely in convenient spreadsheets. It is effortless on our part: just feed in the numbers and read off the "big picture." To wrestle with this picture, on the other hand, so as to make it serve independently chosen ends--this requires an overcoming of inertia, a transcendence of the natural logic of the tools around which the business has been organized, and a heroic inner effort.

    The tools can be made servants of our purposes. But it is never easy. What provokes this line of thought for me is an article on book superstores in Publishers Weekly (January 6). One of the concerns voiced about these stores is that their buyers "tend to use computers to weed out slow-selling titles no matter what their long-term potential." Or intrinsic worth, one might add.

    Yes, computers are superb at this. Why not use the tool if you have it? It's awfully hard to do something that doesn't make "financial sense" when the numbers are staring at you right there on the screen. One has to recollect oneself and ask, "Now what was it I got into this business for?" (It has, of course, been my central theme in NETFUTURE from the beginning that the computer requires this sort of self-recollection at every point of our contact with it.)

    There are small publishers and booksellers whose businesses are conviction-driven. Their efforts may be rooted, for example, in environmental concerns, or religion, or women's issues, or politics. Or they may do a more general business. But, in any case, what they sell matters. How, then, does such a business suffer the reversal of values I've been talking about?

    One path of reversal has become very clear to me: the company allows itself to grow where it shouldn't, and therefore becomes dependent upon an unprincipled and unpredictable market, which it now must follow slavishly. I am not at all suggesting that the company should refuse to sell to those who do not share its concerns. But suppose that, in a particular year, an unusual success in selling to a wider market brings in especially high profits. If the company uses those profits to grow itself beyond what its own, more principled market can reliably support on a sustained basis, it has now built into its operations a requirement to repeat its previous success. Which is to say, it has to generate more hits in the larger, unprincipled market. Which is again to say that it must go wherever the fickle market goes, rather than serve its own, original purposes.

    The alternative would have been to use the unexpected windfall to secure and strengthen its principled business--which is normally not a terribly profitable one! By keeping this business closely linked to a conviction-driven group of customers--those who buy the product because of the values it represents--the tendency toward unhealthy growth is naturally reined in.

    In sum, chase after whatever market happens to be available, growing your business willy-nilly as you tap into the fads of the moment, and you will end up a servant of these fads. The market will whip you around, and eventually--however many times you have managed to call things correctly--you will misjudge the next wave. By then, the livelihood of thousands of employees may be at stake.

    Yes, this whirling-dervish economics is the fastest way to achieve what we call technical progress. It is also the driving essence of the high-tech industry today, which has pressed the reigning dynamic so far, and based it upon such thorough automatisms, that to ask "What worthwhile social ends is this or that product designed to serve?" is unavoidably to draw a blank stare. The question scarcely has any meaning in this context. And, anyway, the product represents technical progress, doesn't it?

    But--hard as it is for us to grasp this--technical progress is not the same thing as human welfare. Our welfare has more to do with the gifts we gain through the inner mastery of technology--including mastery of the technology's distorting influence upon our business purposes.

    Another article in that same issue of Publishers Weekly contains a lament by Charlie Winton of Publishers Group West:

    One of the things that makes it difficult now is the way the business is going. There's no real humanity in it now, not the same sort of pleasure in doing business as there used to be. I just don't want to hear one more bookseller talk about the shareholder values they perceive in their business now, instead of the love of books. The book business always used to have something of a magical quality; that was a critical element for all of us involved in it, and that's what is missing now.

    We can probably get by in the end without the magical quality. And even without the romanticization of the past. But not without some principles.

    David Kline on SLT on David Kline

    Anyone who makes it part of his business to puncture the various sillinesses abroad in Netland puts himself in a dangerous position. One danger is arrogance, and another is playing too fast and loose with the names and reputations of other people. Some targets, of course, ask for it, and will be disappointed if their grandstanding doesn't draw attention, even if that means coming under heavy fire. Others do their best to speak the truth, and ask only that they be judged according to their words and actions.

    David Kline, I think, is of the latter sort. An article of his was the object of some derision in my "Quotes and Provocations" piece entitled "Who Is Embedding Whom?" His remarkably well-tempered response immediately made me painfully aware that I hadn't properly disentangled my judgments about the article from judgments about the author--and that on the latter score I had undoubtedly blown it. I am sorry for this. I thought Kline's message to me was also worth sharing with readers. First, however, here are the last few sentences from the article to which he refers in his message:

    When it comes to an enterprise as far reaching in its social and economic implications as the so-called information highway, it is imperative that we make conscious choices about how and in whose interests it will be financed, built, and operated. We are dealing, after all, with a technology that has the potential to become either a profoundly liberating and revitalizing force in society or a grave threat to personal liberty and the human spirit.

    It is not recommended that we trust the outcome solely to the corporate accountants and investment bankers.

    The article is drawn from Road Warriors: Dreams and Nightmares along the Information Highway, co-authored by Kline and Daniel Burstein.
    
     *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *
    
    
    From: David Kline
    To: stevet@netfuture.org
    Subject: Re: NETFUTURE #39

    Steve,

    I'm a columnist myself, so I well understand the joy of nailing someone for something they've written. Only rarely do I get it wrong -- and that's usually when I'm criticizing someone for something they did NOT say.

    I think you've made the same mistake re: my "Embedded Internet" article -- i.e., criticizing me for what I failed to mention -- and I think you'd really be quite surprised to see where I really stand on social issues. I didn't get canned from HotWired and then booted off the masthead of Wired as contributing writer for nothing, you know. If nothing else, I'm known for fighting techno-utopians like George Gilder and Wired.

    What's my view on the relationships between future possibilities and social and economic realities, between people and technology, and between a marketplace and a genuine civilization? Check out:

    www.hotwired.com/wired/4.01/departments/electrosphere/government.html

    Like I said, you'd be surprised.

    Anyway, there's no need to make any sort of correction. Just thought that, like me, you'd appreciate knowing where you might have made an error.

    David Kline

    
     *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *
    
    
    From: stevet@syntax (Steve Talbott)
    To: David Kline
    Subject: Re: NETFUTURE #39

    David --

    > Anyway, there's no need to make any sort of correction. Just thought
    > that, like me, you'd appreciate knowing where you might have made an
    > error.

    Well, given the absolutely perfect message you composed at what must have been a rather vexing moment, I'm inclined to think I must have made an error.

    I've downloaded your essay for more convenient reading. Meanwhile, I would like to publicly acknowledge your gracious response, and the best way seems to me to be to print your message, in toto, in the next issue of NETFUTURE. (And not in order to stick pins in it!) May I have your permission to do this?

    Beyond that, I have to say that your message arrived precisely at a moment when I was descending into a bit of private agonizing and questioning over just what it is I'm doing with NETFUTURE. I'm not sure what will come out of it, but you sure as heck have helped to congeal my thoughts. You have my thanks.

    Steve



    *** Is Technological Improvement What We Want? (Part 2)
    
    From Steve Talbott (stevet@netfuture.org)
    

    In Part 1 of this series I tried to show how technical improvements in the intelligent machinery around us tend to represent a deepened threat in the very areas we began by trying to improve. This, so long as we do not recognize it, is the Great Deceit of intelligent machinery. The opportunity to make software more friendly is also an opportunity to make it unfriendly at a more decisive level. I illustrated this by citing, among other things, the automated telephone-answering system.

    The underlying problem, I suggested, was a mismatch between the technically conceived improvements and the level at which our real problems occur. There are many other places to look in order to illustrate this mismatch. But in this installment, I have chosen to inquire whether the problem is reflected in programming languages themselves.

    
                  IS TECHNOLOGICAL IMPROVEMENT WHAT WE WANT?
    
                       The Worm Was Already in the APL
    
    My point has been that a technical advance typically sharpens the challenge that was presented to us by the original technical limitation. It is not that our situation must worsen. But our predilections toward abuse of the technology, as expressed in the earlier problem, must now be reversed in the face of much greater temptation. Where we were failing with the easier challenge, we must succeed with a harder one. The company in possession of a new generation of telephone-answering software must look to its mission statement with redoubled seriousness.

    But the best intentions are difficult to execute when the Great Deceit is built into the software itself. We need to recognize the deceit, not only in the various software applications, but in the essence of the software enterprise. Software, of course, is what drives all intelligent machinery, and it is created through the use of programming languages. Perhaps the greatest single advance in programming occurred with the switch from low-level to high-level languages. Did this switch amount to progress, pure and simple, or can we recognize the Deceit here at the very root of the modern technological thrust?

    The lowest-level machine language consists of numbers, representing immediate instructions to the computer: "carry out such-and-such an internal operation." It's not easy, of course, for programmers to look at thousands of numbers on a page and get much of a conceptual grip on what's going down. But through a series of steps, higher-level languages were created, finally allowing program code that looks like this:

          do myexit(1) unless $password;               # no password given: quit
          if (crypt($password, $salt) ne $oldpassword) {
              print "Sorry.";                          # encrypted passwords do not match
              do myexit(1);
          }
    

    Each line of such code typically represents--and finally gets translated into--a large mass of machine code. These more powerful lines may still look like Greek to you, but to the programmer who has struggled with low-level languages, they convey, with read-my-lips clarity, assurance of a drastic slash in the mental taxation of program writing.

    Obviously, high-level languages enhance the programmer's technical power. It is far easier to write code when you can employ the concepts and terminology of the human domain within which the program will function. But this heightened technical power dramatically increases the risks. The more easily we can verbally leap from a human domain to a set of computational techniques, the more easily we fall into the now more effectively camouflaged gap between the two. The telephone company programmer who writes a block of code under the label, "answer_inquiry", is all too ready to assume that the customer's concern has been answered, even if the likelihood is that it has not even been addressed.
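
    The point is easy to make concrete. Here is a minimal, purely hypothetical sketch, written in Perl like the fragment above, of what might sit behind such a label. The routine name, the keywords, and the canned responses are all my own inventions, not any actual company's code:

          use strict;

          # Canned responses, keyed by the only thing the routine can
          # "see": a keyword in the customer's words.
          my %canned = (
              'bill'    => "Your balance is available by pressing 1.",
              'repair'  => "A technician will be dispatched within 48 hours.",
              'service' => "New service orders are taken from 9 to 5.",
          );

          # The label promises that an inquiry gets answered; the code
          # merely matches a keyword and emits the canned sentence.
          sub answer_inquiry {
              my ($question) = @_;
              foreach my $keyword (keys %canned) {
                  return $canned{$keyword} if $question =~ /$keyword/i;
              }
              # Whatever the categories fail to anticipate lands here.
              return "We're sorry; please call back during business hours.";
          }

          print answer_inquiry("Why was my repair order cancelled?"), "\n";
          print answer_inquiry("Your recording cut me off three times!"), "\n";

    The routine "answers" every inquiry in the sense that it always returns a sentence; whether the caller's concern has even been addressed is a question the code cannot so much as represent.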

    The risk here is far from obvious in all its forms. It derailed the entire discipline of cognitive science, whose whole purpose is to understand the relation between the human and the computational. The derailment finally produced one of the classic papers of the discipline, entitled "Artificial Intelligence Meets Natural Stupidity." In it Drew McDermott bemoaned the use of "wishful mnemonics" like UNDERSTAND and GOAL in computer programs. It would be better, he suggested, to revert to names more reminiscent of machine code--say, G0034. Then the programmer might be forced to consider the actual relationship between the human being and the logical structures of the code.

    As AI progresses (at least in terms of money spent), this malady gets worse. We have lived so long with the conviction that robots are possible, even just around the corner, that we can't help hastening their arrival with magic incantations. Winograd...explored some of the complexity of language in sophisticated detail; and now everyone takes "natural-language interfaces" for granted, though none has been written. Charniak...pointed out some approaches to understanding stories, and now the OWL interpreter includes a "story-understanding module." (And, God help us, a top-level "ego loop.")

    McDermott wrote those words in 1976. But while the problem is now almost universally acknowledged, it remains endemic to the discipline, subtly eluding even the efforts by more philosophically minded practitioners to impose conceptual rigor upon the field.
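
    McDermott's renaming test is easy to stage. In the invented fragment below (mine, not his), the routine behaves identically under either name; only the wishful name tempts us to credit the program with comprehension:

          use strict;

          # One trivial keyword check under two names.  Nothing in the
          # program changes when the wishful name gives way to a neutral
          # one; only the reader's expectations change.
          sub understand_story { return $_[0] =~ /dragon/i ? "a quest narrative" : "unknown"; }
          sub g0034            { return $_[0] =~ /dragon/i ? "a quest narrative" : "unknown"; }

          print understand_story("The knight slew the dragon."), "\n";
          print g0034("The knight slew the dragon."), "\n";    # same output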

    There is no question that high-level languages represent technical progress. The programmer gains vastly greater power to program. But this power arises from an ever more illusory match-up between the routine "speech" of the programmer and the terms of real life. As I have begun to suggest already and will argue further, more and more of human existence disappears into the abyss hidden beneath the illusion. As we adapt to the programmatic structuring of our phone calls, we get better at reconceiving our business according to the predefined categories of the answering system; at the same time, we learn not to bother with nonconforming calls. Our world shapes itself to the software. Eventually, the programmer's "answer_inquiry" becomes what it means to answer an inquiry.



    *** About this newsletter

    Copyright 1997 by The Nature Institute. You may redistribute this newsletter for noncommercial purposes. You may also redistribute individual articles in their entirety, provided the NetFuture URL and this paragraph are attached.

    NetFuture is supported by freely given reader contributions, and could not survive without them. For details and special offers, see http://netfuture.org/support.html .

    Current and past issues of NetFuture are available on the Web:

    http://netfuture.org/

    To subscribe or unsubscribe to NetFuture:

    http://netfuture.org/subscribe.html.

