                                     NETFUTURE
    
                        Technology and Human Responsibility
    
    --------------------------------------------------------------------------
    Issue #133                                                   June 25, 2002
    --------------------------------------------------------------------------
                     A Publication of The Nature Institute
               Editor:  Stephen L. Talbott (stevet@netfuture.org)
    
                      On the Web: http://www.netfuture.org/
         You may redistribute this newsletter for noncommercial purposes.
    
    Can we take responsibility for technology, or must we sleepwalk
    in submission to its inevitabilities?  NetFuture is a voice for
    responsibility.  It depends on the generosity of those who support
    its goals.  To make a contribution, see http://netfuture.org/support.html .
    
    
    CONTENTS
    ---------
    
    Editor's Note
       On Conversing with Kevin Kelly
    
    Are Machines Living Things? (Kevin Kelly and Stephen L. Talbott)
       Of rats and cyborgs
    
    DEPARTMENTS
    
    Correspondence
       Grief for Mechanized Rats (Michael Knowles)
       There Are Good Uses for the Computer in Education (Roger Palfree)
       Distance Education Is Alive and Well (Jan Whitaker)
    
    About this newsletter
    
    
    ==========================================================================
    
                                  EDITOR'S NOTE
    
    
    On Conversing with Kevin Kelly
    ------------------------------
    
    Two people caught up in an argument can find themselves in either of
    two very different situations.  If they share most basic assumptions,
    so that it is easy to arrive at clear-cut, mutually accepted definitions
    of terms, then facts and logic come to the fore.  The argument is mostly
    a matter of filling in missing information and identifying unrecognized
    inconsistencies in one position or the other.
    
    But there is a whole other challenge when the disputants do not share
    assumptions and their meanings are subtly divergent.  Then, before the
    task of engagement can be reduced to a factual and logical exercise, each
    person must find a way to make his terms conceivable to the other.
    This typically requires the use of metaphor; only through a kind of
    cognitive leap can we grasp a new meaning.
    
    Francis Bacon summarized the two cases this way:
    
       Those whose conceits are seated in popular opinions, need only but to
       prove or dispute; but those whose conceits are beyond popular opinions,
       have a double labor:  the one to make themselves conceived, and the
       other to prove and demonstrate.  So that it is of necessity with them
       to have recourse to similitudes and translations [that is, metaphors]
       to express themselves.
    
    I think you will agree, after reading the feature in this issue, that
    Kevin Kelly and I face the problem of making ourselves conceived by each
    other.  What happens in all such cases is that the two sides risk
    continually "talking past each other", because they are employing
    different languages.
    
    But while this may sound like a fruitless exercise, it can be much more
    useful to an active, critical reader than the more routine case where
    fundamental meanings are shared.  After all, there is less cognitive gain
    in recognizing a logical slip than in discovering an aspect of reality, a
    point of view, a quality of the world you have never glimpsed before.
    
    But the gain requires a great deal of work.  In particular, it requires
    the extraordinarily difficult metaphoric leap that carries you from the
    more familiar perspective to the other, still-rather-foreign perspective.
    Until you gain a full appreciation of both worlds of meaning, you
    can hardly weigh the differences between them.  And there is no mere
    accumulation of information that can bring this appreciation.  Information
    is more like the problem to be overcome, since it already embodies the
    reigning assumptions and meanings — Bacon's "popular opinions".
    Moreover, it embodies these in their most coercive, because least
    conscious, form.
    
    I hope you find the exchange in this issue of NetFuture worthy of your own
    effort to search out new meaning.
    
    SLT
    
    
    
    ==========================================================================
    
                           ARE MACHINES LIVING THINGS?
    
                        Kevin Kelly and Stephen L. Talbott
                         (kk@kk.org; stevet@netfuture.org)
    
    In NetFuture #132 I wrote about recent experiments with rats.  Scientists
    implanted electrodes in the brains of these animals and were then able to
    steer them at will by sending electrical signals.  The rats could even be
    made to navigate areas they would otherwise avoid.
    
    I juxtaposed this news item with another about efforts to make robots more
    human-like and "companionable".  In response to my comments Kevin Kelly
    wrote the following brief note, which then kicked off another round of our
    ongoing dialogue.  For the previous installment, see NetFuture #130.  SLT
    
    
                              ---------------------
    
    KEVIN KELLY:  You wrote,
    
       Why in the world would we want to engage in the ditzy exercise of
       pretending a robot is a living being, when we're also engaged in the
       dead-earnest exercise of converting living beings into robots?  But I
       guess the real meaning of both exercises is the same:  to train
       ourselves in losing awareness of any distinction between robots and
       living beings.
    
    That's the negative way of saying something extremely positive: "to train
    ourselves in gaining awareness of how robots and living beings are alike."
    See Out of Control for a whole book on how machines and organic
    beings are two examples of the same phenomenon.  This is not just
    philosophical. In about 100 years or less there won't be much difference
    between certain engineered life and certain life-like machines. Instead of
    binary distinctions the range of beings will be one continuum, as your
    posts make clear is already happening. To clarify, there will always be
    many beings joyfully inhabiting the extreme ends of the continuum; wild,
    organic, "natural" life at one end, and cold, steely, dumb mechanical
    machines at the other. But more and more of the world will be comprised of
    the 'tweenings:  life that has been given some of the control of machines,
    and machines that have been given some of the freedom of life.
    
    
                              ---------------------
    
    STEVE TALBOTT:  Yes, one can legitimately investigate what is machine-like
    in the human being.  But when for several hundred years a culture has
    progressively lost its ability to see what is non-machinelike in the human
    being; when technology veils from view the natural world so that almost
    all our activity is mediated in one way or another by machines; when
    "official" science proscribes research that in any way transcends a
    mechanistic model, making such research unfundable; and when all this
    commitment to mechanism leads to a continual flirting with environmental
    disaster for the entire living earth — well then, I don't understand
    why your further promotion of the limiting, machine-human analogy should
    be seen as the positive stance, while my suggestion that we recover a
    fuller understanding of the human being and of nature is dismissed as the
    negative stance.
    
    Savoring the satisfactions of the inventor, you talk about the hybrids we
    can make, but say nothing about the creatures caught in the cross-fire of
    our making.  What about the plight of the remote-controlled rat?  And what
    about the human being who is subjected to the same values and practices so
    clearly evidenced in our treatment of the rat?
    
    
                              ---------------------
    
    KK:  Regarding your first paragraph:  I can't put my finger on it, but I
    found this the most persuasive argument you've made yet in my hearing.
    This may work for me because you implicitly acknowledge the machine view
    — which is really the point of my responses.  I will quibble with
    certain phrases, but I can let them pass.  What I hear — and correct
    me if I am wrong — is a plea to restore a balance in our views; to
    resurrect the non-machine view of humans, of science, of our surroundings,
    etc., so that this well-seasoned, wise, and powerful view might stand as
    high in our esteem as the mechanical view, which has ascended with the
    rise of technology.  Where I may part with you is if you claim the organic
    is a superior view.
    
    I am not sure who speaks for rats other than rats themselves.  If I were a
    rat I might enjoy having an implant for the sheer novelty of it, just as I
    am certain many humans will take implants themselves so they can
    experience something different, or to extend their sense of self.  Of
    course no human knows what a rat thinks or feels, but with implants we
    actually may know one day.  Rats may want to have nothing to do with us
    and be left alone, or they may want to have everything to do with us and
    want a chance to be something different.  Or both.
    
    Certain humans will benefit tremendously by the experiments we are now
    conducting in rats, just as many humans benefit tremendously by the
    experiments we conduct on humans.  It is not difficult to imagine science
    discovering a way to enable someone crippled to walk using the kind of
    results initially found by implanting electrodes in the brains of rats.
    There is some benefit to rathood itself in this research, too, although
    those benefits don't apply to all rats.
    
    
                              ---------------------
    
    ST:  I sympathize with your interest in preserving multiple views of a
    thing.  That is exactly my own concern.  The problem with your "mechanical
    view", in fact, is that historically it represents the refusal to
    accept other, fuller views.  Those who say "the organism is a machine"
    have said it precisely in order to deny that the organism is anything more
    than a machine.  Therefore they have had to use twisted, inadequate
    concepts so as to reduce the organism to their preconceptions.  How do you
    borrow these twisted concepts without also borrowing their untruth?
    
    Let me give an example.  It was all too natural for proponents of the
    mechanical view to imagine the organism as compounded of reflexes, and to
    search for clear, simple reflexes as paradigm cases.  The underlying idea
    is that "the organism represents a bundle of isolable mechanisms that are
    constant in structure and that respond, in a constant way, to events in
    the environment (stimuli) .... The aim of research, according to this
    conception, is to dissect the behavior of the organism in order to
    discover those 'part processes' that can be considered as governed by
    mechanistic laws and as unambiguous, elementary reactions to definite
    stimuli".
    
    Those are the words of the eminent neurologist Kurt Goldstein, whose
    classic work, The Organism, demolished the reflex theory.  Goldstein
    showed that slight changes in the intensity of a
    stimulus can often reverse a reflex; a reflex in one part of a body can be
    altered by the position of other parts; an organism's exposure to certain
    chemicals can reverse a reflex; other chemicals can completely change the
    nature of a reflex; fatigue can have the same effect; consciously trying
    to repress a reflex can accentuate it (try it with your "knee-jerk"
    reflex); and so on without end.
    
    Needless to say, Goldstein's work was largely ignored by a science bent on
    mechanistic reductions.  (The recent re-issue of The Organism with
    an introduction by Oliver Sacks is an encouraging development.)  But
    Goldstein's point nevertheless stands:  the machine view of the organism,
    which assumes that parts can be isolated from the whole and satisfactorily
    elucidated in that way, continually falsifies our understanding.
    
    So, yes:  I certainly do want to say that an organic view of animals and
    humans is superior to a mechanistic view — overwhelmingly superior.
    It is superior because it avoids the radical untruth of the mechanistic
    view.  If there is no such thing as a "reflex mechanism", it is because
    there is no such thing as a mechanical organism.  The most we can say is
    that, by doing our best to isolate certain parts of the organism from the
    rest, and by viewing only restricted aspects of the part's functioning,
    and in general by ignoring everything that makes the organism an organism,
    we can arrive at extremely rough approximations to various mechanical
    elements.  These approximations may indeed be instructive for some
    purposes, as long as we continually remind ourselves of their limitations.
    But you will be frustrated if you try to find any such reminders in the
    conventional literature.
    
    Finally, I am disturbed by your casual unconcern for that remote-
    controlled rat.  It's not that I think one can formulate any absolute
    rules about what is permissible in such situations.  But the only way for
    us to gain a basis for decision-making is to approach the rat as best we
    can on its own ground and enter into respectful "conversation" with it.
    (See "Ecological Conversation" in NF #127.)  As things are, I have this
    half-comical image of a technician sitting at his keyboard and enjoying
    the exhilarations of a video game as the frenetic rodent he is controlling
    compulsively dashes across the floor this way and that — with you on the
    sidelines blithely commenting, "Maybe the rat is enjoying it".  This is
    not the conversation I have in mind.
    
    I suggest the following as a possible starting point for any assessment of
    the experiment's meaning for the rat.  Every organism strives to express
    its own wholeness; its health entails being more or less at one with
    itself so that what is going on in each part harmonizes with and gives
    distinctive voice to what is going on throughout the whole.  But the
    effect of the robo-rat experiment is to set the rat at war with itself
    — to forcibly create, in violation of the unity of its being, a set
    of response mechanisms sufficiently isolated from the whole as to allow
    effective manipulation from outside.
    
    At first blush, this manipulative, arbitrary, and disruptive invasion
    seems hard to reconcile with any respectful stance toward the rat as a
    being with its own meaning and its own coherent life to live.  But, of
    course, this problem is completely hidden from anyone assuming the machine
    view, because a machine just is a collection of isolable mechanisms
    without any particular meaning or wholeness of their own.
    
    
                              ---------------------
    
    KK: "Every organism strives to express its own wholeness".  I find myself
    disagreeing with this, your most fundamental assumption.  I don't disagree
    with it because it is mystical, which it most certainly is.  (If it is
    not, I don't know what a mystical statement would be.)  I disagree with it
    because I find it meaningless, or I should more politely say, because I
    can't find meaning in it; that is, I don't understand what you are trying
    to say by saying it.  It would be far more accurate to say "Every organism
    strives to survive."  That would be true.  When I think of a hydra
    squirming in a pond, that is what it is striving to do.  I would even go
    on to agree with this statement: "Every organism strives to maintain its
    own wholeness," as in, keeping its systems intact.  I'm just not sure what
    it means for a lichen to "express" its own wholeness.  When you say "its
    own" do you mean of this particular piece of lichen, or of this particular
    kind of lichen?
    
    I am beginning to suspect that "wholeness" to you is a vital force, that
    mysterious X factor that living things supposedly have which non-living
    things don't.  I would be willing to bet that you don't believe that a
    machine can have "wholeness of its own", or certainly that if it did,
    it could not ever "express its own wholeness."  Wholeness is your code
    word for the differential separating life from non-life.  Is this correct?
    
    
                              ---------------------
    
    ST:  I certainly do take wholeness — or, rather, a particular sort of
    relation between part and whole — as offering one way to distinguish
    living things from non-living.  This does not, on my view, have anything
    to do with a vital force or mysticism.
    
    But may I delay a rather lengthy explanation so as not to let drop an
    earlier remark of yours?  You said that the electronically manipulated rat
    might lead us to the kind of knowledge enabling the lame to walk.  This,
    of course, is the standard sort of advertisement for prospective
    technologies.  You can always propose some such good.  I don't
    believe it's possible to conceive any technology, however horrendous, for
    which we cannot imagine a good use.  Clearly, such imaginings by
    themselves are not enough to guide us through the thickets of
    technological choice we face today.  There is no criterion here for
    rejecting any particular choice.
    
    I don't see how to assess the rat experiments except by beginning with the
    relationship between the experimenters and the rats themselves.  If this
    relationship has a moral dimension, then that must be our starting point.
    But perhaps you are of the view that there is no moral dimension here,
    because the rat is a machine?  If so, I would like to know it.  If, on the
    other hand, we both grant the moral aspect, then in making our judgments
    we need to keep in mind how dangerous it is to commit a moral abuse in one
    place in order to gain a benefit in another.
    
    We cannot adequately justify the rat experiments either by professing
    ignorance of the rat's being (ignorance would only counsel us not to act
    unnecessarily) or by imagining future benefits.  The imagining may be
    helpful, but we also have to enter deeply and sensitively into our
    transactions with the rat itself and ask, "What are the moral qualities of
    this exchange between two beings?"
    
    I'm not suggesting that the answer will be simple.  I'm only pointing out
    that the question isn't even being asked.  And it doesn't seem to me to be
    present in your responses.  Am I missing something?  Or is this absence
    required by the "machine view"?
    
    
                              ---------------------
    
    KK:  You ask, "What are the moral qualities of this exchange between two
    beings?"  It is a good question.  But I'd like you to clarify the
    question, because as you are using the term, I suspect that "moral" has a
    circular definition.
    
    Does the following question make any sense?:
    
       What are the moral qualities of an exchange between two machines?
    
    Or is "moral", like "wholeness", only some mysterious quality that can
    occur in what you would call living beings?
    
    So what do you mean by moral dimension?
    
    
                              ---------------------
    
    ST:  It seems enough for present purposes to note that the world is moral
    so far as it poses questions of right and wrong for us.  Sure, this would
    involve me in circularity, inasmuch as it's probably impossible to define
    "right" and "wrong" without importing some idea of morality.  But this is
    the kind of circularity that every fundamental notion leads to.  (Try
    defining "truth".)  The fact is that almost everyone has at least some
    sense of morality, or right and wrong, and that's what I'm appealing to;
    it's as much an elementary "given" for us as the brute fact of the
    perceptual world is a given.  And, no, I don't think machines confront
    moral issues — because nothing is "given" to them in this
    sense; they are not conscious.
    
    But leave my views aside for the moment.  I would like to know how
    you, according to your own definitions, view the moral qualities of
    the exchange between those scientists and their rats.
    
    
                              ---------------------
    
    KK:  Our moral responsibilities toward other beings scale to the
    complexity of their being, and are in proportion to their own abilities to
    be moral.  We have a different relationship and obligation to an E. coli
    germ than we have to a gorilla, and different again to a robot or a tuft
    of grass, because these beings all have different capacities of action,
    communication, contemplation and awareness, and because they have
    different relations to us.  We use rats for research because, as mammals,
    they share some of our cognition, but we also use them because they are
    among the simplest beings that share anything significant with us.  We owe
    the particular individual rats we use as little suffering as we can
    manage, and the maximum comfort we can afford.  We owe the rat race the
    opportunity to evolve.
    
    
                              ---------------------
    
    ST:  I would answer rather differently, but — fair enough.  Picking
    up on "opportunity to evolve":  I don't see how this can have much meaning
    without some sense for the kind of being the rat is.  That is, our
    understanding of where rats might reasonably go, evolutionarily, depends
    on our understanding of who they are.  And this leads right back into our
    dispute over the nature of the organism.  So I see no alternative to my
    addressing at length your basic concern that I'm founding everything on
    some vague, inaccessible sphere of mystery from which I magically produce
    concepts like "wholeness", "organism", and "morality" on demand.  So be
    it.
    
    Taking strong issue with my statement that "every organism strives to
    express its own wholeness", you prefer the alternative, "every organism
    strives to survive".  But what does your formulation say, beyond "every
    organism strives to keep on striving"?  It's an empty statement.  Yes,
    it's true:  any organism that got into the habit of gnawing on its own
    liver or dancing into the lair of its predators is not likely to be here
    today.  To some degree or another all traits have to pass the negative
    test of not leading the organism to destruction.
    
    But this tells us nothing about the positive character of the organism.
    Conventional biology avoids this problem simply by not looking at
    the character of the organism.  Such looking requires a qualitative and
    non-mechanical approach and, whether we like it or not, if this approach
    presents us with a coherent organism — an organism in which,
    physiologically, morphologically, and behaviorally, every part positively
    "speaks" in the unified and distinctive voice of that species — well,
    then, the standard biological explanations based on random mutation and
    natural selection are hopelessly inadequate.  You can no more get from
    those explanations to the qualitative unity of the organism than you can
    get from the distinctive style of Van Gogh to that of Picasso through a
    random and mechanical process of pixel substitution.  You can, of course,
    deny that organisms have any such unitary character, but this is hardly
    seemly when you have refused to look at the organism with the qualitative
    eye of the artist.
    
    (NF #97 contained an example of a "whole-organism study" of the sort that
    aims to get at this qualitative unity.  The piece, written by Craig
    Holdrege, was entitled, "What Does It Mean to Be a Sloth?".)
    
    All of which leads to your question:  Is "wholeness" my code word for
    whatever separates life from non-life?  I wouldn't say "code word";
    "wholeness" is the proper name for an idea that can be reasonably
    explicated.  The particular view of wholeness I will articulate traces
    back to Kant, and was worked over by Goethe, Coleridge, and Rudolf
    Steiner.  Here is how I would summarize certain insights stemming from
    this tradition:
    
    Think first of human speech, or a text.  In reading the initial words of a
    sentence or paragraph, we find ourselves immediately grasping for the
    thought of the whole.  ("What is this about, and where is it going?")  As
    we proceed, and as the meaning of the whole comes into ever clearer focus,
    we discover this whole working into and transforming the individual words
    we read.  It is well known that you cannot make sense of any profound text
    "from the bottom up", simply by importing the dictionary definitions of
    the words and adding them together.  These words have to become the
    bearers of a governing idea or intention that now shines through them.
    And, by virtue of its participation in this intention, each new word in
    turn shines through all the other words, subtly shifting their meanings.
    So you don't have neatly given, determinate parts (words) entering into
    purely external relationships.  The part itself only comes into existence
    — that is, only becomes this particular part, or word — through
    the expressive agency of an antecedent whole (the meaning of the passage).
    Until then, the word has no adequate definition.
    
    This points us toward the kind of dynamic relation between part and whole
    characteristic of organisms.  A key point is that an organic whole
    manifests itself within each of the parts; they only become what they are
    by virtue of the activity of an antecedent whole.  Of course, this begs
    for illustration from an actual organism.  But for the moment let me
    explain further by contrasting this organic principle of wholeness with
    the mechanistic one.
    
    The organizing idea of the machine — its functional wholeness —
    is imposed from without through the arrangement of parts whose nature
    remains static.  These parts are not transformed through their
    participation in the whole.  We arrange them with the overall idea
    in mind, and the resulting external relationships between the given, well-
    defined parts are sufficient to specify the machine.  That is, everything
    we need to know in order to understand the determinate functioning of the
    machine is available to us in the evident relations we have given to the
    parts.
    
    So the machine does have a wholeness of its own peculiar sort, but we are
    the ones who have "striven to express it".  Further, its parts remain
    precisely what they are even without reference to the functional idea of
    the machine as a whole; remove a part from the machine, and its external
    relations will be lost, but the part itself will remain essentially just
    what it was.
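
    To make the contrast concrete, here is a deliberately toy sketch (in
    Python, and offered only as an illustration):  the "machine" below is
    nothing but an arrangement of fixed, well-defined parts, and a part
    removed from the arrangement remains exactly the function it always was.

       # A toy "machine":  each part is a fixed function with a definite
       # input/output behavior, and the whole is merely their external
       # arrangement.

       def grind(grain):              # part 1: fully defined on its own
           return grain * 0.9

       def sift(flour):               # part 2: likewise self-contained
           return flour * 0.95

       def mill(grain):               # the "whole": an arrangement of parts
           return sift(grind(grain))

       # Detach a part from the arrangement and it remains just what it was.
       print(mill(100.0))             # 85.5  (the assembled machine)
       print(sift(100.0))             # 95.0  (the detached part, unchanged)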
    
    In Coleridge's pithy summary:  what is "organized" from without is a
    mechanism; what is "mechanized" from within is an organism.  And NetFuture
    reader Peter Kindlmann, a professor of engineering design at Yale
    University (whom I would not want to saddle with all my own views), was, I
    think, getting at a crucial aspect of the matter when he described how the
    engineer partitions "a larger whole into functional modules, each
    described by an input/output 'cause and effect' behavior".  There you see
    the machine conceived (correctly) as a collection of parts (modules) with
    clearly defined external relations.  I would add that every machine,
    "modularly designed" or not, must, at some level, be analyzable in exactly
    those terms.
    
    But, Kindlmann continues, "nature does not 'design' this way".  Instead,
    it offers
    
       a total fusion of function and form that we are right to admire
       aspiringly, but can seldom take as a direct lesson [for engineering].
       A blade of grass is a totally integrated system of structure, fluid
       transport and chemical reactor.
       (http://www.yale.edu/engineering/eng-info/msg00807.html)
    
    This "total fusion of function and form" signifies that there is no way to
    partition the organism into cleanly separated modules whose purely
    external relationships tell the entire functional story of the organism.
    The parts interpenetrate each other, and do so in a manner whereby the
    whole is revealed as active within each part.
    
    Is any of this mystical or meaningless, as you suggest?  Or is it just
    that the mechanical narrowing of one's vision leads to a premature
    dismissal of those aspects of the world invisible to a mechanical mindset?
    
    One other thought.  No whole can be wholly material.  If it were, it would
    be just another part among the others, or the mere sum or aggregate of all
    the parts.  If those (such as complexity theorists) who speak of the whole
    as being more than the sum of the parts really meant what they say, they
    would grant that, once you have removed all the parts, the whole remains;
    what was more than the parts remains.  But few of them will say
    this, so powerful are the reigning mechanical habits of thought.
    
    
                              ---------------------
    
    KK:  I think we are getting someplace.  It's clearer where we disagree.
    
    I have no argument at all with concepts like this:  "A key point is that
    an organic whole manifests itself within each of the parts; they only
    become what they are by virtue of the activity of an antecedent whole".
    
    The emergence of meaning in a text, or of health in an organism, evokes a
    similar vision in me.  An organism does not reside in the parts but in
    the totality which transcends the parts — the whole.  Yes, this is how
    it is.  Our real split, Steve, is your insistence that this kind of
    wholism can't happen in human-created systems, but only in natural ones.
    I questioned your definition of wholism not because I don't agree with it
    or believe in it, but only because it is evident to me that it occurs in
    machines as well, while you deny that, which makes me wonder what you mean
    by the word.  Yet everything you say about it makes it clear the same
    thing happens in artificial systems.
    
    Your example of a machine is very primeval — an "arrangement of parts
    whose nature remains static."  Sure, that's how primitive machines are, but
    that is not how the web is, or how law is, or how even a complex
    factory is.  In several chapters in Out of Control I tried to
    demonstrate the many ways in which machines 1) can be and are made of
    dynamic parts that 2) self-assemble, and 3) can even evolve and therefore
    are beyond not only human engineering but human understanding.  Therefore
    your statement that "everything we need to know in order to understand the
    determinate functioning of the machine is available to us in the evident
    relations we have given to the parts" is absolutely and fundamentally
    wrong.  It is not just theoretically wrong, but practically wrong.  We
    already write software (a machine if there ever was one) that nobody
    understands, that nobody designed, and that nobody has cataloged the parts
    of.  More importantly, it's clear that this trajectory in machines will
    continue so that machines (it's almost a refrain) take on yet another
    aspect that we thought only living organisms had.
    
    Here is another example:  "Remove a part from the machine, and its
    external relations will be lost, but the part itself will remain
    essentially just what it was."  Also not true.  There are very complex
    machines which can route around the removal of parts, just as living
    organisms do.  Again, you reference simple, primitive machines that do
    indeed share only a little with life.  But my arguments concern the
    increasingly complex machines that share much with life.
    
    The drift of your arguments reminds me of the Roman notion of "order".  The
    Romans and most classical cultures of antiquity were obsessed with order.
    There was a natural order to things and to people, which everyone
    accepted.  This was not about rank, but about classifications.  It was
    unnatural for a servant to lecture a nobleman, just as it would be
    unnatural for a mouse to try to be a bird.  Most of this classical sense
    of order disappeared from our modern world except in the area of race,
    where it remained far too long.  There was a time not too long ago when
    otherwise nice people could say in all sincerity that it was simply
    unnatural for a white and black to marry.  The two were separate
    categories, and even if it was possible for them to marry, it was against
    the order of things for them to do so.
    
    I am arguing against separatism in beings.  The classic view is that there
    are separate classes of things:  those born and those made, those of life
    and those of machines.  Supporters of this separatism offer all kinds of
    rational arguments why these two categories are very distinct, why they
    should be treated differently, and why if they might be brought together
    (as say in genetic engineering) this is horrible and unnatural.
    
    The arguments for why living beings are different from created beings
    range from "living beings are trying to express their wholeness while
    created beings don't" to "parts of an organism interpenetrate each other,
    and do so in a manner whereby the whole is revealed as active within each
    part, while in a machine they do not."  These reasons can seem as
    far-fetched to me as some of the reasoning of old as to why white people
    were categorically different from blacks.
    
    This separation bias against machines goes even further.  It is clear that
    even as machines become more life-like, as they take on more of the
    characteristics of living things while they become as complex as living
    things, their natural abilities are denied.  Even in the face of the
    obvious intelligence and morality of a black person, a white slave owner
    could deny the black had a human intelligence and consciousness because
    everyone knew that a negro could not — by definition — have
    those qualities.  In the same way the stirrings of intelligence and
    purpose in machines are denied because humans know that — by
    definition — a gadget can't have consciousness or morality.  And it
    is true, that by those definitions, as seen above, it can't.
    
    There are many elevated arguments against racism, but one mechanical one
    will do here.  It is clear there are very pale humans and very dark ones.
    Black and white skins exist.  But most people of the world are brown.
    They have some shade of pigmentation.  There is no pure white, just as
    there is no pure black.  Black/white racism falls apart when brown fills
    the world.
    
    There are some very organic beings full of life, like humans.  And there
    are some very inert machines like a hammer.  But our world is filling with
    many brownish things half-way between. We know of viruses and prions,
    organic entities born, but not quite living.  And now we have complex
    systems made, but not quite dead. We have embryonic AI.  We have
    artificial evolution.  We'll have a world of dry engineered life and soft
    machines.  What we call life is a continuum that extends into other
    elements besides carbon.  What we call intelligence extends into other
    realms besides tissue.  What we call morality extends into other beings
    besides humans.
    
    We can stop this only by playing with the definitions.  Indeed that is
    what has been happening.  There is an ad hoc, common-sense definition of
    intelligence, or of life.  When some contraption or other meets that
    definition, then we say, well, obviously our definition is incomplete,
    because obviously what this contraption is doing is not thinking or
    living.  Obviously blacks or machines can't be intelligent or moral.
    These qualities — by definition — don't apply to them.  So we
    keep shifting definitions.
    
    A more moral way would be to extend the definition.  In the long term
    — say in another couple hundred years — I believe that humans
    (whatever they are by then) will look back to now and marvel at our
    insistence in keeping machines and organisms separate.  Not because the
    convergence will be seen by them as inevitable, but because morally our
    separatism will be repugnant.  Why did we think only humans could be
    conscious and moral, and why would we not want to bring consciousness and
    morality to as many machines as we could?
    
    It is the obvious fact that machines and living things can co-
    mingle that has brought out the separatists. The prospect of cyborgs stirs
    up fears of mutant descendants, and this is when we begin to hear the
    strong sermons on why organic beings are fundamentally different from
    human-created ones. They can no longer argue that there is a fundamental
    difference, because hybrids abound, only that it is unnatural and against
    the natural order to mingle the two. We should expect to hear more like
    this as this continuum of being between life and machines becomes yet more
    evident.
    
    Your last point:
    
       If those (such as complexity theorists) who speak of the whole as being
       more than the sum of the parts really meant what they say, they would
       grant that, once you have removed all the parts, the whole remains;
       what was more than the parts remains.  But few of them will say
       this, so powerful are the reigning mechanical habits of thought.
    
    I will say it.  There is something mysterious and immaterial about the
    whole.  The important point is that this wholeness is shared by the world
    of the made as well as the world of the born.
    
    
                              ---------------------
    
    ST:  I am perplexed to hear you so vigorously associating my stance with
    racism and with empty, rationalizing argument by definition — not to
    mention continually "shifting" definition.  I'm not sure what relation any
    of this has to my own offerings, and will therefore leave it alone, except
    to say this:  my entire line of thought is intended as part of an effort
    to grasp how things are, not how they must be according to
    some preconceived definition.  I have attempted to elucidate what we can
    see with our senses and our understanding.
    
    As for your direct response to my argument:  I am fully aware of all the
    buzz about dynamic assemblages that "self-organize" and "evolve".  This
    kind of talk was, in fact, uppermost in my mind when I described the
    mechanistic relation of part and whole, because these trendy assemblages
    perfectly fit the bill.  The cellular automata of the "artificial life"
    enthusiasts, the complex adaptive systems of John Holland, and the
    autocatalytic sets of Stuart Kauffman are, to my mind, the quintessential
    mechanisms.  They bring to a kind of perfection what was inherent in the
    notion of a material mechanism from the very beginning.  Far from
    overriding my distinction, they are the best illustrations of it.
    
    One does not move beyond the mere aggregation of fixed, externally
    relating parts by making the parts very small.  The individual cells of a
    cellular automaton are the perfect little mechanisms.  Yes, as is common
    with machines of all sorts, when you assemble these elements you can get
    them to change their configuration and do various things, but these
    rearrangements and activities (including the often magically conceived
    "self-organizing" activities) proceed in full accord with the distinctions
    I have drawn.
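
    A deliberately toy sketch may make the point concrete (a one-dimensional
    cellular automaton in Python, offered only as an illustration):  each
    cell is a tiny, fixed mechanism, and the rule that governs it is handed
    to it from outside; whatever global patterns emerge are simply the
    working out of those externally imposed relations.

       # Elementary cellular automaton:  every cell is a fixed little
       # mechanism whose next state is read off a rule table imposed
       # from outside (here, Wolfram's rule 110).

       RULE = 110
       rule_table = {(a, b, c): (RULE >> (a * 4 + b * 2 + c)) & 1
                     for a in (0, 1) for b in (0, 1) for c in (0, 1)}

       cells = [0] * 31 + [1] + [0] * 31     # one live cell to start

       for _ in range(16):
           print("".join("#" if c else "." for c in cells))
           n = len(cells)
           cells = [rule_table[cells[i - 1], cells[i], cells[(i + 1) % n]]
                    for i in range(n)]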
    
    Certainly all such devices embody a great deal of intelligence — and
    ever more so.  Machines always have.  There is no limit to the
    intelligence we can invest in them.  This, I suspect, is what leads you to
    pass far too quickly over my point, which is that intelligence, in the
    form of the device's organizing idea, is imposed from without — a
    fact that manifests itself in a distinctive sort of relationship between
    whole and part.  For example, the organizing idea is "external" to those
    individual cells of the automaton — they are not formed and enlivened
    by it from within, but are just put into varying external relationships.
    There is always such a rock-bottom lifelessness in the machine, which
    betrays itself, not merely at the bottom, but at any level of description
    you choose.  The organism, on the other hand, is enlivened from within,
    which means, among other things:  all the way down.
    
    Unfortunately, we are already over-length for this current installment of
    our dialogue, so I will have to leave your remarks inadequately answered
    for now.  And I'm not quite clear how to proceed, given our continuing
    failure to achieve direct engagement.  (Virtually every point in your last
    response appears to me badly misdirected, just as my comments appear to
    you.)  Perhaps I will need to write a full-length article or two, and then
    invite you to respond.  Or ... ?
    
    Go to the next installment of this dialogue
    
    
    
    ==========================================================================
    
                                  CORRESPONDENCE
    
    
    Grief for Mechanized Rats
    -------------------------
    
    Response to:  "On Giving Rats a Virtual Life" (NF #132)
    From:  Michael Knowles (mike@mwknowles.com)
    
    Steve,
    
    How do we respond to such behavior?  I could philosophize all day long
    about it, but in the end all I want to do is sit down with some friends
    over a cup of tea and grieve.
    
    Inconceivable ... and where now is our self-respect?  Where now our
    humanity?  There is a part of me that now feels unclean.
    
    Thanks for echoing my horror, Steve.
    
    Time to step out of the maze now.  Perhaps a little Yeats will do....
    
    Michael
    
    Michael Knowles Consulting
    http://www.mwknowles.com/
    
    
    There Are Good Uses for the Computer in Education
    -------------------------------------------------
    
    Response to:  "Sex, the Internet, and the Reform of Education" (NF #132)
    From:  Roger Palfree (roger.palfree@mcgill.ca)
    
    Dear Steve,
    
    I think that it is going a bit too far to suggest all the computers should
    be taken from the classroom and sent back to the manufacturers for
    recycling.  For a start, the manufacturers would not want them because of
    the wonderfully effective conspiratorial strategy of software-driven
    obsolescence.  But more importantly, the computer is used these days even
    by educators when pursuing some aspects of their particular educational
    interests.  It is more important to keep the computers in their place.  It
    is also more relevant, within the context of the "Sex, the Internet, and
    Educational Reform" discussion, to consider disconnection from the
    internet.
    
    A practical approach to having an educational internet while not being
    connected to the real internet would be to have servers dedicated to
    education.  Educator-recommended web sites could be mirrored on that
    server and regularly updated.  There would be no ports open to the world
    as a whole.  Choosing the web sites to mirror (with permission, of course)
    would be analogous to choosing books for the library.  Students who have
    access to the internet from outside the classroom could participate in
    suggesting and justifying the inclusion of new sites.
    
    There are some good uses of the computer, and some internet experience is
    useful.  But they certainly need to be put down a peg or two, and
    educators need to require a high degree of control over their use within
    the education process.
    
    Roger
    
    Assoc. Prof. McGill U.
    Endocrine Laboratory, L2.05 Royal Victoria Hospital,
    Web page: http://ww2.mcgill.ca/endolab/profs/palfree.htm
    
    
    Distance Education Is Alive and Well
    ------------------------------------
    
    Response to:  "Requiem for Distant Educators" (NF #132)
    From:  Jan Whitaker (jwhit@primenet.com)
    
    Hi, Steve.  I read this with interest because I've been involved in
    "distance education" since the early 1980s.  Of course back then we used
    things like TV, radio, newspapers, print packages, audio cassettes, video
    cassettes, and telephones. This was pre-internet.  In fact, I instigated
    one of the first (I won't be bold enough to say the first) of what we
    called "modem-based" courses, using dial-in access to a bulletin board
    (remember those?), and one of the first international courses between
    Australia and the US using telephone conferencing.  Now that was distance!
    
    I feel it's unfair to suggest that this is all dead.  It isn't.  I'm no
    longer closely enough involved in the efforts to cite statistics, but just
    because the "commercially approached" programs and single-focus web-based
    institutes have failed doesn't mean it is all gone.  It isn't.  Talk to
    community colleges around the US, or to the TAFE institutes and state
    education programs in Australia, which still use all types of technology.
    Talk to the British Open University.  Talk to the folks in the Netherlands
    who have a huge distance ed program for adult students.  The world is
    alive with education at a distance.  Unfortunately, this edition of
    NetFuture has generalised a bit too far, and I would presume that the
    New York Times writer has also misled people.
    
    It is critical that the effort continue for educating people in non-
    traditional ways.  Not everyone can afford the money, the time, the travel,
    or the break from work needed to pursue the traditional campus experience.
    Use of technologies to overcome those barriers is critical for retraining,
    for second-chance entry into education for mature-aged people, and for the
    "life-long learning" that will sustain societies all over the planet.
    
    I think the announcement of time of death is at least a little premature,
    and a lot misleading.
    
    Jan Whitaker
    Melbourne, Australia
    
    
                              ---------------------
    
    Jan --
    
    Actually, I didn't pronounce the death of distance education; rather, I
    cited an obituary for the "distance education bubble" — an obituary
    that seems reasonably well justified.  On the question of distance
    education's survival, I fully agree with you.  In "Digital Diploma Mills
    Grinding to a Halt?" (NF #81) I wrote that "the technological forces that
    have been re-shaping society will not be deflected by ... the repeated
    failure of silly predictions about 'imminent' revolutions".  I also noted
    that "the essential technologization of education proceeds quite well with
    nary a computer in sight".
    
    Steve
    
    
    
    ==========================================================================
    
                              ABOUT THIS NEWSLETTER
    
    Copyright 2002 by The Nature Institute.  You may redistribute this
    newsletter for noncommercial purposes.  You may also redistribute
    individual articles in their entirety, provided the NetFuture url and this
    paragraph are attached.
    
    NetFuture is supported by freely given reader contributions, and could not
    survive without them.  For details and special offers, see
    http://netfuture.org/support.html .
    
    Current and past issues of NetFuture are available on the Web:
    
        http://netfuture.org/
    
    To subscribe or unsubscribe to NetFuture:
        http://netfuture.org/subscribe.html.
    
    Steve Talbott :: NetFuture #133 :: June 25, 2002
    