Consciousness, Literature and the Arts
Archive
Volume 4 Number 2, July 2003
_______________________________________________________________
Robotic Theatre In Extremis
Models for Artificial Intelligence in Robotic Performance: Or, How We Think A Machine Might Think As A Guide To How It Might Express Itself In A Performance Context
By Gordon Ramsay
Meyerhold’s biomechanical model, whereby the human performer is externally controlled while paradoxically maintaining a degree of volition, and
Schlemmer’s robotic Kunstfigur, arguably more extreme, making less of a
concession to the human and more to the mechanical, find a scientific and
theoretical advancement in the field of Artificial Intelligence (AI), and in
particular in the work of cognitive scientist Douglas Hofstadter. If Meyerhold
and Schlemmer are concerned with mechanisation of the performing body,
consideration needs to be given to mind as machine and machine as mind in a
performance context. The evaluation and exploitation of contemporary AI models
offer us the opportunity to establish an idealised method by which a robotic
performer, as well as character, might think.
Perceptual and conceptual qualities are far from explicit but are inherent in Meyerhold’s
and Schlemmer’s models. The Kunstfigur certainly has a conceptual
weight as far as Schlemmer is concerned, as personification “of the loftiest
concepts and ideas” (Schlemmer 1971, p.29), and “concepts such as Power and
Courage, Truth and Beauty, Law and Freedom” (Schlemmer 1971, p.31), but the Kunstfigur
certainly is not envisaged as having concepts of its own. In that sense, the Kunstfigur
is purely emblematic. Meyerhold’s distrust of psychology is well-documented,
and while the actor has “initiative” (Braun 1991, p.201), this depends on
the perceptual and conceptual capabilities of the performer. The overall
approach leans heavily towards the primacy of the director’s concepts and
perceptions and not those of the performer. If we are to contemplate a more
“democratic” robotic theatre in its most extreme form as a future
possibility, and by extreme I suggest that this must involve artificial
intelligence, its plausibility naturally depends not only on a Totalitarian
“idea-transference” from the scientific field, but also on the progress of
robotics - or AI software - per se.
When we refer to robotic theatre in extremis, we suggest an ideal: a theatre
involving machine/performers programmed to adhere to a written text and to be
“directable”. At the same time, such machine/performers have a latitude for
degrees of intelligent independence and creativity, at least within the
rehearsal period, with this to be determined by the director/programmer[1].
This latitude mirrors the latitude offered by human directors to human
performers in “conventional” theatrical practice. At one extreme, a director
may “block” every movement and every expression for every scene. (In the
film world, Hitchcock springs to mind as a good example of such a director). At
the other extreme, the director may allow the actors such a high degree of
independence, that they are largely responsible for creating virtually the whole
text, performance and written, themselves. (One might argue that Peter Brook
inhabits this end of the scale). This plainly raises a question or two over
independence and creativity. For example, if we state that the two are
co-existent, are we assuming that the machine/performer like the human performer
is independently creative? With a human performer, acting in concert with
others, this is obviously not so - the creativity of one performer directly
affects the creativity of another, either by the mutual “democratic”
sparking of ideas at a rehearsal or performance or by the natural, intended and
sometimes unintended modulations that can occur during performance, when for
example a performer leaves a slightly longer pause than usual. This creativity
may automatically require some creative adjustment on the part of the other
performer(s). If the pause is a misjudgement, or an accident - indeed the
performer may speak the “wrong” word or line - creativity is still required
by their colleague(s) in terms of a capacity for a flexible but appropriate
response. To be creative is not necessarily to be independent, an observation
that at first sight runs counter to Hofstadter’s own observation - “True
Creativity Implies Autonomy” (Hofstadter 1998, p.411), deriving from his Letter Spirit programme (1987). In fact, in citing requirements for creativity,
he refers specifically to design programmes:
The programme itself must arguably make its own decisions rather than simply carrying out a set of design decisions all of which have already been made, directly or indirectly, by a human. (Hofstadter 1998, p.411)
The stress here is plainly on the establishment of self-reflexivity and independence
as opposed to high external control. Nonetheless, this does not mean that a
design programme cannot be creative alongside other programmes of
artists/programmers. Hofstadter ignores such creative relationships. This is
governed partly by the natural restriction of his theoretical goals but also by
the fact that for such a relationship to obtain, a programme’s capacity for independent
creativity must first be established and proved. For our part it is sufficient
to say that when we talk of the machine/performer’s
independence we also suggest the machine/performer’s
independence vis à vis the external control of the director/programmer,
whilst accepting that within such independence there lies the potential for
“mutual” creativity.
As far as creativity is concerned, the mutual creativity of the machine/performer
finds an approximate translation in the mutual creativity of the
director/designer/lighting designer/writer and others involved in the
performance text. A director, like a performer, can plainly be creative both
“independently” and in concert with others, including with the
machine/performers themselves. Robotic theatre in extremis requires high levels
of independence and external control for the machine/performer and high levels
of absence and control for the director/programmer. This contradictory duality
is accounted for in practice by what we might call a creative allowance, whereby
the relationship between the two has an inherent democratic dynamic. Each, for
example, is free to run with an idea, though the degrees of
control/absence/external control/independence at any given time have a strong
tendency to dictate some sort of tacitly agreed order - in other words, the two
parties do not speak at once and ignore the other, though there will inevitably
be collisions, interruptions etc. The high values ascribed to the in extremis
model are high precisely because they have a high mutual creativity allowance.
In simple terms, they allow for a high degree of creative give-and-take.
A reasonable objection to this allowance can be made over what might be called
ultimate independence or ultimate control. That is, who, creatively, has the
“final say” - or is the creativity consensual? This will vary from
director/programmer to director/programmer, since latitudes for consensus may be
built-in to the machine/performer’s programme. This finds an equivalent
between human directors and performers, in that there is a contract that is
established (at one extreme by direct order), or one that establishes itself,
whereby “the final sayer” (or “ender” of creative exploration of any
given portion of performance text) is mutually and generally known. This can be
apparently contradicted, however, when broken down to a smaller creative
increment. For example, let us assume that the contract posits the
director/programmer as the “final sayer”. During any creative exploration it
is possible that the director/programmer will say: “No, that doesn’t work.
Go back to the way you did it before” to which the machine/performer can
respond with: “Let me just try this”, whereupon something is tried out that
may or may not “work”. The director/programmer has already apparently had
the “final say” but this is deliberately ignored and a barrier is breached.
If, as far as the director/programmer is concerned, the ensuing piece of
creativity fails, “the final say” may well be repeated, though this time
with an increased finality which the machine/performer is more likely to conform
to (though this is not guaranteed, it is more likely; the barrier will not yield so easily). On the other hand, should the new piece of creativity
“work” for the director/programmer, all things being equal, the “final
say” will be surrendered by the director/programmer themselves and taken up
instead by the machine/performer. If there are shades of “final say”, open
(to varying degrees) to constant negotiation, creativity can be said to emerge,
at least, on occasion, from challenges made to the “accepted” hierarchy at
any given time.
This rather obvious “picking apart” of the creative process regarding notions of
control underlines the fluidity and slippability that may occur within the
theatrical process. This finds a correspondence, as shall be seen, with
Hofstadter’s subcognitive preoccupation and reminds us that the terms used
such as independence, external control, control, and absence, are variables
whose “scale” values are “shiftable” and far from concrete. The
aggregate of these values can be averaged, with the machine/performer, for
example, in general possessing a high intelligent independence value, but this
independence fluctuates during rehearsal or performance. The term
“fluidity”, which might otherwise convey a quality of weakness and architectural instability, in this sense confers a quality of strength,
representing the creative capacity within the director/programmer and
machine/performer relationship for dynamic and responsive cognitive behaviour,
dependent as it is on the constant establishing and relinquishing of control.
Setting Aside the Physical
At this point I will set aside physical movement and focus on the primacy of the
perceptual and the conceptual, sub-cognitive processes which are vital to
thought and creativity. This separation at first sight contradicts Robert M.
French’s assertion, in “Subcognition and the Turing Test” (1996),
that the physical cannot be isolated from the cognitive. This assertion arises
as a response to the Turing Test, where the scientist Alan Turing (1912-1954)
postulates that if a human communicates via teletype with an unseen being in
another room, and cannot determine by a series of questions whether or not the
being that answers is a computer programme or a human, then machines/programmes
can be said to possess intelligence. French claims that a human’s cognitive
profile, as regards the human’s relationship to the world around them, is
necessarily imbued with concepts and networks accumulated via physical organs
sensitive, to varying degrees, to a panoply of stimuli.
Consider, for example, a being that resembled us precisely in all physical respects except
that its eyes were attached to its knees. This physical difference alone would
engender enormous differences in its associative concept network compared to our
own. Bicycle-riding, crawling on the floor, wearing various articles of
clothing, for example trousers as opposed to shorts, and negotiating crowded
hallways would all be experienced in a vastly different way by this individual.
The result would be an associative concept network that would be significantly
and detectably (by the Turing Test) different from our own (French 1996, p.23).
As far as French is concerned, while this anatomical difference does not register
the “being” as necessarily having low intelligence, the Turing Test would
expose, in the answers to the human’s questions, an abnormality, or “non-matchability”,
in response. As a logical refutation of Turing’s separation of the physical
and cognitive, French’s argument certainly has weight, but it is an argument
more about Turing’s assumptions within the Test itself, rather than one about
a more general relationship between these two aspects. It is possible to be
physically “different” and to still be cognitively human. As Blay Whitby
writes in “Turing Test: AI’s Biggest Blind Alley”:
the imitation game does not test for intelligence, but rather for other items such as cultural similarity. (Whitby 1996, p.55)
If we abstract French’s knee-eye being to, for example, an ordinary man with poor vision, the challenges of bicycle-riding and crowded hallways appear as obstacles
easily identifiable to the interrogator. These obstacles, which may be
“uncommon” to the able-bodied, representing as they do “uncommon”
physical difficulties, nonetheless allow for a comparable and identifiable
cognitive ability. More specifically, while it may be said that, for example,
poor vision or blindness (“physical” perception) automatically affects
mental perception, and that physical and cognitive adjustments take place by way
of compensation, it remains true that such disabilities do not make the person
anything other than cognitively human.
The absence of movement in performance, as some of Samuel Beckett’s work
demonstrates, does not preclude the presence of theatre, and this neatly allows
a restriction of focus to the word and the thought processes behind the word.
Imagine the machine/performers suffering from a physical paralysis that fails to
impinge on their mental functions. What is important is that these performers
are as close as can be got to being inherently human (certainly as much as we
culturally “allow” any physically paralysed person to be), without forsaking
their innate mechanical nature, and as close to being mechanical without
forsaking their innate human-ness. This duality is important: they both are and are not. If we can
imagine a situation where one more “notch”, however that may manifest
itself, on a vertical scale renders them “human” then the duality will
naturally vanish. Such performer/machines necessarily operate on a cusp between
“is” and “isn’t”, rather like Hoffmann’s Sandman. It is just as much
the case that one more “notch” “upwards” on the mechanical scale causes
an imbalance the other way, rendering the performer a mere machine at the
expense of their innate human-ness. Again, the duality vanishes. If we imagine
robotic theatre in general as a graph with two axes, external control and
independence, each with an equal scale of 0 to 5, then the in extremis example
just described might appear on the graph with co-ordinates (5,5) - high
intelligent, creative independence (which finds an equivalence in Hofstadter
through the manifestation of intelligence, insightfulness, creativity and
flexibility) combined paradoxically with a commensurately high measure of
external intelligent control. It necessarily follows that there are other
co-ordinates “below” and/or to the left of this, and where the duality may
be “weighted” more one way than the other, that is towards external
intelligent control, or towards intelligent independence. “High” external
control does not on its own posit the machine/performer at the apex of robotic
theatre any more than “high” independence on its own might do. An
exaggerated analogy might be a lighted match casually thrown into a firework
warehouse. The ensuing explosion (or display) would have a “high” degree of
independence with a “low” degree of external control. These “values” can
shift by say, making the dropping of the match less casual so it lands where we
know the rockets are, and (all things being equal) they go off first. By doing
this, there is a marginal increase in external control of the display, while at
the same time a marginal decrease of its independence. Of course, the analogy
falls apart as without the match no display would occur at all, which actually
gives a high level of external control. The way round this is to separate the
initiation of the explosion (high external control, low independence) from the
explosion itself (low external control, high independence).
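The two-axis model just described can be put into toy code (a sketch only; the class and its names are my own, not drawn from any robotic-theatre system), with the firework analogy split, as above, into its two phases:

```python
from dataclasses import dataclass

@dataclass
class PerformanceMode:
    """A point on the robotic-theatre graph; both axes run 0 to 5."""
    external_control: float
    independence: float

    def is_in_extremis(self) -> bool:
        # The in extremis ideal sits at the co-ordinates (5, 5):
        # maximal intelligent control AND maximal creative independence.
        return self.external_control == 5 and self.independence == 5

# The firework-warehouse analogy, separated into its two phases:
initiation = PerformanceMode(external_control=5, independence=0)  # dropping the match
explosion = PerformanceMode(external_control=0, independence=5)   # the display itself
in_extremis = PerformanceMode(5, 5)

print(in_extremis.is_in_extremis())  # True
print(initiation.is_in_extremis())   # False
```

Neither phase of the display alone reaches (5, 5); only the paradoxical combination does.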
A good example of this “variable” scaling can be seen in the work of the Swiss artist Jean Tinguely (1925-1991), whose mechanical sculptures embraced the notion
of chance and surprise. His “Study for an End of the World” (1961),
sited in a park in Copenhagen, was made from various scrap materials, plaster,
fireworks and dynamite. Its self-destruction was carefully orchestrated - in
other words, there was a comparatively high degree of external control - yet,
according to engineer and assistant Billy Klüver, there was also a strong degree
of independence:
A rocking horse rocked wildly; a doll’s pram trundled up; the Russian astronaut
Yuri Gagarin in the form of a broken doll, was shot into space; and, as a
finale, the French flag descended slowly by parachute. The explosions were
sometimes so violent that the audience’s clothes were blown about as if by a
gale-force wind, and rockets flew low over their heads. It had perhaps not
occurred to Tinguely that coating the explosives with plaster would
substantially increase the strength of the blast. (Hulten 1987, p.98)
Sadly, in an ironic if accidental epilogue to the performance, the dove of peace that
was meant to fly up after the first explosion, missed its cue and was found dead
in the debris. Tinguely’s work, operating as it does on the cusp of sculpture
and performance, provides an exaggerated example of an event where the
machine/performer has a reasonably high level of independence. Suffice it to
say, there has to be a combination of both external intelligent control and
intelligent independence in some measure for what has been described as robotic
theatre to obtain. The focus here is on robotic theatre in extremis, and more
precisely on the theoretical means within the field of AI by which it may be
eventually achieved in a theatrical domain.
Mark Pauline’s performance group, Survival Research Laboratories (SRL), exploits
machines in a similar fashion to Tinguely, that is, there is a planned sequence
of events, yet chance and surprise are allowed to intervene and provide new
avenues for creativity through improvisation. Thus while there is high external
control (Pauline wears a headset and gives second by second instructions via
colleagues who “drive” the devices), there is also a degree of
machine/performer independence. In the frequently apocalyptic performances,
machines war between themselves yet their fates, and the precise sequence of
events within these fates, cannot be entirely predetermined or planned by the
controllers. This differs from the British TV show Robot Wars, not only in terms
of milieu and scale (SRL’s performances are outdoors and employ such items as
ex-U.S. army rocket launchers) but also in terms of context. There is a
narrative structure and conscious use of metaphor: thus in “Machine Sex”
(1979), a satirical commentary on the oil crisis, “dead pigeons dressed as
Arabs were shredded by a spinning blade while The Cure’s “Killing an Arab”….
blasted at mind-numbing volume” (Dery 1996, p.7); and in “Deliberately
False Statements: A Combination of Tricks and Illusions Guaranteed to Expose the
Shrewd Manipulation of Fact” (1985), the Sneaky Soldier features a
crawling, dying sculpture of a soldier disembowelled by a landmine. Frequently
too the performances are accompanied by Brechtian captions, describing or
commenting on the action[2].
A variation on these forms of machine performance occurs in Stelarc’s “Sci-Art: Bio Robotic Choreography”, currently being developed by Nottingham Trent University’s Digital Research Unit. Man and machine simultaneously co-perform
in a creative relationship, and yet each enjoys a significant degree of
independence tempered at times by an equally significant degree of external
control. While Tinguely and SRL stand apart from the performance, Stelarc
is within it. His control of the large, six-legged robot (Locomotor) from a
platform centrally located in its body is as partial as the machine’s control
of itself. In other words, there is a high degree of creative allowance. As
Stelarc himself remarks:
The body is not merely a passenger on the robot. The smart robot design will result
in a more subtle interface. The robot’s mode of locomotion, its direction and
speed are actuated by the shifting of the body’s weight and the twisting of
the torso… the body becomes a split body, sometimes automated,
sometimes involuntary and always experiencing a split physiology. (Stelarc 2001)
It is also interesting to note that the philosophy behind the development of
Locomotor has much in common with that which is behind Meyerhold’s and
Schlemmer’s models: to extend human movement (and thus expression) beyond its
limits by means of marriage to machine. This precipitates a re-reading of not
only what it means to be human, but also what it might mean to be a machine. It
should be noted that this blurring of the boundaries is not without its critics,
as Donna Haraway writes in “Simians, Cyborgs and Women: the Reinvention of
Nature”:
Late twentieth-century machines have
made thoroughly ambiguous the difference between natural and artificial, mind
and body, self-developing and externally-designed, and many other distinctions
that used to apply to organisms and machines. Our machines are disturbingly
lively, and we ourselves frighteningly inert. (Haraway 1991, p.152)
Haraway occupies a late twentieth century position redolent of Carlyle’s one hundred
and fifty years before, but with a difference: while humans become mechanical
adjuncts or drones, machines become organic in form and nature, the mechanised
organism described by Schlemmer.
While SRL’s machine/performers have some independence, they have little sense of
their location in respect of each other. Ullanta’s performers differ in this
regard, and are a little closer to machines that can think. In “Robotic Love Triangle” the integration of a video camera permits physical recognition or visual
consciousness and the scaling of distance between the performers themselves as
well as between the performers and the audience members. Ullanta’s
robots, whose prime task is to dispense snacks to the audience (as guests) on
request, are also programmed to alternate at fixed intervals between displaying
social and anti-social behaviour. As programmer/director Barry Brian Werger
writes:
In 2 of three 30-second cycles, each robot would be “social”, seeking the
company of other robots and serving humans, but in the third, it would become
“antisocial”, avoiding robots and not acknowledging humans. The cycles were
initialized so that a repeating pattern emerged: one robot would storm out as
the others became intimate, then return affectionately as another left.
Uncertainty of the real world would result in numerous variations, such as
social robots running after antisocial ones instead of becoming involved with
the other social robot. The guests, informed by signs on the robots that the
love triangle not only caused the robots emotional strife but interfered with
their service, were asked to step between the robots if they entered into long,
intimate gazes with each other. This request had the effect of both drawing the
audience into the drama and triggering the obstacle avoidance to move the robots
into more effective service configurations….
The guests both followed and entered into the robots’ interactions, and on the
occasions where one robot would get separated from the others, concerned guests
took it on themselves to clear a path for it to regain visual contact.
Enterprising children discovered that the color of a cheese served by another
robot was similar to the neon orange of our robots’ distinguishing marks and
delighted in leading the robots around like hungry dogs. (Werger 1998, p.35)
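Werger’s fixed-cycle scheme can be approximated as a simple function (a hypothetical reconstruction for illustration, not his actual control code; the phase-offset-by-id device is my own way of producing the staggered pattern he describes):

```python
# A toy reconstruction of Ullanta's alternating behaviour cycles
# (illustrative only -- not Werger's actual control code).
CYCLE_SECONDS = 30

def behaviour(robot_id: int, elapsed_seconds: float) -> str:
    """Each robot is 'social' in two of every three 30-second cycles
    and 'antisocial' in the third; offsetting each robot's phase by
    its id staggers the cycles, so that one robot storms out as the
    others become intimate, then returns as another leaves."""
    cycle = int(elapsed_seconds // CYCLE_SECONDS)
    phase = (cycle + robot_id) % 3
    return "antisocial" if phase == 2 else "social"

print([behaviour(r, 0) for r in range(3)])   # ['social', 'social', 'antisocial']
print([behaviour(r, 30) for r in range(3)])  # ['social', 'antisocial', 'social']
```

The point of the sketch is how little machinery the repeating pattern needs: the “drama” Werger reports arises from real-world uncertainty layered on top of this rigid schedule, not from the schedule itself.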
Ullanta’s performance differs from Tinguely’s and SRL’s in key respects: not only does
the machine/performer have a physical consciousness of others in that it
recognises walls, people and other robots as obstacles and is programmed not to
bang into them, it also interacts with other machine performers, directly and
indirectly effecting changes in physical behaviour and the consequent pattern of
relationships. Furthermore the audience itself is permitted and encouraged to
play a part in the actions and is as responsible as the robots for the sequence
of events, or narrative, that unfolds. However, while Werger’s external
control is interestingly vulnerable to the external control of the audience, the
apparent independence of the robot itself is largely illusory, that is, its
actions stem from carefully programmed givens (for example, the three thirty
second cycles) and there is little or no place for a genuinely emergent
behaviour. This is not to carp at a seemingly effective and entertaining piece
of robotic performance; it is more to acknowledge that for a relationship to
exist between programmer/director and machine/performer that conforms to robotic
theatre in extremis, the machine/performer must enjoy a higher degree of
cognitive independence than Werger’s programming seeks or permits.
Finally, some mention needs to be made of the Shadow Robot Company’s Shadow Biped Robot
(figure 1, see Shadow Robots), a machine whose movements conform closely
to human movements, and in so doing avoid the clunk factor predominant in its industrial counterparts. I have been struck by the Shadow Biped’s
characteristic fluidity, wrought by largely mechanical means. Designer Richard
Walker describes this as “soft robotics”.
In simple terms, this is achieved by the use of air muscles, components that expand
and contract by means of compressed air. The wooden skeleton, with its
approximation of human limbs, houses a number of these muscles, which seek to
replicate as closely as possible the qualities of human movement. Whilst Shadow
Robot’s research has yet to solve the problem of getting the Biped to walk,
with balance after the first step notoriously difficult to achieve, they have
had success with arm and hand movements. Like Ullanta’s robots and Stelarc’s
Locomotor, these have a degree of self-control due to the incorporation of
pressure sensors. This allows the Biped with its complex hand/arm (figure 2,
see Shadow Robots) to pick up a glass of beer (for example) without crushing
it.
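The grip-without-crushing behaviour can be sketched as a feedback loop in the spirit of the Biped’s pressure sensors (illustrative only; the threshold, units and function names are invented, not Shadow Robot’s control code):

```python
# A toy grip loop: air-muscle pressure is raised one unit at a time
# until the fingertip sensor reports firm contact, then held there,
# so the hand closes on a glass without crushing it.
# (Illustrative only -- thresholds and units are invented.)

TARGET_CONTACT = 6  # sensor reading (arbitrary units) meaning "firm grip"

def grip(read_sensor, max_pressure=50):
    """Increase pressure until the sensor reports contact, then hold."""
    pressure = 0
    while read_sensor(pressure) < TARGET_CONTACT and pressure < max_pressure:
        pressure += 1
    return pressure

# Fake sensor: contact force rises once the fingers meet the glass.
print(grip(lambda p: max(0, p - 2)))  # 8
```

The loop stops at the first pressure that satisfies the sensor rather than at the muscles’ maximum, which is the whole of the “self-control” the text ascribes to the hand.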
[Figure 1: the Shadow Biped control system - tether interface card; sensor interface; A/D converter card; output interface; pressure sensor gauges (modified Bourdon tubes producing electrical output); control valves; actuators using Shadow® Air Muscle technology; joint sensors. Figure 2: the Biped’s hand/arm.]
The robot, which has yet to be tried in a performance context, can either be
pre-programmed or operated live via a handset, and has the advantage of being
comparatively cheap to build and reliable. While it is largely incapable of
independent action, it is nonetheless a useful example of a machine that moves
with the fluency of its human counterpart, whilst retaining its innately
mechanical nature. What is now needed is the discovery of a cognitive system
that is similarly on the cusp of human and machine, and one that manifests the
capacity for a high degree of independence.
Machines that Think
As I have already indicated, the interest is less in physical expression and more
in spoken expression and the cognitive processes that govern it. The focus now
turns to the theoretical means within the AI field by which robotic theatre in
extremis may be eventually achieved in a theatrical domain. Put simply,
in what way might one conceive of machines and machine/performers that can think?
Hofstadter’s presence here, along with the Fluid Analogies Research Group, is important in
that through the microdomains of computer programmes such as Jumbo, Numbo,
Copycat and Tabletop, taking into account as they do notions of analogy,
concept, perception and consciousness, he establishes, unlike many others
working in the AI field, a theory for self-reflexive and genuinely emergent
artificial intelligence which matches (on a physical/intelligent axis) the
models of external control and independence intimated by Meyerhold and Schlemmer
and which will form the basis of robotic theatre in extremis. Hofstadter’s
work cuts against what Boyle identifies as the Quantitative element of “new”
science - that is, the perceived primacy of mathematics and regularity, which
dictates that only areas displaying such facets are explored. This axiomatic
quality is reflected in the bedrock of much AI research and application, “Laws
of Thought” (1854) by George Boole, whose assertion that not only is
mathematical structure inherent within intellect and reason but that it can be
logically analysed and symbolically represented by algebraic means, is a
paradigm integral to Claude Shannon’s chess-playing machine (1950) and Ernst
and Newell’s General Problem Solver (1969), as well as a host of traditional
AI programmes. Such programmes, governed as they are by a Quantitative overview,
tend to stress external control whilst overlooking the qualities of
independence, and contrast strongly with Hofstadter’s “epiphenomenal”
model, with its emphasis on naturally emergent intelligence. As Hofstadter
writes in “Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought”:
The philosophy….comes from an analogous vision…(it) goes against the grain of
traditional AI work, which seeks to find explicit rules (not emergent or
statistical ones) governing the flow of thoughts. (Hofstadter 1998, p.125)
Hofstadter establishes the necessity for models of conceptual processes to be undertaken
alongside models of perceptual processes, the latter being achieved by a study
of analogy-making. The one cannot be separated from the other in trying to come
to a better understanding of the human mind. With analogy-making the touchstone,
the precepts on which such programmes as Jumbo are based are striking in that
they are more biological and organic, less deterministic and axiomatic, allowing
as they do room for emergent intelligence (the equivalent of our theatrical
independence) rather than an imposed system of formalised laws (equivalent to
our external control). At the heart of the models is the characteristic of
fluidity, or shiftability, which allows problem-solving or pattern-recognition
tasks to be carried out dynamically, with the programme independently free to
discover its own affinities and determine its hierarchical structures. This
fluidity, as has been seen, corresponds to the nature of our creative
relationship between machine/performer and director/programmer, as well as to
the individual machine/performer’s innate creativity.
Where Taylor and the Gilbreths, in the field of time and motion study (see Barnes 1968), and Schlemmer
and Meyerhold, in theatre, attempt to break down physical activity and
expression into a series of smaller events and actions, Hofstadter similarly
breaks down cognitive processes into a series of smaller cognitive events,
actions, relationships and reactions. To develop an architecture for expression
in robotic theatre, some understanding and interpretation of the flexible
structure and spirit of Hofstadter’s cognitive architecture would seem to be
essential. The next section explores cognitive structures.
Jumbo: A Cognitive Architecture
Such terms as “biological” and “organic” have a tendency to sound woolly, and we need to look at them further. The activities and relationships of the
“Codelets” and “Coderack” which help form Jumbo’s bonding architecture
are deliberately analogous to the activities and relationships within and around
cellular, molecular and cytoplasmic structures. Some form of hierarchy can be established, with atoms at the lowest level, followed by small molecules, amino acids, and chains of amino acids, all with various strengths of bond.
Furthermore, within the cell and around its nucleus is the cytoplasm, within
which the molecules are built, each with its own construction process. Jumbo is
based on the popular word puzzle where letters have to be unjumbled and
rearranged to make sense. Hofstadter relates the grouping and bonds of the
molecular paradigm to individual letters (atoms); “tight consonant clusters”
(small molecules, “th”, “ng”, “ck”); “higher-level clusters”
(amino acids, “thr” from “th” + “r”, “ngth”, “cks”); and
syllables (amino acid chains). External pressures can break a biological chain
and language bond theoretically anywhere, though as Hofstadter points out:
its
natural breaking-points will be between
(my italics) the highest-level constituents rather than inside them.
(Hofstadter 1998, p.101)
The
process of Jumbo’s verbal pattern-making and pattern-breaking closely reflects
the very basis of human life at the cellular level - the arrangement and
rearrangement of molecular bonds, with each operation carried out via a
particular enzyme or agent. Each successful (anabolic) molecular bonding the
agent achieves (they can break bonds too, in which case this would be catabolic)
is accompanied by a chemical “marking” which further attracts the attention
(like a marked lamppost attracts a dog) of another enzyme looking for somewhere
to join its “own” molecular bonds onto. Jumbo’s architecture gradually
takes shape effectively but with apparent randomness, until the edifice is
complete. The analogy of a building site - to the untrained observer an
apparently chaotic scene of holes, piles of rubble, stacks of bricks,
footings, reinforcements, foundations, scaffolding, pipes, cables and concrete,
teeming with bricklayers, foremen, labourers, plumbers, surveyors, electricians,
carpenters, decorators and a riot of machines (dumper trucks, diggers, cement
mixers, lorries, cranes), all involved in different but to varying degrees
related tasks - is actually a good one, vis-à-vis the difference between the
Analytical and what we could call the Organic approach.[3]
Boole and Descartes, watching the overall site activity from a bus stop across
the road, see no logical regular pattern that adheres mathematically to their
precepts, and determine that no building is being built. They get on a bus and
leave, failing to identify a building process that emerges by means of methods
that may appear chaotic and at the time deterministically un-representable but
which are in fact both effective and reliable. As Hofstadter comments:
One
could therefore bring in a third basic analogy relevant to Jumbo’s
architecture: that of statistical mechanics, which explains how macroscopic
order (the large-scale deterministic laws of thermodynamics) emerges naturally
from the statistics of microscopic disorder. (Hofstadter 1998, p.125)
It
is this disorder, in essence non-deterministic, and termed by Hofstadter as
“stochastic”, with its concomitant flexibility that allows what we could
call the “affinity-gauging” of letters and letter groups in the programme.
Values or attractiveness ratings are intuitively ascribed by Hofstadter to
various clusters - so, for example, in the data table (see figure 3; Hofstadter
1998, p.104), “sm: initial 5, final 2” means
that “sm” has a likelihood rating of 5 to act as the beginning of a
syllable, for example “smog”, while to act as a final cluster it merits a
likelihood rating of 2, for example “spasm”:
sc: initial 2
sh: initial 8, final 4
sk: initial 4, final 4
sl: initial 5
sm: initial 5, final 2
sn: initial 2
sp: initial 4, final 2
s-ph: initial 2
sq: initial 3
ss: final 5
st: initial 8, final 4
str: initial 3
sw: initial 3
oa: initial 2, middle 4
oi: middle 4
oo: middle 5, final 2
ou: initial 2, middle 3
ow: middle 3, final 3
oy: final 3
Figure 3
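The part such a table plays in the programme's stochastic choices can be sketched in Python. The ratings below are a fragment of figure 3; the dictionary layout, the function name and the weighted draw are my own illustrative construction, not Hofstadter's code:

```python
import random

# A fragment of Hofstadter's affinity table (figure 3): each cluster's
# intuitive likelihood rating for a given position in a syllable.
AFFINITIES = {
    "sh": {"initial": 8, "final": 4},
    "st": {"initial": 8, "final": 4},
    "sm": {"initial": 5, "final": 2},
    "sl": {"initial": 5},
    "sk": {"initial": 4, "final": 4},
    "str": {"initial": 3},
}

def pick_cluster(position, rng=random):
    """Stochastically choose a cluster for a syllable position.

    Higher-rated clusters are proportionally more likely to be drawn,
    but no cluster is ever forced: any candidate with a rating for the
    position can, occasionally, be the one chosen.
    """
    candidates = [(c, ratings[position])
                  for c, ratings in AFFINITIES.items()
                  if position in ratings]
    clusters, weights = zip(*candidates)
    return rng.choices(clusters, weights=weights, k=1)[0]
```

Drawing an initial cluster this way will favour “sh” and “st” but still occasionally yield “str” - the affinity biases, rather than dictates, the outcome.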
Such
scoring is entirely subjective and does not therefore conform to Analytical
procedure. Instead, it conforms to Hofstadter’s psychological predilections
for particular letter and cluster matches, and this imbues the programme with a
deeply “human” nature. Its architecture depends on “sparks” of affinity
- of varying brightness or strength - in practice represented in code by a
codelet (resident in a coderack from which it can be plucked and in which it
subsequently installs potentially a whole “amino-acid” chain of
replacements), which signals promising letter-relationships with a “flash”.
If the relationship is unpromising, each letter is free to form associations
with other partners instead. What determines which codelet is to leave the
coderack at any given moment is its urgency value, an urgency governed by
statistical probability, but a probability that ultimately has no overall power
to force any given codelet into action. Put simply, this allows for the fact
that sometimes - as humans - we get things wrong. A good analogy might be
Manchester United facing a two-one deficit with five minutes to go in the
European Cup Final. Goals are needed and the two on-field strikers, Van
Nistelrooy and Scholes, are tired. With only two of his three substitutions
left to use, and three players on the bench - two of them strikers (Forlán and
Solskjaer), one a defender (Neville) - the probability is that manager Alex
Ferguson will simply swap the strikers, new for old. In most scenarios, this is
what he does. Both men score and Manchester United lift the Cup. Occasionally
the pattern will run differently, and Ferguson will replace only one striker,
preferring - because it is equally important not to concede another goal - to
replace the tiring and injured Ferdinand with Neville. The urgency to score is
thus challenged by the urgency to defend. As it happens, it’s a “bad”
decision. The substitute striker fails to score and Watford become European
Champions instead. Of overall
importance is that the codelets have at any one time, like the football players,
left their mark on the proceedings, their indisputable effect on the cytoplasm.
Their presence and activity have contributed to a solution or result.
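A minimal sketch of this urgency-governed selection might look as follows. The class names (Codelet, Coderack) follow Hofstadter's terminology, but the implementation details are my own illustrative guesswork, not his code:

```python
import random

class Codelet:
    """A small agent waiting in the coderack; its urgency biases, but
    never guarantees, how soon it gets to run."""
    def __init__(self, name, urgency, action):
        self.name, self.urgency, self.action = name, urgency, action

class Coderack:
    def __init__(self):
        self.codelets = []

    def post(self, codelet):
        self.codelets.append(codelet)

    def step(self, rng=random):
        # Selection is probabilistic: urgency weights the draw but has
        # no overall power to force a given codelet into action - so,
        # sometimes, the "wrong" one runs.
        weights = [c.urgency for c in self.codelets]
        chosen = rng.choices(self.codelets, weights=weights, k=1)[0]
        self.codelets.remove(chosen)
        return chosen.action()
```

Posting a high-urgency “swap the strikers” codelet alongside a low-urgency “bring on a defender” codelet will see the first run far more often - but not always, which is precisely the point.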
To
deny or ignore the organic complexity inherent in cognitive processes is to
encourage what might be called the ‘clunk factor’ in robotic cognition and
expression. The clunk factor, with its associations of predictability and
regularity, disallows emergent thought and confers upon machine or programme a
status of low independence and high external control. Meyerhold’s “innate
capacity for reflex excitability” (Braun 1991, p.199) seems to run counter to
this and occupies a position that is extremely similar to Hofstadter’s model.
That is, in certain conditions, the machine or mechanised performer has capacity
for genuinely independent and automatic cognition and expression.
Non-Emergent
Alternatives
It
would be a mistake to assume that all AI programmes share this characteristic.
On the contrary, this “creative emergence” contrasts strongly with the
much-vaunted claims of artificial intelligence in other programmes. The Bacon
programme (Langley et al.,1987) is a
case in point, concerned as it is with the re-discovery of famous scientific
laws and the cognitive means by which they arose. Of this Herbert Simon in “Machine
as Mind” writes:
These
successes in simulating scientific work put high on the agenda the simulation of
other facets of science (inventing instruments, discovering appropriate problem
representations) that have not yet been tackled. (Simon 1996, p.99)
Simon
over-eggs the pudding, ignoring the fact that the “appropriate” data is fed
in along with “appropriate” structural frameworks, thus automatically
leading the programme in predetermined directions. Creativity is “enforced”,
a contradiction in terms, rather than emerging stochastically, a process
analogous to my giving a classroom of students dot-to-dot representations of
bananas, then congratulating them on their completion of same with: “Well
done, you’re very clever - you got good representations of bananas all on
your own, without any help from me!” In fact, given a basic understanding
of number sequences, they were able to arrive at these representations with
minimal creativity and without any necessary grasp of the concept of “a
banana”. I have effectively set them a task at which they are not only bound
to “succeed” but to which I can (falsely) assign them roles reflecting
maximum intelligent independence and minimum external intelligent control. In
Bacon, the data given to the programme were ordered, consistent and logically
convergent, whereas the actual data from which the original momentous scientific
discoveries arose were chaotic, contradictory and disparate (Hofstadter 1998, p.177).
Another
programme reflecting the Quantitative givens of Bacon is the analogy-based
Structure Mapping Engine (Falkenhainer, Forbus, and Gentner, 1990), which maps
correlations between atoms and the solar system. Again, the structures and their
representations are already packaged and presented, removing any necessity for
the programme to develop analogies for itself. Only the particular
relations sought to “prove” the analogy are given, thus while a
“revolve” relationship is identified both between planet and sun and
electron and nucleus, no attempt is made to identify any characteristic on
either side which does not find an equivalent. For example, since moons revolve
round planets, what is the equivalent relationship within the atomic model? None
is offered because no comfortable analogy can be made. As Hofstadter writes:
It
comes as no surprise, in view of the analogy sought, that the only relations
present in the representations that SME uses for these situations are the
following: “attracts”, “revolves around”, “gravity”,
“opposite-sign” and “greater”….These, for the most part, are precisely
the relations that are relevant factors in this analogy. (Hofstadter 1998,
p.183)
Such
programmes, whatever their inherent strengths and weaknesses, are nonetheless
inseparable from the programmes of Hofstadter, if only by virtue of their
ambitions. All attest themselves to be models of at least some aspect of human
cognition, and as such they command attention as regards their suitability as
potential models for artificial intelligence in general and robotic theatre in
particular.
Tabletop
French and Hofstadter’s Tabletop (1995) inhabits a conceptually complex
and more “slippery” territory than Jumbo. Again, analogy is central,
offering a window to concept and perception, aspects of human cognition that
Hofstadter (as Kant before him, (see Harvie 1970)) identifies as
inseparable. High-level perceptual processes, on which analogical thought
depends, are at the heart of human cognitive ability, for which appropriate
representations are required. Perceptual representations, without
recourse to conceptual influence, will be rigid and unable to adjust to context
(Hofstadter 1998, p.192). For example, without a grasp of concept at the same
time as perception, the phrase “The waiter served coke” may possibly mean he
served a solid fuel or he served a narcotic. Encouraging an audience to allow
their concepts to slip and their contexts to slide is plainly inherent in the
domain of surreal and/or comic creativity.
Analogising
requires the perception that some of the aspects of both situations are equal.
The statement “alien toys are the new trolls” uses “trolls” as a
representational tool with which to grasp something of the essence of how we
feel about “alien toys”. They are not interchangeable. They are different in
appearance and size and are made from different materials but we perceive
equality (or a high degree of similarity) in the fact that they are collectable
(you can make up sets or “families”), inexpensive and readily available.
That is, we identify overlapping conceptual attributes, to convey meaning.
Hofstadter divides analogical thought into situation-perception, “taking the
data involved with a given situation, and filtering and organising them in
various ways to provide an appropriate representation for a given context” (Hofstadter
1998, p.180); and mapping, the process of overlapping, as seen with the aliens
and trolls, where the first does not necessarily require the second, but the
second requires the presence of the first.
In Tabletop, Hofstadter and French use variously positioned items -
cutlery, cups, glasses etc. - on opposite sides of a coffee table, to form
analogies based on locational, perceptual and conceptual values, a process
involving the stochastic emergence of choice with the concomitant pressures and
urgencies associated with Jumbo. Thus, if Henry has a cup on the right hand side
of the table, and Eliza has a similar cup in a corresponding position, on her
left hand side, then if Henry touches his cup, Eliza is faced with two obvious
choices if she is to follow Henry’s “Do what I do” instruction. She may
literally touch Henry’s own cup (what Hofstadter would term a shallow or
superficial analogy) or, touch her own cup (a deeper analogy).
A
third, even deeper analogy might involve Eliza touching the empty space opposite
Henry’s cup, to her right. The interaction of concept and perception is
dramatised if we take Eliza’s cup away and replace it instead with a glass.
Eliza now experiences a perceptual pressure to touch something that has a
locational correspondence to the only other item on the table (the glass) whilst
at the same time experiencing a conceptual pressure to touch something of the
same object-classification (a cup). If she is not to yield to the superficial
analogy of touching Henry’s cup, her solution is to touch the glass. This
“mapping” depends on her finding appropriate correspondences. For example,
her glass is a drinking vessel (like the cup) and she assumes it is “hers”
as opposed to “Henry’s” (it is on her side of the table - even though the
table has no line across it, she infers one, thus making use of concepts such as
“territory” and “ownership”). The analogies can be made more complicated
and the pressures and urgencies more strained. One might imagine changing
Eliza’s glass for a fork. What then of object-classification?
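The competition between these pressures can be caricatured in a few lines of Python. The scoring weights are invented purely for illustration - Tabletop itself arrives at such judgements stochastically, not through a fixed formula:

```python
# Score a candidate answer to Henry's "Do what I do" against competing
# pressures. The weights (3, 2, 2) are invented for illustration.
def score(candidate, touched):
    s = 0
    if candidate["owner"] != touched["owner"]:
        s += 3   # deeper analogy: touching one's *own* object, not Henry's
    if candidate["kind"] == touched["kind"]:
        s += 2   # conceptual pressure: same object-classification
    if candidate["position"] == touched["position"]:
        s += 2   # perceptual pressure: the corresponding spot on the table
    return s

henry_cup = {"owner": "Henry", "kind": "cup", "position": "right"}
candidates = [
    henry_cup,                                                 # the shallow analogy
    {"owner": "Eliza", "kind": "glass", "position": "right"},  # Eliza's glass
]
best = max(candidates, key=lambda c: score(c, henry_cup))
```

With Eliza's cup replaced by a glass, ownership and location outvote object-classification, so the glass wins over the superficial choice of touching Henry's cup.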
Here
high-degrees of cognitive latitude and flexibility are essential. What is
important is that the architecture of Tabletop, apart from requiring usage of
conceptual and perceptual processes, allows itself a flexibility and variation
of approach, strongly reminiscent of a human counterpart. This is evidenced by
the programme running a particular tabletop “problem” or configuration fifty
times, with the resultant statistical analysis demonstrating a variety of
solutions (of varying complexity and represented in terms of “structure
values”) and displaying what Hofstadter and French deem to be a human
propensity to reach for the easiest, or most superficial, solution more
frequently than for the more complex (Hofstadter 1998, p.392). The particular
component that aids the inherent dynamism is the “parallel terraced scan”,
which permits possible solutions to be simultaneously
sought on the basis of probability or likelihood:
The first codelets that run are thus inspired by the touching-action, and
they scan the table in a biased manner, giving probabilistic preference to
certain areas of the table…. There are codelets that look directly across the
table from the object Henry touched to see what, if anything, is there. Other
codelets look diagonally across the table….(Hofstadter 1998, p.386)
Thus
the programme is biased in the routes its codelets take, forsaking “across the
board” (Hofstadter and French term these “egalitarian”) forays for more
promising routes or correspondences, and ascribing each codelet an “urgency”
value. The programme’s capacity for a growing awareness of what it is, its
self-consciousness, depends to a large extent on the measure of its own
coherence (by means of the aforementioned structure values), as set against the
time it takes to reach a solution.
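A toy version of the parallel terraced scan might run as follows, with each unit of exploration allocated in proportion to a route's current promise. The route names and promise values are illustrative only, not drawn from Tabletop's actual code:

```python
import random

def terraced_scan(routes, steps, rng=random):
    """Explore several routes 'in parallel', biasing each unit of
    effort towards the more promising routes without ever wholly
    abandoning the egalitarian alternatives.

    `routes` maps a route name to its promise (a positive weight).
    Returns how much exploration each route received.
    """
    depth = {name: 0 for name in routes}
    names = list(routes)
    weights = [routes[n] for n in names]
    for _ in range(steps):
        chosen = rng.choices(names, weights=weights, k=1)[0]
        depth[chosen] += 1
    return depth

routes = {"directly-across": 6, "diagonally-across": 3, "same-side": 1}
```

Over many steps the “directly-across” codelets do most of the looking, yet the “same-side” route is rarely entirely starved - the scan is biased, not blinkered.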
Tabletop’s sophistication lies in its ability, by means of
situation-perception and mapping, to recognise and establish correspondences in
environments that are considerably less clear-cut than the territories explored
by its predecessors, Copycat and Jumbo. This “mushier” terrain, as
Hofstadter calls it, is closer to the areas where high-level human perception[4]
and the accompanying conceptual cognition operate, and requires a programme that
is both dynamic and self-regarding, with a capacity to explore and evaluate
avenues of thought, choosing the more likely while at the same time disregarding
the less promising. Tabletop’s importance lies not only in the fact that it is
a working example, albeit on a micro-level, of the sort of artificial
intelligence that might be developed for future use in robotic theatre in
extremis, but also in the fact that it provides us with a useful theoretical
model with which to explore further the elements and degrees of independence,
external control, control and absence regarding the machine/performer and the
director/programmer. Specifically, it offers us the integrated bottom-up and
top-down parallel dynamic, in conjunction with a capacity for data
untouchability, with which we can re-evaluate, and if necessary adjust, the
theatrical dynamics, and their corollaries, of independence and control.
Hofstadter’s
definition of bottom-up and top-down processes arises from his Seek-Whence
programme (1983):
“Bottom-up” here describes perceptual acts that are made very locally
and without any context-dependent expectations; “top-down” pertains to
perceptual acts that attempt to bring in concepts, and to extend patterns, that
have been noticed. (Hofstadter 1998, p.63)
Whilst
the “bottom-up” initiates the perceptual process, the “top-down”
influence gradually and increasingly impinges on it, until a “solution” is
reached. Thus the two processes run in parallel and can be said to be
integrated.
Bottom Up, Top Down Processes: A
Theatrical Equivalent Between Machine/Performer and Programmer/Director?
This
can be given an approximate theatrical equivalence that can be illustrated by
re-creating a rehearsal for a small scene where the machine/performer, in
“complete” darkness and in a state of physical paralysis, must say the line
“He’s dead.” The superficial intention of making these restrictions is to
limit expression as far as possible to the purely vocal. Darkness helps in this
regard, but because it is usually incomplete (one can, eventually, see in the
dark) it is supported by absolute physical stillness, an “absolute”
difficult to fully achieve with a human performer, but obviously much more
likely to be realised by the machine/performer. This disregard of the panoply of
elements that make up the performance text (a comprehensive summary of which can
be found in Bennett (1990)) is not to devalue their importance, but is made for
a deeper theoretical reason, namely to narrow the focus in order to create a
microdomain, by which the processes we are seeking to understand and extrapolate
may be more easily grasped. This is more likely to occur through the diminution
of low-level perceptual activity, in this case, the sensory deprivation of
vision, and a commensurate concentration on higher-level perceptual activity,
the expression and comprehension of the words.
To return then to “He’s dead”. The machine/performer, when
presented with the line for the first time, may well struggle to find a way to
say it that they, or the director/programmer, is entirely happy with. Indeed, it
would be strange if this were not the case, particularly if the line arrives (as
it has) on its own, without any theatrical context or “story” to it. The
performer’s only real and somewhat desperate recourse is to “fire off” a
variety of vocalisations and hope that after a while, they will “hit” the
right one and the director/programmer will call a halt to the proceedings. This
will be unusual. If the “scene” is a “scene” and not an exercise whereby
the performer has to guess the context from which the line has sprung (in other
words, if it is a “story” or part of a “story”, if it has been or is
going “somewhere”), the performer-dependent scatter-gun approach will not
suffice. A context is plainly required, in this case one provided (at least
partially) by the director/programmer - for simplicity, the possibility of the
machine/performer largely supplying this themselves is ignored. The
vocalisations/data now have a source of concepts to latch onto or try out, and
the director/programmer consequently a “context-dependent expectation”. If
the machine/performer is involved in a bottom-up process, the
director/programmer may be said to be involved in a top-down process, some way
from the perceptual integration that Hofstadter has established as a cognitive
requirement for creativity and problem-solving.
The director/programmer now
supplies more “information” - for example, that it is Tom, her character’s
partner whom she has loved all her life, who is dead - and invites her to try the
line again. Almost immediately the performer is able to latch onto the concept
of “grief”, and vocalise accordingly. This “marking”, whilst not
entirely successful, is a good deal more accurate than the previous (unaided)
efforts and provides her and the director/programmer with a bearing against
which new attempts can be made. For example, “more” context is provided by
the information that, say, Tom had been increasingly preoccupied by a sense of
impending doom. This extra context now gives the initial “grief” a more
distinctive flavour when the line comes again, and another bearing is plotted
for further work still. The mapping of these points is essential to an
integrated bottom-up/top-down process, where the “solution” being sought is
ultimately represented by the appropriate positioning of a given bearing.
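The gradually increasing top-down influence described above can be given a minimal numerical sketch - my own construction, not Hofstadter's, with the affinity values invented for the “He's dead” example:

```python
# Bottom-up proposals (candidate vocalisations) start on an equal
# footing; a top-down context increasingly biases the scores until
# one "solution" dominates.
def integrate(proposals, context_affinity, steps):
    scores = {p: 1.0 for p in proposals}
    for t in range(1, steps + 1):
        influence = t / steps                 # top-down weight grows over time
        for p in scores:
            scores[p] += influence * context_affinity.get(p, 0.0)
    return max(scores, key=scores.get)

# Vocalisations of "He's dead", scored against a "grief" context:
affinity = {"flat": 0.1, "joyful": 0.3, "grieving": 0.9}
print(integrate(["flat", "joyful", "grieving"], affinity, steps=10))  # → grieving
```

Here the bottom-up proposals are never discarded outright; the context merely makes one of them progressively more attractive, mirroring the plotting of successive bearings.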
Since
there is a duality at play here, with the two dynamics merging into one
“solution”, we can look at the “grief” bearing as the result of two
co-ordinates whose “values” represent on the one hand, the top-down concepts
and contexts of the director/programmer, and on the other, the bottom-up data of
the machine/performer. However, to stretch the perceptual process and robotic
performance process analogy further and give it a higher structure value one
might reverse the attributions. The experiment suggests that the bottom-up
process can be ascribed to the director/programmer and the top-down process to
the machine/performer, with data attaching to the first and concept and context
attaching to the second. This can plainly occur by the performer expressing the
“grief” in such a way as to cause an altogether different concept to take
shape in the director/programmer’s scheme. Suddenly, for example, within the
same context, the machine/performer may express “He’s dead” with an
unexpected element of joy, a moment that may radically reshape the
director/programmer’s expected patterns and sequences. Hofstadter’s system
plainly allows for this flexibility, which we can accommodate by confirming that
the roles are interchangeable (to varying degrees, according to the contracts we
mentioned earlier) whilst the twin processes remain as constants. As with the
independence/control model, there is mutual creativity, conferred here by a
“slippability” of status, though with the caveat (again) that there is a
“final say”. This is defined by Hofstadter in his discussion of Suber’s
Nomic game (1982) as “untouchability”; and, as with “final say”,
“untouchability” is not always untouchable:
Now,
for the ultimate in flexibility, none of these levels should be totally
untouchable….any recognition programme must have at its core a tiered
structure….in which there are levels that are “easily mutable”,
“moderately mutable”, “almost mutable” and so on. (Hofstadter 1985,
p.86)
This
capacity appears in Tabletop’s architecture inside the Workspace, where the
“highly untouchables” inhabit the Worldview, an inner sanctum where “élite”
perceptual structures reside, but from which they can, if necessary, be demoted.
An important point here is that the creative act in which our duality of
machine/performer and director/programmer is involved paradoxically depends as
much on a potential for temporary “destructions” - the breaching (and
surrendering) of the defences of “untouchables” and “final says” - as on
their preservation.
In the analogising between Tabletop and the independence/control model, correspondences only go so far. However, Hofstadter’s Tabletop and Jumbo confirm the inherent qualities of flexibility and self-reflexivity that accompany the cognitive processes of creativity and problem-solving. Specifically, the variable scalings of independence, external control, absence and control that pertain to the dynamic relationship of the machine/performer and the director/programmer in robotic theatre in extremis bear a strong resemblance to the top-down/bottom-up subcognitive dynamic which represents the process of the individual’s thought. Furthermore, examination of the latter’s tendency to independently re-ascribe status according to variable pressures and urgencies suggests that the “control” statuses of the machine/performer and the director/programmer are equally variable, according to similar creative pressures and urgencies. Above all, Hofstadter’s work is evidence of a dynamic, organic artificial intelligence that can be appropriated in time to form the basis of robotic theatre in extremis.
Bibliography
Barnes, Ralph M., (1968) Motion and Time Study:
design and measurement of work. New York; London: John Wiley and Sons.
Bennett,
Susan, (1990) Theatre Audiences:
a theory of production and reception. London:
Routledge.
Boyle, Charles, Wheale, Peter, Sturgess, Brian, (1984) People,
Science and Technology: a guide to advanced industrial society. Brighton:
Wheatsheaf.
Braun, Edward, ed. (1991) Meyerhold on Theatre. London: Methuen
Drama.
Dery,
Mark, (1996) Escape Velocity: cyberculture
at the end of the century. London: Hodder & Stoughton.
French,
Robert, M., (1996) Subcognition and the Turing Test. In: Millican, P.J.R.,
Clark, A., ed. The Legacy of Alan Turing: machines and thought. Volume 1. Oxford: Clarendon Press, (1996), pp. 11-26.
Haraway,
Donna, (1991) Simians, Cyborgs, and Women: the reinvention of nature. New
York: Routledge.
Harvie, Christopher, ed., Martin, Graham, ed., Scharf,
Aaron. ed. (1970) Industrialisation and Culture 1830-1914. London:
Macmillan for the Open University Press.
Hoffmann, E.T.A., (1969) Selected Writings of E.T.A.
Hoffmann Vol.1. Chicago; London: University of Chicago Press.
Hofstadter,
Douglas, (1985) Metamagical Themas:
questing for the essence of mind and pattern. London: Viking.
Hofstadter,
Douglas, et al. (1998) Fluid Concepts and
Creative Analogies: computer models of the fundamental mechanisms of mind and
thought. London: Penguin
Books.
Hulten,
Pontus, (1987) Jean Tinguely: A
Magic Stronger Than Death. Milan: Bompiani.
Schlemmer, O., Moholy-Nagy, L., Molnar, F., (1971) The
Theater of the Bauhaus. Middletown, Connecticut:
Wesleyan University Press.
Shadow
Robots, Biped. URL: http://www.shadow.org.uk/projects/biped.shtml
[1999]
Shadow
Robots, Dextrous Hand/Arm. URL: http://www.shadow.org.uk/products/hand.shtml
[1999]
Simon,
Herbert, (1996) Machine as Mind. In: Millican, P.J.R., Clark, A., ed. The
Legacy of Alan Turing: machines and
thought. Volume 1. Oxford:
Clarendon Press, (1996), pp. 81-102.
Stelarc,
Bring on the Dancing Robots URL: http://www.hero.ac.uk/culture_and_sport/bring_on_the_dancing_robo976.cfm.
[2002]
Werger,
Barry Brian, (1998) Profile of a Winner: Brandeis University and Ullanta
Performance Robotics’ “Robotic Love Triangle”. AI Magazine, 19(3).
Whitby,
Blay, (1996) The Turing Test: AI’s Biggest Blind Alley? In: Millican,
P.J.R., Clark, A., ed. The Legacy of Alan Turing: machines and thought. Volume 1. Oxford: Clarendon Press, (1996), pp. 53-62.
[1] The focus is on the relationship between director and performer, as opposed to that between performer and audience. As with Hofstadter’s “microdomain”, this is for the sake of analytical simplicity. A largely improvised “unscripted” performance, the material of which may be derived from or dependent on the responses of performer and audience (as occurs, for example, with stand-up comedy), opens up a fascinating but necessarily more complex dimension, which is not examined here.
[2] It is interesting to note
that both SRL and Tinguely seem to choose destruction as a dramatic outcome.
This might be due to a desire to shed associations of predictability and
regularity that are frequently attached to the machine. As Hulten writes:
“the polar opposite of repetition is destruction; it is only natural that
destruction should represent a powerful attraction and temptation for people
whose work is of a repetitive nature” (Hulten 1987, p.68).
[3] It is interesting to note that the notion of architecture is also a dominant preoccupation of Schlemmer and the Bauhaus. In spite of Schlemmer’s desire for mathematical rigour, his Schaubühne or visual stage betrays something of an organic flexibility, and he terms it a “mechanistic organism.” (Schlemmer 1971, p.22)
[4] “Low-level perception….involves the early processing of information from the various sensory modalities. High-level perception…involves taking a more global view of this information, extracting meaning from the raw material by accessing concepts, and making sense of situations at a conceptual level.” (Hofstadter 1998, p.169)