KM frontiers: Self-signifying knowledge

In the first of a series of articles on self-signifying knowledge, Jan Wyllie
discusses the philosophy of interpretation and narrative.

“What the overemphasis on the idea of content entails is the perennial, never
consummated project of interpretation.”

“Like the fumes of the automobile and of heavy industry which befoul the
urban atmosphere, the effusion of interpretations … today poisons our
sensibilities.”

Susan Sontag, Against Interpretation, 1964

Against interpretation 2.0


Both the arts and the sciences have come to be dominated by an endless
process of interpretation and reinterpretation by more and more people with
biases and vested interests. For the purposes of this article, interpretation is
not used in the broadest sense, in which all sensory input is assumed to be
interpreted and given meaning by the brain. In this sense, all facts are
interpretations.

It is used in Susan Sontag’s sense: “… interpretation means plucking a set of elements (the X, the Y, the Z, and so forth) from the whole work. The task of interpretation is virtually one of translation. The interpreter says, Look, don’t you see that X is really – or, really means – A? That Y is really B? That Z is really C?”

What is it that is being interpreted? It is content, whether in the form of speech, image, text, even music. Content is king. Its interpretation is deemed
to be the way towards discovering the truth (or not). The ever-present danger
is that by finding and interpreting the content that we agree with – that is, the
content that fits our own preconceived knowledge – sources of information
can become seriously discredited and their value diminished. The danger
becomes even greater when propagandists with vested interests enter into
the discourse.

This kind of echoing process, which takes up so much time and effort by
knowledge workers, leads to what Dave Snowden, founder of Cognitive Edge, calls “entrainment”, often blinding them to the significance of new knowledge and/or experience. It was Karl Popper’s insight that scientific theories can never be proved, only falsified. For Popper, the high rate at which scientific
theories are found to be untrue is a sign of the strength of the scientific
method, not a weakness.

Business and government cadres are entrained to manage events by interpreting what they mean based on a mysterious combination of feeling and rationality in order to make the right decisions. Most management
and rationality in order to make the right decisions. Most management
systems – from divine-right monarchy and democracy to Taylorism and
business process re-engineering – tend to be blinded by their assumptions
and prejudices to significant changes in the ‘real world’, lost in the smoke and
mirror world of interpretation.

The analytical tools of cause and effect can be all too quickly swamped by
complexity, making them a very dangerous basis on which to determine what
is significant and thence to make decisions. The consequences are a plethora
of misjudgements and mistakes contributing to the environmental and
economic devastation that we see almost everywhere we look.

The purpose of the traditional practice of information and knowledge management (KM), not to mention library science, is to provide access to
content, so knowledge seekers can find the pertinent information to interpret
for their purposes.

Although a great deal of ‘progress’ may have been achieved in certain domains by traditional knowledge content storage, retrieval and interpretation
processes, it has been at great cost in the wider context of economic and
environmental sustainability. It is also arguable that the main achievements
resulted from intuition (seat of the pants) and serendipity (luck) rather than
from the enormous infrastructure of interpretation operating in business,
academia and government.

Nowadays, though, with the information explosion brought about by the internet, a realisation is beginning to dawn that traditional KM systems are
becoming overwhelmed by the toxic combination of volume and complexity.
Nobody can possibly gain the crucial bigger-picture view by reviewing and interpreting all the pertinent content, even if it were served up by the perfect mind-reading retrieval engine. There are just not enough hours in the day or
years in a lifetime.

Google and its ilk are highlighting the issues of information overload and knowledge incoherence as increasingly acute and unsolved problems. Twitter, with people following each other’s tweets in the tens of thousands, is turning it into a crisis, while pointing towards an adaptation,
which some are even saying is a new reflective faculty of human
consciousness.

Self-signifying data requirements

“If excessive stress on content provokes the arrogance of interpretation, more extended and more thorough descriptions of form would silence. What is
needed is a vocabulary – a descriptive, rather than prescriptive, vocabulary –
for forms.”

Susan Sontag, Against Interpretation, 1964


What if people are missing out on a potentially new knowledge perspective
where useful intelligence could be extracted not from the interpretation of
content, but by making inferences from information flows? Intelligence in this
context would consist of significant information, which would otherwise have
been missed, delivered in as close to real time as possible. It is knowledge
that otherwise could not be known, whatever the amount of interpretation by
any number of people.

Significance would no longer be hidden in the content, but would emerge from
the patterns of content without interpretation (a process Dave Snowden calls
“disintermediation”). Meaning would no longer be exclusively hidden in
content, something to be revealed by interpretation; it would also lie in the significance inferred from studying the form (structure) of content flows – inferring what the data could be signifying and where it points, rather than interpreting where it came from. The object of the study is the form of
information, not its content. And once an information flow had been
formalised, it could be analysed statistically in meaningful ways. The output
would be literally self-signifying (to use yet another Snowden term).

The most important thing about inferences made from self-signifying data would be that they enable people to ask different kinds of questions while at the same time providing statistical indicators suggesting (changing) answers. What kind of questions? Questions like: ‘What will the future of consumerism be?’ (See sidebar.)

But what about the content of the material being analysed? It would still exist as the source of the analysis, but its role would change from being something to interpret, or agree or disagree with, to serving as an indicator in a meta-analysis. Of course, if the analysis is not to occur in a complete vacuum, reference would have to be made to what sources are saying.

Here great care would have to be given to ensure that the content is reported
as evidence, rather than interpreted as meaning something. Quoting
representative sentences, rather than rewriting them, would be a useful
strategy. Indirect language where the ‘I’-word is never used would be another.

There was a time when journalism expected its practitioners to be objective reporters. Now, the fashionable thinking is that nobody can be objective, and
reporters are biased in their own interests (whether vested or not).
Unfortunately, instead of using this knowledge to resist bias as much as
possible by taking a disinterested perspective, much journalistic practice is
now increasingly about interpretation, show-biz and broadcasting entrained
ideas, rather than straight reporting and critical thinking. Nevertheless, a clear
distinction between reporter and interpreter should be a vital one for
journalism, and would be a necessary condition for compiling self-signifying
data.

As for the inferences that can be made on the basis of this data, they would
derive first from the patterns of information in the flows of data. The ability to
compare flows over differing periods of time would enable a high degree of
cross checking between flows as an ongoing means of verifying or falsifying
inferences.

A bit like markets and indeed the ‘real world’, the world of self-signifying data
would not be controlled. It would be a world of both trends and surprises
independent from the hurly burly of interpretation. That is not to say that it
would have any greater claim to truth, which is a questionable concept in
itself. Like any scientific data, self-signifying data would be simply another
facet of human reality.

Susan Sontag was asking for a new language to describe the forms of
communications flow. Unlike entrained thought, which closes off possibilities
and questions in its quest for truth through interpretation, the magic of
language is that it can describe and classify instances, while at the same time
being open, so that it can describe new, unthought-of possibilities. Even highly
simplified, artificial languages, such as computer programming codes, can
create vast numbers of applications that were not even conceived by the
language creators.

So, self-signifying knowledge would require new artificial languages that describe the form of the content, rather than the content itself.

Pioneering practice

“To interpret is to impoverish, to deplete the world – in order to set up a shadow world of ‘meanings’. It is to turn the world into this world. The world,
our world, is depleted, impoverished enough. Away with all duplicates of it,
until we again experience more immediately what we have.”

Susan Sontag, Against Interpretation, 1964

Self-signifying knowledge practices do, in fact, exist. Until recently, however, they have remained outside mainstream experience, practised only by a few pioneers who, by definition, were ahead of their time.

Various disciplines under the heading of content analysis have been working successfully in this way since the 1930s. For example, Allied intelligence agencies were able to infer German troop movements in World War Two from a statistical analysis of public train timetables.

Eugene Garfield’s Science Citation Index, which counts the number of times scientific papers are cited in other scientific papers – when, by whom and
about what – has been an important influence on scientific thought for nearly
40 years. The self-signifying data that it generates has enabled scientists to
see new patterns of thinking emerging, which would have been simply
impossible using traditional methods of interpretation.
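
The mechanics behind such an index are, at root, simple counting. As a purely illustrative sketch – not Garfield’s actual system, and with invented papers, years and reference lists – citation tallies of this kind can be derived from nothing more than a record of which papers cite which:

```python
# Illustrative sketch of the counting principle behind a citation index:
# given which papers each paper cites, tally how often and when each paper
# is cited. All paper names, years and references here are invented.
from collections import defaultdict

papers = {  # hypothetical corpus: paper -> (year published, papers it cites)
    "Paper A": (2001, ["Paper C"]),
    "Paper B": (2003, ["Paper A", "Paper C"]),
    "Paper D": (2005, ["Paper A", "Paper B", "Paper C"]),
}

citations = defaultdict(list)  # cited paper -> years in which it was cited
for citing, (year, refs) in papers.items():
    for cited in refs:
        citations[cited].append(year)

for cited, years in sorted(citations.items()):
    print(f"{cited}: cited {len(years)} times, in {sorted(years)}")
```

A sudden burst of citations of one paper, or a cluster of papers citing each other, is the kind of self-signifying pattern referred to above.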

In the UK, two companies have been working in the field of generating self-signifying data for years – Cognitive Edge (formerly Cynefin)1 and Open Intelligence (formerly Trend Monitor).2

According to Snowden, Cognitive Edge conceives of language, not as a deep
structure, nor a mental model, nor even a semantic network, but as another
example of a co-evolving, complex adaptive system. People use language to
make up and tell each other narratives of their experience using a wide variety
of media. For Cognitive Edge, these narratives or stories, along with how the
tellers feel about them, are the source of the self-signifying indicators about a
group’s thinking.

How is it done? The first step is the collection of stories, ‘narrative fragments’,
as they are termed. Narratives are collected from groups of people and/or (I
presume) different types of media. At this point, all that exists is content that
would, in the traditional way, have to be read and interpreted for any of its
significance to be appreciated.

What is promised, though, is self-signification. Cognitive Edge has invented and patented some nifty intellectual/software tools for doing just that, the
latest of which is called a triad.

A triad is a triangle whose three corners each carry one pole of a question about the associated narratives. Here is an example of a triadic question from the
Cognitive Edge website.

“I would characterise the leadership behaviour in this story as: altruistic (top of
triangle); assertive (bottom left); analytical (bottom right).”

All the user has to do is move their mouse to position the round indicator at the place within the triangle which best represents their view of the significance of the underlying story. The neat thing is that this self-signifying
triangulation yields a three-dimensional graphic, which signifies without any
intermediaries or interpretation what different groups think about questions,
where they might be open to change, and where they might resist. This
knowledge can then be used by client organisations to manage change using
what are called ‘safe-fail’ interventions.
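
As a hedged illustration of the geometry involved – a minimal sketch, not Cognitive Edge’s patented SenseMaker implementation – the marker’s position inside the triangle can be read as three weights, one per corner label, using barycentric coordinates. The corner coordinates and the respondent’s position below are invented:

```python
# Minimal sketch (assumed geometry, not the actual SenseMaker software):
# convert a marker position inside a triad triangle into three weights,
# one per corner label, using barycentric coordinates.
def triad_weights(p, a, b, c):
    """Weights of point p relative to triangle corners a, b and c (sum to 1)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    denom = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    wa = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / denom
    wb = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / denom
    return wa, wb, 1.0 - wa - wb

# corner labels from the example above: altruistic (top), assertive
# (bottom left), analytical (bottom right), laid out on an invented unit triangle
corners = {"altruistic": (0.5, 1.0), "assertive": (0.0, 0.0), "analytical": (1.0, 0.0)}
a, b, c = corners.values()

# a respondent places the marker a little towards the 'analytical' corner
weights = triad_weights((0.6, 0.2), a, b, c)
for label, w in zip(corners, weights):
    print(f"{label}: {w:.2f}")  # altruistic 0.20, assertive 0.30, analytical 0.50
```

Aggregating such weight triples across many stories and many respondents is what would let group-level patterns become visible without anyone interpreting the individual stories.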

All a user needs to know is how to do it, requiring a three-day training course,
and SenseMaker software. The business model seems to consist of an
income stream from signing up trainees for accreditation, then using the accredited SenseMaker practitioners to sell bespoke software applications
into big organisations. Come to think of it, doesn’t IBM do that? This is not
surprising since the company was originally a 1980s management spin-off
from IBM.

These self-signifying techniques have been applied in a huge variety of different contexts from museum staff improvement programmes to anti-
terrorist intelligence research.

Snowden emphasises that the really hard work is in designing the questions,
especially ones with a meaningful triadic structure. He speaks of hiring
anthropologists by the man-year on one project. On the other hand, he says, children using the tools can produce significant results as well as, if not better than, adults.

Like Cognitive Edge, Open Intelligence also has stories as its source. The raw
material, from which the self-signifying indicators are extracted, is what is
being said traditionally in published sources, now increasingly in blogs and
potentially in real-time tweets.

Key phrases and links to the source publisher are classified using a faceted
schema currently designed to capture data on the subjects of economy, the
environment and energy. The result is to be able to measure information flows
through channels pertinent to useful questions, such as
consumers/attitudes/change, or economy/risks, or business/opportunities.

Unlike most taxonomies, which are designed to help users retrieve documents to be interpreted, Open Intelligence’s schemas are designed with the purpose
of posing useful questions to the sources, generating indicators that directly
show significant changes in coverage patterns. Trends in coverage can be
reported and inferences drawn by analysts.
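
To make the idea of measuring information flows through such channels concrete, here is a toy sketch; the channel names echo the examples above, while the clippings, months and counts are invented for illustration:

```python
# Toy sketch of counting an information flow per faceted channel per month.
# Channels follow the article's examples; the records themselves are invented.
from collections import Counter

clippings = [  # each clipped key phrase tagged with a channel and a month
    {"channel": "consumers/attitudes/change", "month": "2009-03"},
    {"channel": "consumers/attitudes/change", "month": "2009-04"},
    {"channel": "economy/risks",              "month": "2009-03"},
    {"channel": "business/opportunities",     "month": "2009-04"},
    {"channel": "consumers/attitudes/change", "month": "2009-04"},
]

flow = Counter((c["channel"], c["month"]) for c in clippings)

# it is the flow itself, not the wording of any clipping, that gets reported
for (channel, month), count in sorted(flow.items()):
    print(f"{month}  {channel:<30}  {count}")
```

Trends would then be read from how these counts move over time, not from interpreting the underlying documents.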

Open Intelligence is also developing its own Web 2.0 database application,
which not only enables groups to share intuitively common faceted classification schemas when collecting, analysing and synthesising source material, but also displays information flows in real time, in standard time-series graphical formats. The software is now in its alpha test phase. Until this software becomes available, Open Intelligence (www.openintelligence.wordpress.com) is using Amplify (www.amplify.com)
as a Web 2.0 repository for its current self-signifying data collection. Even in
collection mode it provides a free key quotes clipping service for those
interested in the subjects that it covers.

When its software becomes bulletproof, Open Intelligence aims to tap the
intellectual potential of social networks to build collections of classified self-
signifying data, open and free for all to see.

While both Cognitive Edge and Open Intelligence have been around in various forms for years, in the past few months a whole slew of self-signifying data providers has sprung up in the ‘Twitterverse’, with names such as NewsTrendz and Trendrr. Although as applications they are very primitive, they are bringing the issue of self-signifying knowledge to the wider public. These software services and their implications will be reviewed in the next article on self-signifying knowledge.

Notes
1. The description of the work of Cognitive Edge is based on an interpretation
of a talk given by Dave Snowden to the International Society for Knowledge
Organisation (ISKO UK), on April 23, 2009. Given that all interpretation is
subject to degrees of misinterpretation, the sense that I made of it might not
be quite the same as Dave’s. So apologies for any misinterpretation, but
thanks for the sense;
2. The description of the work of Open Intelligence derives from direct
experience, so is not an interpretation, but is a report;
3. Susan Sontag was one of the most important American thinkers of the
second half of the 20th century. In a long career, she authored numerous
books and articles about Western culture and modes of communication. She
was always mistrustful of fashionable beliefs and once wrote that: "The most
interesting ideas are heresies".

Jan Wyllie is a founding director of Open Intelligence and Trend Monitor. He is also author of one of Ark Group’s most successful reports, Taxonomies:
Frameworks for Corporate Knowledge. He can be contacted at
jicw@btinternet.com

Sidebar: Self-signifying knowledge in action - A true story


In 2003, we published a report with the title Consumers: Going for Broke. Its
sources were a representative sample of the best English language business
and daily press in the UK and the US. From the analysis of the data we were
collecting then, it was obvious that consumers’ balance sheets would not add
up. Even then it was not just obvious, but demonstrable, that excess debt
would pop the housing and spending bubbles.

Below is the consumers’ collection schema that we used to create the pertinent information flows. We wanted to draw inferences about consumers’ economic behaviour, hence the balance-sheet-style organisation of the schema; a small illustrative tally against it follows the schema.

ASSETS:
Housing; and
Savings.

INCOME:
Employment; and
Pay and conditions.

EXPENDITURE:
Debt repayments;
Spending; and
Taxation.

ATTITUDE:
Change; and
Confidence.
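
As flagged above, the following is a rough, purely illustrative sketch of how collected stories might be tallied against this schema; the story classifications are invented, and this is not Open Intelligence’s actual software:

```python
# Illustrative sketch only: the sidebar's collection schema as a simple facet
# structure, used to tally how many collected stories fall into each facet.
# The story classifications below are invented examples.
schema = {
    "ASSETS": ["Housing", "Savings"],
    "INCOME": ["Employment", "Pay and conditions"],
    "EXPENDITURE": ["Debt repayments", "Spending", "Taxation"],
    "ATTITUDE": ["Change", "Confidence"],
}

stories = [  # hypothetical classifications of collected stories
    ("EXPENDITURE", "Debt repayments"),
    ("ATTITUDE", "Change"),
    ("ATTITUDE", "Change"),
    ("INCOME", "Employment"),
]

tally = {cat: {facet: 0 for facet in facets} for cat, facets in schema.items()}
for cat, facet in stories:
    tally[cat][facet] += 1

for cat, facets in tally.items():
    for facet, n in facets.items():
        if n:
            print(f"{cat}/{facet}: {n}")
```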

In October 2002, in the early days of the ‘Goldilocks economy’, the report
inferred that:

• Indicators suggest that consumers suffer from a ‘money illusion’. They have not understood the difficulty a low inflationary (or, even worse, a deflationary) environment causes for the repayment of their unprecedented level of debt;
• Indicators in the INCOME/employment category suggest that jobs will
be lost at an increasing rate, so personal debt defaults are liable to
become a growing problem for banks and credit suppliers. These
problem loans will be in addition to the already growing number of
insolvencies and non-performing loans in their corporate loan
portfolios; and
• Although we were right on the money with our inferences indicating a
credit crunch long before it happened, one of the most interesting
findings at the time (under ATTITUDE/Change) was a few stories that
had been classified as ‘thrift’ and ‘poverty’. There was even one
instance classified as ‘profligacy’.

Now, six years later, we are in the process of writing an update using a year’s
worth of material from more or less the same sources and using exactly the
same schema. It is hardly surprising that the consumer balance sheet looks a
lot worse now than it did in 2003. However, once again, it is the
ATTITUDE/Change category that holds the most interesting data.

Not only has the category grown enormously, both in absolute numbers and
compared to consumer coverage as a whole, but thrift is now portrayed as a
mainstream, rather than a marginal activity. Even more fascinating is that
analysis has found new information flows under ‘conscience’, ‘satiation’,
‘self-reliance’, ‘personal development’, ‘health’, ‘crime’, ‘post-consumer
economics’ and (gulp) ‘revolution’.
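
The absolute-versus-relative comparison behind that observation is simple arithmetic; the figures in this sketch are invented purely for illustration:

```python
# Invented figures: compare a category's count and its share of total
# consumer coverage between two collection periods.
periods = {
    "2003": {"attitude_change": 12, "all_consumer_items": 600},
    "2009": {"attitude_change": 95, "all_consumer_items": 900},
}

for year, d in periods.items():
    share = d["attitude_change"] / d["all_consumer_items"]
    print(f"{year}: {d['attitude_change']} items, {share:.1%} of consumer coverage")
```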

One possible inference from this data is that the ‘recovery’ is under increasing
threat from significant consumer attitude change.

Note how these kinds of self-signifying indicators can flag up important changes in behaviour long before they can be detected in conventional
opinion polls, which is why they should be used before framing polling
questions.
