
Automation of the linguistic translation processes: A study on viability.

I. M. R. Pinheiro

Abstract: In this paper, we present a scientific discussion of both the viability of replacing the
human translator with linguistic translation software and the impact of such a replacement on the
quality of the translated text.
Keywords: translation, Fuzzy Logic, Paraconsistency, Sorites Problem, language, logic.

1. Introduction

Our introduction has been split into four sections:

1.1) The Sorites Problem;
1.2) The logical system Fuzzy Logic, as originally proposed by Zadeh, and the Sorites Problem;
1.3) The paraconsistent logical systems and the linguistic translation processes;
1.4) Intersection of the previous items.

1.1 The Sorites Problem

The word Sorites derives from the Greek word soros and originally referred to the following puzzle
(Pinheiro 2006a):

“Would you describe a simple grain of sand as a heap?
No.
Would you describe two grains of sand as a heap?
No.
You must admit the presence of a heap sooner or later, so where do you draw the line?”
[Hyde 1997]

Sorites Problem is the linguistic expression universally accepted as the referent for any problem
that can be considered a variation of the puzzle just quoted, which is also known as The Heap
[Hyde 1997].
The problem contained in the puzzle is that of determining the specific step in the sequence, that is,
the number of the grain added, at which the previous 'non-heap of sand' became a 'heap of sand'.
The original Sorites Problem starts with a 'non-heap of sand' (accepted as such by the audience),
and one grain of sand is added at a time until, as far as the audience's judgment abilities go, there
clearly is a 'heap of sand' [Hyde 1997].
The question that must be answered before we can be believed when stating that the Sorites
Problem has been solved is: which decision strategy, regarding the precise moment in time at which
a 'non-heap of sand' started being a 'heap of sand', bears the strongest, or perhaps absolute (if an
absolute is possible in this case), scientific support?
In other words, the original Sorites Problem consists in determining, as scientifically as possible,
both the location and the nature of the separation region (in case such a region exists) between
'non-heaps of sand' and 'heaps of sand'.
Human language has been created from observing the personalization of communication (in groups,
communities, special isolated individuals, and others), so the idea of the Sorites Problem comes as a
shock in the linguistic métier.

The Sorites Problem is about us, human beings, looking for the 'absolute' merit of an entity with
regard to the label we give it. Instead of worrying about what we, as individuals, are 'putting over
the entity', which was the motivation for the creation of language, we are now worried, basically,
about 'what the entity has to say about it'. It is as if the entity were asking us: 'OK, you, Kate, think
I am a heap of sand, but you, Michael, think I am not a heap of sand, at this stage, with x grains of
sand; so am I a heap or not? Can you teach me what I am right now and convince me of why I am
such, please?'.
The Sorites Problem has been entertaining the non-scientific community for millennia. It is
obviously the case that there is a separation between 'non-heaps of sand' and 'heaps of sand', since
we have, at the beginning of the puzzle, a 'non-heap of sand' and, at the end of it, a 'heap of sand'.
Yet each step of the puzzle is the result of a minor modification, according to a fixed rule, of the
entity under observation in the previous step, a fact that seems to always allow us to defend the
veracity of the main premise (if I add one grain to the previous amount of sand, it is obviously the
case that such a grain does not make any difference, so I still have a 'non-heap of sand', right?).
Be it because of the fascination caused by the challenge of finding absolutes in what seems to be of
relativistic nature, or because of the clear need to refine the elements forming the puzzle so that it
satisfies the demands of Science, the Sorites Problem seems to move us into debates regarding the
application of linguistic terms like no other problem has ever done.

1.1.1 How to build a Sorites Problem

Every problem that contains the essence of the Sorites Problem will also contain, in an implicit, or
explicit, manner, a soritical sequence.
Like all mathematical sequences, the soritical sequence, which is not itself a mathematical sequence
but contains one, has rigid rules regarding the order of its elements.
One of the possible consequences of changing the order of the elements of some chosen soritical
sequence is that the problem containing it starts to hold a trivial solution: the separation between the
equivalents of 'non-heaps of sand' and 'heaps of sand' becomes too obvious for the problem to be of
scientific interest.
After studying some famous soritical sequences in detail and in depth, we have noticed that each
one of them has its elements organized in either increasing or decreasing order with regard to its
mathematical sequence. Besides, other characteristics, common to all of them, have been easily
identified.
Those characteristics are:
a) All the elements are considered solely with regard to the variation of one of their attributes, an
attribute that is mathematically controlled in the sequence through one of its components; no
attribute other than the mathematically controlled one presents any variation in the sequence;
b) The order of the elements in the soritical sequence is determined by the increment of the
component of the attribute that varies in the soritical sequence;
c) The first element of the soritical sequence is regarded as absolutely different from the last
element, and what makes one be regarded as absolutely different from the other is the fact that
exactly one of the two will be said not to hold the attribute, solely because of the amount of the
component that determines the order of the sequence;
d) All soritical sequences contain more than three elements and 'perfect soritical sequences' contain
a limited amount of elements.
After building, or selecting, a soritical sequence, we just need to copy the model of the previously
mentioned puzzle, adapting its referents to the new sequence, in order to obtain a Sorites Problem.
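The recipe above can be sketched programmatically. The following minimal Python fragment, with illustrative names of our own choosing (nothing here is prescribed by the original puzzle), builds the mathematically controlled component of a soritical sequence (grain counts in increasing order, satisfying characteristics (a)-(d)) and copies the model of the quoted puzzle onto it:

```python
def soritical_sequence(start, end, step=1):
    """The mathematically controlled component: grain counts in
    strictly increasing order (characteristic (b))."""
    return list(range(start, end + 1, step))

def build_sorites_questions(sequence, attribute="a heap"):
    """Copy the model of the quoted puzzle onto the given sequence,
    adapting its referents (here, the grain counts)."""
    questions = []
    for n in sequence:
        questions.append(
            f"Would you describe {n} grain(s) of sand as {attribute}?"
        )
    return questions

seq = soritical_sequence(1, 5)
for q in build_sorites_questions(seq):
    print(q)
```

Only one attribute (the grain count) varies between consecutive questions, which is what keeps the main premise of the puzzle defensible at every step.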

1.2 The logical system Fuzzy Logic, as originally proposed by Zadeh, and the Sorites Problem

The Stanford Encyclopedia of Philosophy refers to Fuzzy Logic like this:

The term “fuzzy logic” emerged in the development of the theory of fuzzy sets by Lotfi Zadeh
[Zadeh 1965]. A fuzzy subset A of a (crisp) set X is characterized by assigning to each element x of
X the degree of membership of x in A (e.g., X is a group of people, A the fuzzy set of old people in
X). Now if X is a set of propositions then its elements may be assigned their degree of truth, which
may be “absolutely true,” “absolutely false” or some intermediate truth degree: a proposition may
be truer than another proposition. This is obvious in the case of vague (imprecise) propositions like
“this person is old” (beautiful, rich, etc.). In the analogy to various definitions of operations on
fuzzy sets (intersection, union, complement, …) one may ask how propositions can be combined by
connectives (conjunction, disjunction, negation, …) and if the truth degree of a composed
proposition is determined by the truth degrees of its components, i.e. if the connectives have their
corresponding truth functions (like truth tables of classical logic). Saying “yes” (which is the
mainstream of fuzzy logic) one accepts the truth-functional approach; this makes fuzzy logic to
something distinctly different from probability theory since the latter is not truth-functional (the
probability of conjunction of two propositions is not determined by the probabilities of those
propositions).
Two main directions in fuzzy logic have to be distinguished [Zadeh 1994]. Fuzzy logic in the broad
sense (older, better known, heavily applied but not asking deep logical questions) serves mainly as
apparatus for fuzzy control, analysis of vagueness in natural language and several other application
domains. It is one of the techniques of soft-computing, i.e. computational methods tolerant to
suboptimality and impreciseness (vagueness) and giving quick, simple and sufficiently good
solutions. … .
[Anderson 1996]

In the context of the Sorites Problem, the logical system Fuzzy Logic has been used to assign
arbitrary veracity degrees, all contained in the real interval (0,1), in either strictly increasing or
strictly decreasing manner, to each association of the type (key-assertion of the problem; element
of the sequence), so that each implication of the Sorites Problem may be classified as either true or
false according to the degrees assigned to its antecedent and consequent (each implication is
formed by two couples of the type (key-assertion of the problem; element of the sequence) plus the
basic premise). With the gradual and progressive loss of veracity by the antecedent, due to the
procedure just mentioned, a false implication is eventually reached in the sequence of implications,
so that the non-veracity of the last implication is always nicely justifiable, a fact that provides a few
researchers with reasons to defend the use of the system Fuzzy Logic in the context of the Sorites
Problem.
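The procedure just described can be illustrated with a small sketch. The strictly decreasing schedule of degrees, the crisp cut of 0.5, and all function names below are our own illustrative assumptions, not prescriptions from the literature:

```python
def degrees(n, hi=0.99, lo=0.01):
    """Strictly decreasing veracity degrees in (0, 1) for the couples
    (key-assertion 'this is a non-heap'; i-th element). Illustrative."""
    step = (hi - lo) / (n - 1)
    return [hi - i * step for i in range(n)]

def crisp_labels(ds, cut=0.5):
    """Crisp true/false label for each couple via a cut. Illustrative."""
    return [d >= cut for d in ds]

def first_false_implication(ds, cut=0.5):
    """Index of the implication whose antecedent is labeled true and
    whose consequent is labeled false: the point at which the chain of
    implications fails under this scheme."""
    labels = crisp_labels(ds, cut)
    for i, (a, b) in enumerate(zip(labels, labels[1:])):
        if a and not b:
            return i
    return None

ds = degrees(10)
print(first_false_implication(ds))
```

The point of the sketch is only to show the mechanics the paragraph describes: a monotone schedule of degrees always produces exactly one implication whose antecedent is still labeled true while its consequent is labeled false.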
There are, however, several scientifically sound arguments that give us certainty that Fuzzy Logic
is an inadequate tool for problems of the same nature as the Sorites Problem.
One of those arguments mentions the absence of an explanation, universally considered logical, for
the choice of the couples of the type (key-assertion of the problem; element of the sequence) that
are labeled as scientifically unacceptable matches.
In [Hyde 1997], for instance, one may find material referring to such an argument.
The application of the logical system Fuzzy Logic to the context of the Sorites Problem, this far,
seems to be equivalent to the use of two machines: one to translate usual language terms into
mathematical intervals and another to translate mathematical intervals into Classical Logic standard
values.
The impossibility of achieving perfection in the just mentioned translation processes, or in the
performance of the just mentioned machines, derives from the obvious discrepancy between the
nature of the input sets and the nature of the output sets.
A similar problem occurs with the translation of language terms from the Chinese language into the
English language (the Chinese language being phonetically richer than the English language).
Notwithstanding, there should be no abnormal level of difficulty in translating language terms from
the English language into the Chinese language, or in translating terms from Classical Logic into
usual language.
Close-to-usual language terms are not the same as usual language terms. There is therefore no sense
in proposing the logical system Fuzzy Logic, applied in the just mentioned manner, as a solution to
the Sorites Problem: such a proposal could only be considered fine if the Sorites Problem were not
the Sorites Problem, but another problem, in which the initial language terms were seen as 'almost
usual', not usual (we refer to the replacement of the usual linguistic terms with mathematical
entities, a replacement that is not accepted by the lexicon of the time in which the problem was
created, or even by the lexicon of today). Besides, we obviously need to justify any solution to the
Sorites Problem with argumentation of linguistic nature, for the problem never leaves such a
context, but Fuzzy-Logic-based reasoning only allows us to present argumentation of mathematical,
or at most mechanical, nature.

1.3 The paraconsistent logical systems and the linguistic translation processes

The main difference between the paraconsistent logical systems and the Classical Logic system is
that, in the paraconsistent logical systems, we cannot infer all the possibilities allowed by the system
from 'contradictions'.
If we wanted to explain the previous paragraph in the terms of Constructive Mathematics
[Bridges 2009], we could write that, in a paraconsistent logical system, having both a proof of p and
a proof of not-p (under the same logical assumptions and inside the same logical system) does not
equate to having a proof of every assertion allowed by the system, as happens in the Classical Logic
system.
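As a concrete illustration of this failure of explosion, the following sketch evaluates the inference in Priest's three-valued Logic of Paradox (LP), used here merely as a convenient stand-in for "a paraconsistent logical system"; the encoding is our own:

```python
# Truth values: 'T' (true), 'B' (both true and false), 'F' (false).
# In LP, 'T' and 'B' are the designated (assertible) values.
DESIGNATED = {'T', 'B'}
ORDER = {'F': 0, 'B': 1, 'T': 2}

def neg(v):
    return {'T': 'F', 'B': 'B', 'F': 'T'}[v]

def conj(a, b):
    return min(a, b, key=lambda v: ORDER[v])

def holds(premise_values, conclusion_value):
    """Single-valuation check: if every premise is designated,
    the conclusion must be designated too."""
    if all(v in DESIGNATED for v in premise_values):
        return conclusion_value in DESIGNATED
    return True  # vacuously satisfied at this valuation

# A valuation where p is 'both' (a proof of p and a proof of not-p)
# and an unrelated q is plainly false:
p, q = 'B', 'F'
contradiction = conj(p, neg(p))   # evaluates to 'B', which is designated
# Classically, a true contradiction would entail q (explosion);
# in LP this valuation refutes the entailment:
explosion_holds = holds([contradiction], q)
print(explosion_holds)
```

The valuation above is exactly a counterexample to explosion: the contradictory premise is assertible, yet the arbitrary conclusion q is not.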
Paraconsistent logical systems are mentioned in abundance in the scientific literature. In terms of an
introduction to such systems, [Tanaka 2003] is possibly one of the most accessible scientific
sources.
As mentioned in detail in [Pinheiro 2006c], Priest believes that paraconsistency, as a scientific
phenomenon, is part of the entities themselves, that is, that it is ontological, whilst Da Costa
believes that paraconsistency is a scientific phenomenon that is not part of the entities, that is, a
phenomenon that belongs solely to the abstract world, or to the purely logical world, or to the world
of machines.
One of the most modern suggestions of application of the paraconsistent logical systems was made
public at the Brazilian conference in Logic of July 2000, which took place in Sao Paulo and had
both Priest and Da Costa as attendees [Priest 2000a].
The participant said to have presented that suggestion exhibited a robot that prompted humans to
enter instructions in its system whenever it received something it classified as 'conflicting data'
from the environment.
To make it all as clear as possible, suppose that a robot has been programmed to raise its right arm
if it receives information it classifies as 'blue' and, if it receives information it classifies as
'non-blue', to do anything it can do, in a certain order, apart from raising its right arm.
Suppose now that the same robot received data that it was unable to deal with, data that pointed to
both 'blue' and 'non-blue' at the same time (conflicting data from the environment).
Such a robot must then attempt to both raise and not raise its right arm, both attempts taking place
at the same point in time.
It is then that we would say that either the systems of the robot have entered short-circuit mode or
the robot has crashed.
The robot presented at the SP 2000 conference would not have crashed under the conditions just
described, for it would have stopped consulting its systems at the time of the collection of the
'conflicting data', when it would then prompt humans to tell it what to do next.
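The behaviour attributed to the robot can be sketched as follows; the sensor encoding, the rule names, and the human-prompt mechanism are hypothetical, invented purely for illustration:

```python
def classify_reading(sensor_readings):
    """Hypothetical classifier: readings vote 'blue' or 'non-blue';
    mixed votes are flagged as conflicting data."""
    votes = set(sensor_readings)
    if votes == {'blue'}:
        return 'blue'
    if votes == {'non-blue'}:
        return 'non-blue'
    return 'conflicting'

def act(sensor_readings, ask_human):
    """The robot's decision loop: on conflicting data it stops consulting
    its own rule system and defers to a human, instead of crashing."""
    label = classify_reading(sensor_readings)
    if label == 'blue':
        return 'raise right arm'
    if label == 'non-blue':
        return 'do anything except raising right arm'
    return ask_human(sensor_readings)

instruction = act(['blue', 'non-blue'],
                  ask_human=lambda data: 'await human instruction')
print(instruction)
```

Note that the deferral step is plain control flow: nothing in it requires, or exercises, a paraconsistent logical system, which is precisely the point made next.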
Notice here the confusion created by the philosophers themselves over all that: it is obvious that the
creators of robots have managed to make their robots stop crashing in those situations, but such
progress has nothing to do with any possible application of the paraconsistent logical systems; it has
to do, at most, with the understanding the creators acquired about the reasons for the crash as they
studied those systems in practice.
It is obviously the case that we could only call the presentation of the robot at the Sao Paulo
conference a 'presentation of one possible practical application of the paraconsistent logical
systems' if the robot were able to decide, on its own, what to do in that sort of situation!
Notice that they confound a formal logical system, that is, a system that deals with premises in their
totality, producing logical results inside of itself, fully described by means of symbols before any
action takes place, with a mix between an incomplete logical system and virtual, or practical, pieces
of logical systems, which could, at most, form a basis of study whilst we are building a new logical
system.
In this article, we focus on one of the typical features of the paraconsistent logical systems:
contradictory premises are not a problem; they are just one more possibility.
In focusing on that feature, it is important that we declare whether we take the viewpoint of
ontological paraconsistency or that of non-ontological paraconsistency, because one of the
intentions of this article is to help computer scientists sort out what in Philosophy is of use to them
with regard to the automation of the linguistic translation processes.
The ontological systems are those that assume that the entities are, themselves, contradictory. The
non-ontological systems are those that assume that contradictions about the characteristics of some
entity are just moments of incompatibility between the reality of the entities in this world, according
to the dominant human perception, and our personal ability to read, communicate, or express it,
which then leads to the necessity of improving, or refining, the means we use to read, communicate,
or express the reality of the entities in this world, so as to reduce the discrepancy between the two
'universes'.
Ontological Paraconsistency, as a world phenomenon, has been defended, for instance, by Priest
[Priest 2000a]. Tanaka mentions some of the argumentation presented by Priest in those regards in
[Tanaka 2003].
Priest seems to rely on our interpretation of our senses to declare that the entities are contradictory
in some regards.
Our senses have been proven scientifically mistaken not once, but several times. As trivial examples
of those instances of proof, we have the Parallax Mistake, the orbit debate (is it the sun around the
earth or the earth around the sun?), and the shape debate (is our planet cubic or spherical?).
In our articles, we have consistently presented argumentation that frontally opposes accepting
ontological paraconsistency as a scientific reality, so we would like to make clear that, whenever we
write about paraconsistent logical systems, we are referring solely to the non-ontological systems.
Non-ontological Paraconsistency has been defended, as previously mentioned, by Da Costa,
according to Priest [Priest 2000a] and Tanaka [Tanaka 2003], so that Da Costa's argumentation may
be added to ours in support of our claims here with no loss of coherence or consistency.
Notice that, unless the objects are allowed to have an interpretation that is independent of our
observation, we cannot guarantee that the object itself bears contradictions. Instead, we are obliged
to accept that the contradictions hold a large probability of being part of our own internal confusion,
or part of the difficulty of expressing our internal ideas, with clarity, to others.
Perhaps it is because of the reasons mentioned in the previous paragraph that [Nadin 2008] mentions
Peirce's belief in the existence of an interpretation of the observed entities that is independent of the
observer.
'Interpretation', however, demands the presence of consciousness. At the machine level, which
would at most be that of the application level in Bloom's Taxonomy, it is inconceivable to think of
consciousness, since that belongs, without doubt, to the analysis and synthesis levels instead, that is,
to those levels of Bloom's Taxonomy that are considered exclusively human. Some results, or
features, of human interpretation may even be 'put inside' a machine, but even so the machine itself
will not hold consciousness of what it does; the consciousness remains with the human being who
programmed, or created, it. Therefore the interpretational human skills, as a whole, considering the
usual human being, cannot be transferred to a machine.
Because of that, it is impossible for the object to bear an interpretation that is independent of the
observer. Not only is the observer an essential figure for any interpretation to exist, but the
interpretation itself is an ultra-personal production, tailored by the person expressing it even at the
time of the expression itself.
In our analysis of the Sorites Problem lies the root of all issues that we have mentioned in the
previous paragraphs.
It seems to us that the modern philosophers are looking for the absolute, for the total absence of
personalization in human discourse, for a place where there is scientific certainty in the application
of the human language.
If the entities hold their own interpretation, then there is a right and a wrong with regard to any
human discourse that be considered an interpretation of those entities.
Notwithstanding, it is obviously the case that human judgments have to do with the mental universe
of each human being, and the absolute, in this sense, is therefore a place that cannot exist.
It does not matter whether we write about the suitability of the adjective beautiful to a specific
human being or about the suitability of the adjective green to a specific desk: both interpretative
matches have to be made out of ultra-personal logical systems, rather than out of any
told-to-be-universal logical system.
It is obviously acceptable that a person spend their entire life calling objects that are 'universally'
told to be green red and, even so, be immediately understood, every time they do that, by their
acquaintances, for instance. That cannot be said to be wrong: it is simply how that particular person
expresses their interpretation of the 'universal green' in their discourse. It may be the case, for
instance, that they suffer from daltonism (color blindness) and can never tell, in the usual way, the
difference between the two colors (green and red).
What is interpretation?
Interpretation is the same as ultra-personal reading!
We may think that we are contradictory, for example, in what regards what we feel for the man X.
We may think that we both do and do not love him.
However, when a third party observes us, that third party may hold 'absolute certainty' that we do
not love him, for instance.
Thus, it is possible that we believe that we suffer from 'ontological paraconsistency syndrome' in
what regards our feelings of love for X and it is also possible that, when another entity 'reads' us, or
interprets us, they believe that we do not 'suffer' from such a syndrome.
It is obvious that it all has to do with the mental paradigms of each one of us.

Peirce would like to write here that, independently of what we think (and also of what the observing
entity thinks), there is a reading, or an interpretation, of our feelings of love for X which is scientific
and that, therefore, one of us is wrong in their judgment, or perhaps both of us are.
Priest, on the other hand, would like to write that we are contradictory in nature and that, therefore,
all expressed judgments are correct (we do and do not love X at the same time, under the same
concept of love).
It is obviously the case that the words are tools to express our judgments and their meaning may
change as we apply them.
This way, we cannot agree with Peirce, for it is not possible that the interpretation be independent of
us, human beings. The interpretation is itself expressed by means of words, which are mutant
entities thanks to the element 'personalization of the language', always present in human
communication and expression.
Interpretation is also a word itself.
Since the meaning of the words changes according to the user, Peirce cannot be scientifically right.
Besides, notice that our mental pictures from when we declare that we love X differ from our mental
pictures, or references, from when we declare that we do not love X.
Basically, when we declare that we love X, our mind perhaps focuses on those moments in which
we tolerated absurd actions of X, but, when we declare that we do not love X, our mind focuses on
those moments in which we have wished for his death, for instance.
We have thus reached certainty that our mental paradigms, or pictures, are not the same when our
assertions seem to be of contradictory nature, so that those assertions are not truly contradictory and
may even be supplementary in nature.
Since the mental paradigms differ when apparently contradictory assertions are analysed, Priest
cannot be right in his possible argumentation here either.
As the 'paradigms problem' also explains third-party interpretative contradictions, there is no
chance for Ontological Paraconsistency to be a reality in the concrete world; it may, at most, be part
of the abstract world, and solely whilst confusion is considered acceptable in it.
With the ontological paraconsistent logical systems out of consideration, it remains for us to explain
how one could connect the non-ontological paraconsistent logical systems to the art of translating.
Basically, we may 'know' something and, even so, be completely incapable of expressing that
something with enough coherence, or consistency, that is, in scientific terms, to others.
In one of the examples used by Priest [Priest 2000a] to defend ontological paraconsistency, an
observer of a famous painting utters, with the same amount of belief, that a set of stairs departs from
a certain point in the painting and also from another point, distinct from the first and incompatible
with it.
It is obviously the case that, as the observer finishes uttering the first utterance, there is a shift in
their mental paradigms, so that they feel mentally comfortable with uttering the second utterance.
The second utterance is perceived by their audience (and a computer with a voice recognition
system could easily be part of this audience) as conflicting with the first, but it is just another
scientifically incomplete utterance of the same person.
At the moment of 'listening' to that, however, the audience could easily have to make a decision
and perform an action based on the received data. Therefore, if we ever wanted to scientifically
arbitrate on the soundness of their decision, we would first need to scientifically describe what is
going on there. For those purposes, we would need to make use of a paraconsistent logical system
of some sort.
Basic translator's reasoning explains the confusion: it all comes down to the observer being
'incapable' of translating what they think into words with perfection, that is, to the observer being
incapable of making their audience see, through their words, an image that holds enough similarity
to the mental image they had when they 'created' the assertion.

The lexicon does bring the dominant reference for the words listed in it, but it is usually the case
that a list of references, rather than just one reference, is found associated with each word there.
Some researchers have named each reference from such a list a 'sense of the word'.
Notice that it is possible that different instances of application of the same sense of a certain word
from the lexicon point to different world references with no conflict, as in the example we present at
the end of this subsection.
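The data structure suggested by this description might be sketched as below; the lexicon entry, its senses, and the world references are invented purely for illustration:

```python
# A lexicon entry carries a list of senses (each sense being one of the
# references from the list the text describes).
lexicon = {
    'here': [
        {'sense': 'location indicated by the speaker'},
        {'sense': 'the present situation'},
    ],
}

def apply_sense(word, sense_index, world_reference):
    """One instance of application of a word: the sense comes from the
    lexicon, the world reference from the concrete occasion of use."""
    sense = lexicon[word][sense_index]['sense']
    return {'word': word, 'sense': sense, 'reference': world_reference}

# Two applications of the SAME sense pointing to DIFFERENT world
# references, with no conflict between them:
u1 = apply_sense('here', 0, 'point X in the painting')
u2 = apply_sense('here', 0, 'point Y in the painting')
```

The two applications share a sense but not a reference, which is exactly the situation the painting example at the end of this subsection describes.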
For the audience, or readership, to fully accompany what a certain person is stating, it is necessary,
therefore, that their 'sensorial body' be adequately 'tuned' into the same 'frequency' as that of the
communicator, say, so that they grasp the sense of the words involved in a way that is free of
mistake.
It is obviously the case, sadly, that the translator will have to first be an expert in 'tuning' their
systems to the 'frequency' of the communicator in order to then be able to actually translate a
message, rather than just 'pretend to be doing so'.
However, most of the time, not even the communicators themselves are consciously aware of their
'frequency' as they 'make efforts to communicate'.
When the painting observer states 'the stairs set comes from point X, here', they have a sight of the
painting that is, let us call it, XX.
When they state that the set comes from point Y, X ≠ Y, they have a sight of the painting that is, let
us call it, YY, and it is clearly the case that XX ≠ YY.
Therefore, 'here' from their first assertion ('the stairs set comes from point X, here') is of sense XX,
but 'here' from the second assertion ('the stairs set comes from point Y, here') is of sense YY, and
XX ≠ YY.

1.4 Intersection of the previous items

Due to the extraordinary number of matches of similar meaning (one could even say of difficult
differentiation) in any other language for any word of a particular language, one may think that a
certain lexicon word may both translate and not translate, at the same time, into the word chosen
from some lexicon as its equivalent in another language.
The fact is that words always refer to something very specific, and very well defined, in the head of
those who use them (paradigms of thought, as explained earlier in this very paper).
Epistemic reasoning may therefore lead to the understanding of a few translation problems: if we
ever had instruments to read human minds that could be calibrated to match the amount of
refinement in the mind of the communicators, we would have a far higher probability than we
currently hold of describing precisely, by means of words, the image (or reference) that the
communicators see inside their heads whilst producing communication to third parties.
Even though epistemicism helps us reason about and understand what goes on in the translation
processes, in the same way that paraconsistency may help us understand moments of human
hesitation and indefiniteness, it is obviously the case that, unless human minds are criminally made
equal, we will never fully eliminate translation inaccuracy in human communication.
Thus, translation inaccuracy is usually a positive (non-null) presence in any translation process
involving human beings.
Notice here that the lexicon itself is born with a positive inaccuracy measurement value attached to
the majority of its entries, since it is a collection of educated analysis results about educated
observations and guesses that refer to the use, in discourse, of specific tokens of the human
language.
Because the modern communicators base themselves on the lexicon to produce their
communications, and are therefore obliged to translate from the lexicon into their communications,
it can only be the case that the value of the inaccuracy measurement originally attached to the
lexicon words experiences a non-negative increase, per word contained in the communications,
during the production of those communications.
Notice then that this is the moment at which the professional translator appears and reads those
communications in order to re-write them in another language, obviously creating a non-negative
increment for the inaccuracy measurement value.
To make it all worse, since the translator bases themselves on at least two lexicons, of different
languages, their work inherits at least the original (non-negative) inaccuracy measurement value
attached to each one of the lexicon words that were essential to the translation process, usually at
least one word from each lexicon fitting inside this category.
Basically, when trying to interpret a translated text, we then have to consider the original value of
the inaccuracy measurement from the lexicons, which suffers a non-negative increase in proportion
to the number of lexicons used, the value of the inaccuracy measurement of the communicator when
producing their communications, and the value of the inaccuracy measurement of the translator's
work (all non-negative values).
What we have written so far in this subsection does not imply that the translated text both is and is not a translation: the translated text is, obviously, a translation.
How perfect such a translation is, is another matter.
We believe, however, that the perfection of the translation is a matter of scientific interest.
In pursuing the study of such a matter, we initially propose that a value corresponding to the similitude between the original text and the translated text, say δ, be attached to the resulting text in the target language, so that we allow both for Fuzzy Logic-like processes to take place when (mechanically) choosing the most perfect match in the target language for each source-language word and for a degree of perfection to be associated with the translated version of the text.
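The proposal just described may be sketched, for illustration, in a few lines of Python code; the bilingual lexicon, the per-candidate degrees, and the choice of the minimum as the aggregation rule for δ are all assumptions of ours, made purely for the sake of example.

```python
# A minimal sketch of attaching a similitude value (delta) to a translated
# text. The lexicon entries and their degrees are hypothetical illustration
# data, not taken from any real bilingual resource.

LEXICON = {
    # source word -> candidate target words with a degree of match in [0, 1]
    "casa": {"house": 0.9, "home": 0.7},
    "saudade": {"longing": 0.6, "nostalgia": 0.5},
}

def best_match(word):
    """Pick the target-language candidate with the highest degree."""
    candidates = LEXICON.get(word, {})
    if not candidates:
        return word, 0.0  # untranslated words contribute zero similitude
    target = max(candidates, key=candidates.get)
    return target, candidates[target]

def translate(words):
    """Translate word by word and attach delta, the overall similitude.

    Here delta is the minimum of the per-word degrees, in the spirit of
    fuzzy conjunction; the mean would be another defensible choice.
    """
    pairs = [best_match(w) for w in words]
    translation = [t for t, _ in pairs]
    delta = min((d for _, d in pairs), default=0.0)
    return translation, delta

translation, delta = translate(["casa", "saudade"])
```

Whether δ is aggregated by minimum, mean, or some other t-norm is itself a decision of the kind this article discusses; the sketch merely shows that the bookkeeping is trivial once the degrees exist.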
Even though our line of discourse, thus far in this article, seems to be directed at proving that there is a huge difference between what is of human nature and what is of mechanical nature, we have eventually made light reference to the human conventions in language, whose objective is to make some elements of human discourse universally single out an unambiguous human-related element.
It is easy to infer from the last paragraph that, even though linguistic translation in general cannot be seen as fully amenable to automation, whatever may be regarded as a product of convention in human language may be seen as admitting linguistic translation processes that are fully amenable to automation.
Technical lingo, or technical jargon, is a product of convention in human language.
Therefore, the processes of linguistic translation of purely technical texts may be regarded as fully amenable to automation.
And it is in the automation of the processes of technical linguistic translation that we can imagine a few non-classical logical systems, such as Fuzzy Logic and the logical systems classified as paraconsistent, being applied with success.
Work performed in computers along the lines of our last paragraph has a high probability of being pioneering, because the majority of the creators of the non-classical logical systems created those systems without any concern for possible real-life applications.
On the other hand, any research along our lines, on the association between technical language and machine, may easily generate pioneering results, which may be successfully applied, in practice, to the mechanization of the translation of purely technical texts.
A computer program is nothing but a reflection of the programmer's reasoning, which is affected, to varying degrees, by the human conventions that are computer-related (for instance, the structure of the computer language in which the program is written).
Because of that, considerable value is added to the programming technique, and therefore to the computer program, each time the programmer concerned reads texts like ours.
The first message that we imagine to have passed to our readers by now is that translating is a semiotic process (a point of view also defended in [Gorlee 1994], for instance), not a mathematical process (a point of view defended by Wittgenstein, for instance, in [Wittgenstein 1940]).
The basic difference between linguistic translation processes and mathematical processes may also be understood by means of a graphical illustration associated with Bloom's scale (see [Kovalchick 2004] for some details on Bloom's Taxonomy), for instance.
While the linguistic translation processes are all located at the top of Bloom's scale, where analysis and synthesis lie, if purely technical texts are excluded from the linguistic universe under consideration, the mathematical processes oscillate along Bloom's scale, passing through varied levels for each new mathematical problem, but sometimes remain exclusively at the bottom levels, where comprehension and application lie (as when the problem follows the model of another problem that has already been solved by the person).
The second message that we imagine has been passed to the readers of this article by now is that the paraconsistent logical systems are ideal tools for translating into computer language the hesitation, or the human uncertainty, over something.
For instance, an isolated word of a translated text, therefore in the target language, may or may not mean the same as the original word meant in the source language in terms of world reference.
The third message that we imagine has been passed to the readers of this article by now is that the logical system Fuzzy Logic is the ideal tool for translating, into computer language, the degree of perfection, or 'universal belief', of the translated technical text.
The degree of perfection of a translation could be assigned by the translator themselves as they translate the text, for instance.
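The second and third messages may be sketched together in code; the representation below follows the general style of paraconsistent annotated logics, in which a proposition carries a pair of evidence degrees, but the class, the thresholds, and the example values are our own illustrative assumptions, not a fragment of any published system.

```python
# Sketch: a translated word carries a pair (favorable, contrary) of evidence
# degrees in [0, 1], so that hesitation ("may or may not mean the same") is
# representable without trivializing the system, as in paraconsistent
# annotated logics. All numbers are hypothetical illustration values.

from dataclasses import dataclass

@dataclass
class AnnotatedWord:
    target: str
    favorable: float  # evidence that it means the same as the source word
    contrary: float   # evidence that it does not

    def status(self):
        """Classify the annotation into four paraconsistent-style verdicts."""
        if self.favorable >= 0.5 and self.contrary >= 0.5:
            return "inconsistent"   # evidence both ways: human hesitation
        if self.favorable < 0.5 and self.contrary < 0.5:
            return "undetermined"   # not enough evidence either way
        return "true" if self.favorable >= 0.5 else "false"

# The translator hesitates over this equivalence:
word = AnnotatedWord("to shout", favorable=0.7, contrary=0.6)
```

A classical system would be trivialized by the "inconsistent" case; here it is simply one more verdict, which the program can route to the human translator for refinement.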
Besides all the just-mentioned messages, in analysing all we have studied thus far, we notice that the most important contribution of the logical systems mentioned here to the linguistic translation processes executed by machines, considering the current body of knowledge of Computer Science, is the contribution given to the treatment of both the inferences and the decisions contained in those processes; we also notice that the Sorites Problem seems to fit almost every piece of the analysis of the translation processes.
The issue of where the line lies may be raised even when we explore tolerance with regard to inaccuracy measurement values, for instance.
Notwithstanding, in this article, we have used the Sorites Problem mostly to convince the readers of the impossibility of the existence of absolutes in non-technical language, that is, outside of human convention.
The inexistence of absolutes in the translation of non-purely-technical texts leads to 'universal' acceptance of the claim that we can only automate, in the intended sense of the concept of automation, at most the translation of purely technical texts; and, in pursuing 'the dream' of automating the translation of purely technical texts, we can never belittle the value of studying machine reasoning, or logical systems, for we may fail badly in the task if we do so.
For purposes of clarity of presentation, we would like to inform the readers, at this stage, of the way this article is organized.
The current section is called Introduction, and the sections that follow it are called, in order of presentation:
- Automated translation of technical texts;
- The Sorites Problem and the Chinese language;
- The logical system Fuzzy Logic, the paraconsistent logical systems, and the automation of the
technical translation processes;
- Conclusion;
- References.

2. Automated translation of technical texts

Linguistic translation, in general, is split into technical and non-technical translation, despite what
'Wikipedia' asserts in [Wikipedia 2003].
Under the sub-title 'non-technical linguistic translation', we find literary translation and the translation of popular flicks, for instance.
It is perhaps interesting to mention that 'interpretation', a term seen on the websites of language professionals' syndicates, for example, designating the sort of linguistic translation that involves non-written material, is considered something apart and is not referred to, by language professionals, by means of the word translation.
The classifications explained in the last paragraphs are mentioned, for instance, in the marketing material of 'Sintra' (see [Sintra 1998]), and Sintra is a major association of language professionals of a major country (Brazil is the fifth country in size in the world and had a population that was approximately ten times bigger than the Australian population in 1999).
In linguistic translation, one must have both a source language and a target language, that is: a language from which the text is being translated and a language into which the text is being translated, respectively.
Technical language, the object of technical linguistic translation, has been created with the same aim as Classical Logic: both universalization and efficiency, in all aspects of communication, of the human activity to which it applies.
This way, technical linguistic translation has to offer a minimal number of options for the couple (source; target) of words for any randomly chosen word from the source language; it is therefore, as previously stated, given the usual number of possible matches for each randomly chosen word in any language, the only possible candidate for mechanization, since one cannot claim that a human process has been successfully mechanized if the mistakes contained in the output of the mechanized process are scientifically unbearable.
Mechanizing processes means transferring to the machine's mechanisms what was previously done by the human body.
In what applies to linguistic translation, we refer here to visual, interpretative, and communicative processes.
What all the just-mentioned processes have in common is their dependence on the inner, or private, logic of the individual whose body is being used to run them.
What the 'dream software' for technical linguistic translation has to do, therefore, is imitate the inner, or private, logic of the translator regarded as the best technical translator for the two languages under consideration.
In what regards reading and communicating, we can say that the machine performs, at least sometimes, even better than the human being.
The 'place' where the difference lies, and perhaps we can say will forever lie, in favor of the human being is, without doubt, interpretation.
It is possible that, whilst interpreting the text in the source language in purely technical translation, the translator keeps their brain activity within the lowest levels of Bloom's Taxonomy, reaching, at most, the application level.
Under such a hypothesis, the mechanization of the translator's private logic is obviously viable.
All that the 'dream software' then has to do is hold the same list of linguistic elements that the translator possesses in their mind when interpreting, so as to be able to perform the translator's work in the same way that the translator would, or even more perfectly.
The job of the 'dream software' is, consequently, in the just-mentioned case, reduced to using a 'translation function', which chooses the best match in the target language for each word, expression, or group of words in the source language.
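Such a 'translation function' may be sketched as a deterministic lookup that prefers the longest available match (group of words over expression over single word); the glossary entries below are hypothetical illustration data, and a real technical glossary would obviously be far larger.

```python
# Sketch of the 'translation function' for the purely technical case: a
# greedy longest-match lookup over a tokenized text. Glossary entries are
# hypothetical illustration data.

GLOSSARY = {
    ("hard", "disk"): "disco rigido",   # a group of words, matched first
    ("hard",): "duro",
    ("disk",): "disco",
}

MAX_LEN = max(len(k) for k in GLOSSARY)

def translate(tokens):
    """Greedy longest-match translation over a tokenized technical text."""
    out, i = [], 0
    while i < len(tokens):
        for span in range(min(MAX_LEN, len(tokens) - i), 0, -1):
            key = tuple(tokens[i:i + span])
            if key in GLOSSARY:
                out.append(GLOSSARY[key])
                i += span
                break
        else:
            out.append(tokens[i])  # no glossary entry: keep the token
            i += 1
    return out
```

The preference for the longest key is what distinguishes matching by expression from matching word by word, a distinction that becomes central in the discussion of [Chen 2000] below in this article.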

No matter how the translation is performed, however, the translation process may only be regarded as perfect in a relative manner, since the only way to evaluate the perfection of the process would be studying, from God's perspective, the mental images both of the writer of the original text, in the source language, and of the reader of the translated text, in the target language.
In what concerns scientific studies on translation, we can mention, for example, studies about the methods employed by translators to make the decisions connected with the act of choosing, in the target language, linguistic equivalents for source-language terms.
One of the just-mentioned studies is described in detail in [Chen 2000] and refers to both the monolingual research method, or monolingual retrieval technique ([Chen 2000]), and the bilingual research method, or cross-language retrieval technique ([Chen 2000]).
According to the dominant professional theory of translation, the monolingual research method is that in which the language professional deepens their understanding of chunks of text within the culture of the people of the source language and then looks for the same 'mental image', or the same world references, in the culture of the people of the target language, which then brings, automatically, to both the brain and the hands of the language professional, the corresponding chunks of text in the target language; the bilingual research method is that in which the language professional goes word by word in both languages, refining the resulting text at the end of such a process.
In [Chen 2000], Chen exhibits numerical comparisons between the results of searching for a match by phrase and the results of searching for a match by word, and concludes that search by phrase works better than search by word in the translation of texts in or into Chinese.
We defend contextualism also when it comes to the translation of texts and, therefore, we defend any thesis stating that search for a match by chunks of text returns better results than both search for a match by phrase and search for a match by word.
As a consequence of defending contextualism, however, we trivially believe Chen's results, since search for a match by phrase should return much better results than search for a match by word.
It is possible that one could use the same method that Chen used to produce evidence for his assertion to produce evidence for ours, but definite proof is not what is missing when it comes to our thesis.
Some trivial examples of gross mistakes in the translation of English into English, due to the preference for either word-by-word or phrase-by-phrase translation instead of context-based translation, may be found in our article on Contextualism, to mention at least one source of such examples.
Whilst our last paragraphs may be said to refer to translation in general, it is obviously the case that, in technical translation, a translation based at most on phrases may return perfect results, due to the exotic nature of purely technical texts.

3. The Sorites Problem and the Chinese Language

We have decided to discuss part of the complexity involved in the translation of texts from Chinese into English, rather than from any other language into English, because Chinese is the most spoken language in the world (see [Rosenberg 2010], for example).
Notwithstanding, were it not the case that the Chinese language enjoyed such a status, the difficulty involved in the translation of purely technical texts from Chinese into English would easily justify our preference for Chinese in this article, for Chinese is one of the few languages that impose, in purely technical translation, difficulties comparable to those found in literary translation, at least for the language professional who was not born in Asia.
Our intentions with this section are both to make the possible candidate systems analyst, in charge of the automation of the world's technical translation processes, or of part of them, familiar with the problems that challenge the translator's brain in the exercise of their profession, and to try to make possible the emergence of insights in the analyst's brain regarding the translation of the translator's private logic into their systems.
The Sorites Problem has become part of our set of matters of expertise, and applying it to our studies on technical translation seems to be not only a natural scientific step, but also a necessity (to make our message be correctly understood by our readership).
As for the Chinese language, the understanding exposed here has been acquired by us through consultation of the sources [Xiaoqing 1995], [Chen 2000], and [Mansei 2003].
Romanizing a Chinese word means expressing that word, in writing, in terms of the Latin alphabet.
Alphabets are created on the basis of the occlusion patterns and some other cultural traits of peoples.
Since the word 'jiao', for instance, Romanized from the Chinese language, corresponds to three different words in the Romanized Chinese->English lexicon, and the Chinese words that originate it are also distinct in the Chinese->Romanized Chinese lexicon, and three in number, we know that it must have been impossible to get the Romanization process to return perfect phonetic equivalents in the Latin alphabet.
[Xiaoqing 1995] lets us know that, if we 'de-Romanize' a Romanized Chinese word, we end up with at least five possible choices of words (Chinese ideograms), and those possible words differ from each other only in terms of intonation when read (by a native Chinese person).
Moreover, [Mansei 2003] lets us know that the meaning of a Chinese word is usually determined not only by the word under immediate assessment, but also by the word that follows it in the discourse.
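The disambiguation just described, resolution of a Romanized syllable with the help of the word that follows it, may be sketched as a simple table lookup; both the table and the example compounds below are hypothetical illustration data, not taken from [Mansei 2003].

```python
# Sketch: the reading of a Romanized syllable is resolved with the help of
# its right-hand neighbor in the discourse. The pairs and meanings here are
# hypothetical illustration data.

FOLLOWING_WORD_TABLE = {
    # (syllable, following word) -> chosen meaning
    ("jiao", "shi"): "to teach",    # hypothetical teaching-related compound
    ("jiao", "qian"): "0.1 yuan",   # hypothetical money-related compound
}

def disambiguate(syllable, following):
    """Resolve a Romanized syllable using the word that follows it."""
    return FOLLOWING_WORD_TABLE.get((syllable, following), "undetermined")
```

The point of the sketch is that the machine needs context beyond the isolated token even in this, the simplest imaginable, mechanization of the Chinese case.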
The complexity of the written Chinese discourse, however, does not reduce to the existence of an impressive variety of dialects, or to its greater phonetic wealth when put against the English language, or even to the necessity of recursion to the next word to fully determine meaning, since there are still, for instance, different choices of alphabets (see [Hasan 2000]).
To explain how, even though the cardinality of the set of possible meanings for the word 'jiao' is the same both in English and in non-Romanized Chinese, and each of the possible meanings has only one corresponding word in each mentioned language, we end up with a single Romanized word as referent in both the Romanized Chinese->English lexicon and the English->Romanized Chinese lexicon, we need to detail even further the linguistic process involved in the making of the mentioned lexicons.
Basically, the Chinese people communicate, in writing, through an alphabet that contains drawings with meaning, called ideograms, and those drawings do not have immediate equivalents in the English language if considered individually, that is, per cell that looks, to native users of the English language, like a letter.
Each of the sets of ideograms with different meanings in the Chinese language, which we have identified, via lexicon, as equivalents of 'jiao' (Romanized Chinese), has a unique writing.
As one may read in [Xiaoqing 1995], for instance, were English a language that allowed us to place enough graphical accents on the word 'jiao' to produce a close-to-perfect phonetic equivalent, in the English language, of the non-Romanized original Chinese word, we would instead have each of the Chinese sets of ideograms that translate into 'jiao' in Romanized Chinese being translated into a word of same meaning, and unique writing, in the English language.
The English language, therefore, possesses a smaller number of four-letter possibilities, in terms of 'culturally acceptable words', than the Chinese language does, if both are seen in the light of their Latin-alphabet writing.

The Chinese language is, consequently, phonetically richer than the English language.
It is perhaps worth mentioning that Chen [Chen 2000] proves that the phonetically poorer language, English, generates a set of words for each word in Romanized Chinese, and that 'monolingual retrieval' is a much more perfect technique than 'cross-language retrieval', at least in this particular case (the Chinese language).
In general, a native of China will see the word 'jiao' and automatically imagine five different ways of saying it, of which the following three bear meaning ([Xiaoqing 1995]):
a) long sound with constant pitch;
b) short sound with falling/rising pitch;
c) long sound with falling/rising pitch.
We now apply the theory we have developed for the analysis of the Sorites Problem to the linguistic issues just dealt with in this article, and we start by creating a set S of sonic variations, imitating the reasoning exhibited in [Pinheiro 2006a].
This way, take S to be the set {x+α, …, x+nα, …, nx+α, …, nx+mα}, where x means 'short sound', which, multiplied by a special real number a (making n=a in nx), gives us a 'long sound', and α means 'falling pitch', which, multiplied by a special real number b (making m=b in mα), gives us a 'rising pitch', the constant pitch being reached through multiplication by another special real number c (cα), which is smaller than b.
The set built in the manner just described also includes any other sonic variation that lies between the extremes and, therefore, may also be used to describe those.
More formally:
S = {x+α, …, x+nα, …, nx+α, …, nx+mα}, where n ∈ [1,a] and m ∈ [1,b].
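For illustration, the set S may be discretized numerically; in the sketch below, each element is represented as a pair (duration, pitch) rather than as a formal sum, and the concrete values of x, α, a, and b are assumptions of ours, chosen only to make the continuum between the extremes visible.

```python
# A numerical sketch of the set S: x is a short-sound duration, alpha a
# falling-pitch unit, and the factors n in [1, a] and m in [1, b] stretch
# them toward 'long sound' and 'rising pitch'. All values are hypothetical.

x, alpha = 1.0, 1.0   # short sound, falling pitch (arbitrary units)
a, b = 2.0, 3.0       # stretching factors: long sound, rising pitch
steps = 5             # discretization of each continuum

def frange(lo, hi, k):
    """k evenly spaced values from lo to hi, inclusive."""
    return [lo + (hi - lo) * i / (k - 1) for i in range(k)]

# Every element of S is a pair (duration, pitch) = (n*x, m*alpha):
S = [(n * x, m * alpha) for n in frange(1.0, a, steps)
                        for m in frange(1.0, b, steps)]

# The extremes of the set, as in the formal definition:
shortest_falling = (x, alpha)          # corresponds to x + alpha
longest_rising = (a * x, b * alpha)    # corresponds to nx + m*alpha, n=a, m=b
```

Each refinement of `steps` inserts new elements between any two neighbors, which is exactly the soritical feature of S: no discretization exhibits the 'line' between two meaningful readings.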
At this stage, recall that alphabets have been created on the basis of the vocal sounds that seemed to hold meaning, which implies that each community of sociological relevance has created its alphabet on the basis of its own abilities to emit sounds (dominant occlusion peculiarities, dominant voice patterns, and others).
The just-described set, inspired by the analysis of the Sorites Problem, provides us with the certainty both that there is a separation between each couple of possible sonic emissions and that this separation is determined by the way in which the person says, or writes, words in the Chinese language.
On the other hand, S also gives us the idea that the mentioned separation is completely inaccessible, both to the physical capacity of the occidental peoples (occlusion and audition) and to their natural patterns of writing, so that all that we have written about S thus far may receive the same objections that the solution of the Sorites via the application of the logical system Fuzzy Logic has received, until someone is able to 'exhibit' the separation in a scientifically irrefutable way, in case this be possible (different people may emit similar, but not identical, sequences of sounds and be believed to have uttered the same word, for instance).
The word 'jiao' bears five possibilities of phonetic reading in 'almost-Romanized' Chinese, as one reads in [Xiaoqing 1995], but only three of those possibilities attract meaning up to now.
Those possibilities (with meaning) are the ones described in the previously mentioned items a, b, and c (see [Xiaoqing 1995]).
Their meanings are, respectively: 'to teach', '0.1 yuan', and 'to shout' ([Xiaoqing 1995]).
It is right here that the key question of the Sorites Problem emerges: when is it, exactly, that '0.1 yuan' ends and 'to teach' starts in the increasing sequence formed with the members of the set S?
The Sorites Problem, in the consequences of going through it either in practice or in theory, much resembles the Turing machine contests, and such a fact may be clearly noticed by now: it produces in us the certainty that human beings can do much more than machines, or can go way beyond any process that may be successfully transferred to them.
Notice that, if the purely technical translation from Portuguese into English, for example, may be fully mechanized without many problems, the purely technical translation from Chinese into English will present the maximum level of difficulty in its mechanization: it is as if the systems analyst had to re-create both the Romanized Chinese->English lexicon and the English->Romanized Chinese lexicon before even starting to build their system, since, to truly mechanize the events involved in the process of translation, they will have to go from the 'almost-Romanized', rather than the Romanized, Chinese directly into English and vice versa.
As for the soritical problem, observe that we can trace one line starting at 'heap of sand' and ending at 'zero grains of sand', and another line starting at 'to teach' (long sound with constant pitch) and ending at 'to shout' (short sound with falling pitch).
To connect 'heap of sand' to 'to teach' and 'zero grains of sand' to 'to shout', as equivalents, all we have to do is build an equivalence function between objects and human actions, since 'to teach' naturally conflicts with 'to shout' in every possible pedagogical theory of scientific foundations (it is worth taking mental note of how broad the choice of contexts is in terms of application of the Sorites Problem: we have departed from entities of concrete nature to reach everything that is possible in language here, for instance).

4. The logical system Fuzzy Logic, the paraconsistent logical systems, and the automation of
the technical translation processes

We suggest that the inferential reasoning of both the logical system Fuzzy Logic and the
paraconsistent logical systems be used in the automation of the processes of technical translation, as
previously mentioned in this very article.
If we consider, for example, the word 'jiao' from the Chinese language (Romanized version) in an isolated manner, as may be the case in technical translation, where the simplest element bearing sense in written communication, a lexicon word, may be considered in isolation, the search for the best match in the target language, say English, could be described through the following steps (considering all that we have defended as truth in our article so far):
1) Monolingual research in the Chinese language, so that we reach perfection in reasoning, or the maximum level of refinement of the corresponding mental image;
2) Search, in the English language, for the mental image attained in the first step, and storage of the respective referents;
3) Analysis of both the resulting couple of words (source; target) and the process so far;
4) Grading of the perfection of the couple;
5) Refinement, superficial re-make of the process, or stop.
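The five steps above may be sketched as a loop; the 'research' steps are stubbed out as hypothetical functions returning a candidate and a belief degree (with values matching the running example that follows), and the grading and stopping rules are our own illustrative assumptions.

```python
# Sketch of the five-step search for the best match, with the research
# steps stubbed out. All names, values, and thresholds are hypothetical.

def monolingual_research(word):
    """Step 1 (stub): refine the mental image of the source word."""
    return "picture of a person shouting", 0.7   # image, belief

def find_referent(image):
    """Step 2 (stub): search the target language for the image."""
    return "to shout", 0.5                        # referent, belief

def grade(belief_image, belief_referent):
    """Steps 3-4: grade the couple; fuzzy conjunction via minimum."""
    return min(belief_image, belief_referent)

def best_match(word, threshold=0.5, max_rounds=3):
    """Step 5: keep the couple once its grade reaches the threshold."""
    for _ in range(max_rounds):
        image, b1 = monolingual_research(word)
        target, b2 = find_referent(image)
        if grade(b1, b2) >= threshold:
            return word, target, grade(b1, b2)
    return word, None, 0.0   # gave up: the process re-starts by hand

couple = best_match("jiao")
```

In a real system, the two stubs would be replaced by retrieval components in the sense of [Chen 2000]; the loop structure itself, however, is all that the five steps require.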
Suppose that we have taken the first step in the just-mentioned list of items and now hold the picture of a person shouting 'inside our minds'.
After following the instructions contained in the second item of our list, we have written on a piece of paper: to shout.
Say that we hold 70% (seventy percent) belief in the perfection of our monolingual research results and 50% (fifty percent) belief in both the perfection of our equivalence of mental images and the certainty of our referent corresponding to 'to shout' in English.
We must now decide between keeping the couple ('jiao'; 'to shout'), perhaps refining the result, or re-starting from the first step.
If the logic of our program were classical, no decision could be made at this point, and the program would probably halt, because, for this program, all that could be true would have to be true.
One cannot accept conflicts, in Classical Logic, as a basis for decision making, and we have, at this very moment, as a classical result, that 'jiao' both does and does not translate into 'to shout', since, for instance, there is no 100% (one hundred percent) logical certainty on either side of the story.
If, however, we apply the inferential reasoning of the Fuzzy Logic system and of the paraconsistent logical systems, mixed, in our programming, at this point of events we may keep the computer 'thinking in our place' without problems.

According to the logical system Fuzzy Logic, because our research has returned at least 50% (fifty percent) of accuracy in both cases, the source word must be accepted as equivalent to the target word (see, for instance, [Hajek 2006]).
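The contrast just drawn between the classical and the fuzzy verdicts may be sketched with the numbers of our running example (70% and 50% of belief); the acceptance rule assumed below, accepting the couple as soon as every belief reaches 0.5, is one possible reading of the fuzzy approach, not the only one.

```python
# Sketch of the decision contrast above. Classical logic offers no verdict
# short of full certainty; the fuzzy rule assumed here accepts the couple
# once the weakest belief reaches a threshold. All values are illustrative.

def classical_decision(beliefs):
    """Classical reading: only total certainty decides; otherwise halt."""
    if all(b == 1.0 for b in beliefs):
        return "accept"
    if all(b == 0.0 for b in beliefs):
        return "reject"
    return "halt"  # conflicting partial evidence: no classical verdict

def fuzzy_decision(beliefs, threshold=0.5):
    """Fuzzy reading: accept when the weakest belief reaches the threshold."""
    return "accept" if min(beliefs) >= threshold else "refine"

beliefs = [0.7, 0.5]   # monolingual research; image equivalence / referent
```

With these beliefs, the classical program halts while the fuzzy one accepts the couple ('jiao'; 'to shout'), which is exactly the behavior described in the text.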
Now, suppose that we believe the results of the research into the English->Romanized Chinese lexicon more than the results of the previous research process.
We will have to elaborate tests and comparisons, and we will still have to decide on a criterion of arbitration, for instance, amongst other things.
The same reasoning would apply to the situation in which we find out, for instance, that 'to yell' and 'to shout' coincide in our mental imaging processes.
It is just that now we have a horizontal decision to make, involving only one of the languages under consideration, instead of a vertical one, if the line we have previously drawn is taken into account.
Since the logical systems correspond to mental processes imagined as possible by at least those who created them, the application of any of them to the translation process should be acceptable.
However, if the logical system used in automated translation is not sufficiently similar to the private logical system of the professional regarded as the best translator for the couple of languages under consideration, we run the risk of building a system that is doomed not to sell, for instance.

5. Conclusion

In this article, we have tried to pass to our readers some of our insights on the possibility of automation of the processes of purely technical translation, having asserted, with no hesitation, that the automation of the processes of non-technical, or mixed, translation is absolutely out of consideration (and we have also produced sound scientific evidence for that).
Even in terms of purely technical translation, however, we have presented some of the extraordinary difficulties that we believe could be involved in the processes of translation for the couple (Chinese; English), so that we actually assert that there might be couples of languages that do not allow us to think of automation at all in what regards translation.
The best gift of this article is the creation of associations between the art of translating and the Sorites Problem, between the art of translating and the logical system Fuzzy Logic, and between the art of translating and the paraconsistent logical systems.
The underlying message of the entire article seems to actually be that there are some places of human experience, and activity, in which computers should not be desirable presences.
One of those places is the translation that is not purely technical.
One of the reasons that we present to support the existence of such places is that the performance of the machine will never be equal or superior to the performance of a normal human being in at least certain situations, and it is obviously the case that automation can only refer to the adequate replacement of human beings.

6. References

[Anderson 1996] Anderson, C. A.; Terence, B.; Tamar, G.; others (1996). Stanford Encyclopedia of
Philosophy. Found online at http://plato.stanford.edu/about.html. ISSN 1095-5054. Accessed on the
28th of May of 2009.

[Bridges 2009] Bridges, D. (2009). Constructive Mathematics. Stanford Encyclopedia of Philosophy. Found online at http://plato.stanford.edu/entries/mathematics-constructive/. ISSN 1095-5054. Accessed on the 28th of May of 2009.

[Casti 1999] Casti, John (1999). Cinco Regras de Ouro. Editora Gradiva. ISBN: 9726626919.

[Chen 2000] Chen, Aitao (2000). Phrasal Translation for English-Chinese Cross Language
Information Retrieval. Citeseer, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.21.3287.
Accessed on the 26th of September of 2008.

[Da Costa 2006] Da Costa, Newton (2006). Curriculo Lattes. http://buscatextual.cnpq.br/buscatextual/visualizacv.jsp?id=K4787165A0. Accessed in 2006.

[Gorlee 1994] Gorlee, Dinda L. (1994). Semiotics and the Problem of Translation: With Special
Reference to the Semiotics of Charles S. Peirce. Approaches to Translation Studies 12, Rodopi, pp.
87-114, <<Wittgenstein, translation and semiotics>>.

[Hajek 2006] Hajek, P. (2006). Fuzzy Logic. Stanford Encyclopedia of Philosophy. Found online at
http://plato.stanford.edu/entries/logic-fuzzy/#2. ISSN 1095-5054. Accessed on the 1st of June of
2009.

[Hasan 2000] Hasan, Md. Maruf and Matsumoto, Yuji (2000). Japanese-Chinese Cross Language
Information Retrieval: An Interlingua Approach. Computational Linguistics and Chinese Language
Processing. Vol. 5, nr. 2, August 2000, pp. 59-86.

[Hyde 1997] Hyde, Dominic (1997). Sorites Paradox. Stanford Encyclopedia of Philosophy. Found
online at http://plato.stanford.edu/entries/sorites-paradox/. ISSN 1095-5054. Accessed on the 31st of
October of 2000.

[Kovalchick 2004] Kovalchick, A. and Dawson, K. (editors) (2004). Education and Technology: an
encyclopedia. Vol. 1, ABC-CLIO, ISBN: 1576073513 9781576073513.

[Mansei 2003] Mansei, Martin H. (2003). Oxford Concise Chinese-English and English-Chinese
Dictionary. Oxford University Press, 3rd ed. ISBN: 7100039339.

[Nadin 2008] Nadin, M. (2008). Semiotics for the HCI community. Online at
http://www.code.uni-wuppertal.de/uk/hci/Concepts/welcome.html. Accessed on the 27th of September of 2008.

[Parker 2010] Parker, Philip M. (2010). Definition of mass, Greek (transliteration), Webster's online
dictionary with multilingual translation,
http://www.websters-online-dictionary.org/definitions/mass?cx=partner-pub-0939450753529744%3Av0qd01-tdlq&cof=FORID%3A9&ie=UTF-8&q=mass&sa=Search#922.
Accessed on the 18th of October of 2010.

[Pinheiro 2006a] Pinheiro, Marcia R. (2006). A Solution to the Sorites Paradox. Semiotica, ¾, pp.
307-326.

[Pinheiro 2006b] Pinheiro, Marcia R. (2006). A Summary of the Statements Contained in A
Solution to the Sorites and Further Details on the Solution. http://www.scribd.com/tradutora,
preprint. Accessed on the 22nd of December of 2010.

[Pinheiro 2006c] Pinheiro, Marcia R. (2006). A Paraconsistent Solution to the Sorites Paradox.
http://www.scribd.com/illmrpinheiro2, preprint. Accessed in 2009.

[Priest 2000] Priest, Graham (2000). Introduction to Non-classical Logic: Moving about in worlds
not realized. Cambridge University Press. ISBN-10: 052179434X.

[Priest 2000a] Priest, Graham (2000). Personal communications with M. Pinheiro during the
acquisition of the UQ Postgraduate Diploma in Logic by M. Pinheiro. UQ, 2000.

[Priest 2006] Priest, Graham (2006). Professional webpage.
http://www.st-andrews.ac.uk/philosophy/old/gp/gp-papers.html. Accessed in 2006.

[Read 1995] Read, Stephen (1995). Thinking About Logic: An Introduction to the Philosophy of
Logic. Oxford University Press. ISBN-10: 019289238-X.

[Rosenberg 2010] Rosenberg, Matt (2010). Most Popular Languages. About.com.
http://geography.about.com/od/culturalgeography/a/10languages.htm. Accessed on the 6th of
December of 2010.

[Sintra 1998] Sintra website authors (1998). http://www.sintra.org.br/site/index.php?pag=valores.
Accessed in 2006.

[Tanaka 2003] Tanaka, Koji (2003). Three Schools of Paraconsistency. Australasian Journal of
Logic, July.

[Xiaoqing 1995] Xiaoqing, Z. K. (1995). Grundkurs Der Chinesischen Sprache. Sinolingua. ISBN
(Band 1): 7-80052-476-0.

[Wikipedia 2003] Wikipedia authors (2003).
http://en.wikipedia.org/wiki/Image:Wiktionarylogoen.png. Accessed in 2006.

[Wittgenstein 1940] Wittgenstein, translation, and semiotics. The Wittgenstein Archives at the
University of Bergen (WAB), chapter 5, p. 110. http://wab.aksis.uib.no/wab_contrib-gdl.pdf.
Accessed on the 29th of November of 2010.

[Zadeh 1965] Zadeh, Lotfi (1965). Fuzzy Sets. Information and Control, 8: 338-353.

[Zadeh 1994] Zadeh, Lotfi (1994). Preface in R. J. Marks II (ed.), Fuzzy Logic Technology and
Applications. IEEE Publications. ISBN-10: 0780313836. ISBN-13: 978-0780313835.

Notes:

1. PO Box 12396, A'Beckett st, Melbourne, VIC, AU, 8006.
2. In the English language, the word soros translates into heap (see [Parker 2010], for
instance).
3. We call usual language any linguistic expression that is not immediately seen as machine-
friendly.
4. We call usual human being a human being who is mentally and physically fit, as regards the
faculties that are strictly necessary for the interpretation under consideration to be made
and expressed.
