
Corporations as Intentional Systems

William G. Weaver

ABSTRACT. The theory of corporations as moral persons was first advanced by Peter French some fifteen years ago. French persuasively argued that corporations, as persons, have moral responsibility in pretty much the same way that most human beings are said to have moral responsibility. One of the crucial features of French's argument has been his reliance on the idea that corporations are intentional systems, that they have beliefs and desires just as humans do. But this feature of French's thought has been left largely undeveloped. Applying some philosophical ideas of Daniel Dennett, this article provides support for French's contention that corporations are intentional actors by analyzing what is meant by the term intentional system, and showing why corporations should be thought of as, in many important ways, indistinguishable from humans.

The theory of corporations as moral persons was first advanced by Peter French some fifteen years ago. French persuasively argued that corporations, as persons, have moral responsibility in pretty much the same way that most human beings are said to have moral responsibility (1979, 1984). French's argument for corporations as moral persons has been attacked in numerous different ways over the years, but I think that there are avenues of defense for French's position that have not yet been fully explored. This article argues that French gets to the right conclusion, that corporations are moral persons, and offers support from heretofore untapped sources. In my argument I make liberal use of the work of Daniel Dennett, an innovative, if controversial, philosopher of mind. Dennett has not been used to support any theories with socio-political ramifications, but I see Dennett as providing crucial support for French's claims.

William G. Weaver is Asst. Professor of Political Science at the University of Texas at El Paso.

I. Corporations and intentions

French makes many arguments in support of his theory of corporations as moral persons, but perhaps the most crucial of these arguments needs more elucidation than it has so far received. This argument develops out of the claim that corporations are intentional systems: coherent actors that have intentions, beliefs, and desires just as do human beings. As French writes,

[t]o be the subject of an ascription of moral responsibility, to be a party in responsibility relationships, hence to be a moral person, the subject must be at minimum an intentional actor. If corporations are moral persons they will evidence a noneliminatable intentionality with regard to the things they do (1984, p. 38).

In other words, for French, the actions of a corporation are not reducible to a description of what human actors do on behalf of the corporation. Corporations have personalities, tendencies, blind spots, character flaws, character strengths, exceptional abilities, misconceptions, and dreams. These are attributes of the corporation and not simply a shorthand way of summing up the aggregation of characteristics of its employees. French makes this point clearly when he says:
For a corporation to be treated as a moral person, it must be the case that some events are describable in a way that makes certain sentences true: sentences that say that some of the things a corporation does were intended by the corporation itself. That is not accomplished if attributing intentions to a corporation is only a shorthand way of attributing intentions to the biological persons who comprise, e.g., its board of directors. If that were to turn out to be the case, then on metaphysical if not logical grounds, there would be no real way to distinguish between corporations and crowds. I shall argue, however, that a Corporation's Internal Decision Structure (its CID Structure) provides the requisite redescription device that licenses the predication of corporate intentionality (1984, p. 39).

Journal of Business Ethics 17: 87–97, 1998. © 1998 Kluwer Academic Publishers. Printed in the Netherlands.

French's argument that corporations are intentional actors has been subjected to several varieties of criticism. The first form of criticism, usually more implied than argued, holds that we are misled into thinking that corporations are moral persons simply because of shorthand references in our language (Garrett; Pfeiffer). Here the complaint is a quasi-Wittgensteinian one: that we have been seduced by our grammar, that we improperly ascribe a subjectivity to corporations simply because we use them as types of subject-actors in our language. We create a metaphysics out of an accident of metaphor. A second type of criticism attacks French's distinction between conglomerates and aggregates (Donaldson, 1982; Pfeiffer). French carefully devises criteria to distinguish between collections of individuals that may be ascribed moral responsibility and those that cannot. He says that conglomerates, or human collectivities eligible for moral personhood, must have: (1) an internal decision structure; (2) enforced standards of conduct; and (3) defined roles by which power is wielded over others. Lynch mobs, riots, etc., are made up of human actors, and may be said to be human collectivities, but they hardly qualify as candidates for moral agency. But it seems to me that French puts too much stock in these criteria, and that they can be either supplemented or supplanted by more effective devices of evaluation. These supplemental evaluative devices will be explained in section III. Patricia Werhane, in an effort to split the difference between French and his critics, effectively argues that corporations do not act and therefore cannot be moral persons (1985, 1988). Nonetheless, Werhane believes that because collections of these individual actions on behalf of a corporation can create anonymous policies and practices no longer traceable to individuals, policies and practices which, in turn, generate corporate activities, I claim that corporations are collective secondary moral agents (1988). Werhane seems to deny that corporations are intentional systems, a position counter to that held by French. Nevertheless, some critics go on to lump Werhane and French together, believing that they both reach the same flawed position from two different directions. As Jan Edward Garrett writes,
Both French and Werhane seem to locate the unreassignable portion of corporate moral responsibility with the corporate practices or policies as such. Werhane has interpreted French as arguing that because policies and practices that are the source of corporate action are themselves products of corporate intentional activities, the actions that result are not solely distributable to individuals (p. 539).

And Garrett goes on to assert that [t]he critic of UCMR [Unredistributable Corporate Moral Responsibility] need only insist that the individual moral responsibility for corporate action directly caused by collectively determined policies lies with individual actions further back in time, perhaps spread over many years (pp. 539–540). But of course, one can insist away anything. To say that UCMR disappears if we just look further back in a causal chain of events is unhelpful and unpersuasive. It does not dissolve the argument for UCMR to say that corporations are wholly comprised of small operations across time. As we shall see in our discussion of Thomas Donaldson, at root here is an effort to find a criterial distinction between persons and nonpersons. Garrett thinks that reducing corporations to causal chains does away with the possibility that the corporation is a moral person. This argument is discussed at some length later in this essay. The point Garrett makes anticipates and is related to a third type of criticism made against French. Thinkers who take up this argument see

it as only a matter of common sense that a person must also be a human being, and attack French for drawing a crucial distinction where none can be drawn (Donaldson, 1982; Pfeiffer; Velasquez, 1983). Adherents of this argument tend still to be under the influence of Enlightenment metaphor, believing that humans are uniquely privileged by Nature or God and have souls or a special faculty called Reason which makes them different in kind from any possible other complex system. Here I am mostly concerned with defending the theory that corporations can be moral persons against the third sort of argument, against the claim that humans are intrinsically endowed with intentionality and everything else is not. French has perhaps made himself vulnerable to criticisms coming from this direction because he has never fully explained and defended his claim that corporations are intentional systems. Specifically, after laying out a functionalist account of intentional systems, I will take up Thomas Donaldson's arguments against corporate personhood. Donaldson's main criticisms of French's formulations are useful because they are clear and concise, and reflect the intuitive objections held by many casual observers of this debate. I am hesitant to be dogmatic about criterially driven notions of what constitutes a moral person. Nonetheless, I argue in section III that when two general characteristics for personhood are added to French's thoughts, his claim that corporations are moral persons becomes much stronger. In reaching these two characteristics I enlist the aid of Daniel Dennett for explaining a functionalist view of intentionality.


II. Dennett and the predictive strategy of intentionality

Daniel Dennett tenaciously holds to two things in his writing: parsimony and functionality. For reasons of prudence we must, as theorists, be as parsimonious as possible in the attempt to explain human (and, as we will see, nonhuman) behavior. In a number of articles Dennett appeals to Lloyd Morgan's Canon of Parsimony, which holds that one should attribute to an organism as little intelligence or consciousness or rationality or mind as will suffice to account for its behavior (Dennett, 1978). The rule of parsimony is necessary for such attributions because otherwise incorrect assumptions can create and magnify misperceptions on the part of the observer. Even given this minimalist account of rationality and consciousness, when it is combined with Dennett's functionalism it yields some interesting conclusions for the intentional character of corporations. Dennett treats people, and as will be seen, much else, as intentional systems (Dennett, 1971). By intentional system Dennett means,

the concept of a system whose behavior can be at least sometimes explained and predicted by relying on ascriptions to the system of beliefs and desires (and hopes, fears, intentions, hunches, . . .). I will call such systems intentional systems, and such explanations and predictions intentional explanations and predictions, in virtue of the idioms of belief and desire . . . (1978, p. 3).

And an intentional system is precisely the sort of system to be affected by the input of information . . . (1978, pp. 247–248). But as Dennett makes clear, an intentional system is not so called because of anything it intrinsically has (like belief-states, language, cognition, etc.). In fact, the notion of consciousness, at least on this point, is not an important one for Dennett's formulation. Dennett sees the idea of consciousness, as it is treated by theorists who believe in intrinsicality, as getting in the way of useful understanding about intention (e.g., Nagel, 1974, 1979, 1986; Searle, 1980, 1983). Dennett, by his constant reference to machines and the language of engineering and design, is on one level looking to demystify the notions of intending and belief (1991, pp. 259–262). He feels that he has the cure for the Cartesian hangover which has caused so much bad talk about human essence, original intention, and the like. So it is not surprising that Dennett will immediately let the reader know that intentional systems are not determined criterially; rather, the determination of something as an intentional system is made on the basis of utility. This, of course, requires such a determination to be in the hands of a third-person observer. As Dennett writes, a particular thing is an intentional system only in relation to the strategies of someone who is trying to explain and predict its behavior (1978, pp. 3–4). And on Dennett's view, there is nothing that requires the ascription of intention to be limited to humans. The ascription of intention has nothing to do with intelligence or creative capacity; it has to do with the complexity of the system one wishes to talk about. Obviously, depending upon a person's training, background, education, and other environmental factors, what one person regards as an intentional system another would not. As Dennett says, [a]ll that has been claimed is that on occasion, a purely physical system can be so complex, and yet so organized, that we find it convenient, explanatory, pragmatically necessary for prediction, to treat it as if it had beliefs and desires and was rational (1978, p. 8). In what follows we will investigate three ways in which Dennett says that we approach organized collections of material and people.

Dennett's trinity

Dennett relates what he calls three stances which a person can take toward any system which can be said to have a behavior. Behavior as it is used here is extremely loose and covers just about all systems with an interactive feature, from electric eyes to persons. Dennett calls these the design, physical, and intentional stances. All of these stances are strategies for coping with system behavior. Determining which stance to adopt is often unconscious, but of course need not be. And deciding which stance to adopt in a given situation is based on a belief about which one will yield the most accurate predictions of the subject system's behavior. First is the design stance. This stance is taken by people attempting to predict the behavior of mechanical objects (1978, p. 4; 1991, p. 276). It is the how things work approach. An engineer, for example, will take a design stance when talking about a thermostat. The engineer knows fully the possible causal outcomes of all of the mechanism's operations. This stance, as with the intentional stance, also varies with education, environment, etc. It is possible, for example, for an extremely unmechanical person to take an intentional stance toward a relatively simple system. The key to the design approach is that the elements get stupider as one goes down. In theory one could take a design down to the locus of a single logical operation. Dennett relates this idea in the following way:

[The] first and highest level of design breaks the computer down into subsystems, each of which is given intentionally characterized tasks; he composes a flow chart of evaluators, rememberers, discriminators, overseers and the like. These are homunculi with a vengeance; the highest level design breaks the computer down into a committee or army of intelligent homunculi with purposes, information and strategies. Each homunculus in turn is analysed into smaller homunculi, but, more important, into less clever homunculi. When the level is reached where the homunculi are no more than adders and subtractors, by the time they need only the intelligence to pick the larger of two numbers when directed to, they have been reduced to functionaries who can be replaced by a machine (1978, pp. 80–81).

It is only through the large combination of possible outcomes created by the joining of lots of logical operators that a machine is said to be an intentional system. At some level it will become impractical for even the most skilled engineer to maintain a design stance toward this collection of stupid units. The design stance leads into the second stance discussed by Dennett, the physical stance. The physical stance is one taken toward a system that is in some way dysfunctional (1978, p. 4). It is also the stance we take when attempting to predict malfunctions of systems. This stance, while it may be unusual to do so, may also be taken toward humans. Doctors and psychotherapists generally learn to reason in this way about their patients, as do weight trainers and performance experts for athletes. Also, the physical stance may be used to determine a system's behavior through an analysis of the physical makeup of that system. As Dennett explains:



The chemist or physicist in the laboratory can use this strategy to predict the behavior of exotic materials, but equally the cook in the kitchen can predict the effect of leaving the pot on the burner too long. The strategy is not always practically available, but that it will always work in principle is a dogma of the physical sciences (1987, p. 16).


When systems are so operationally complex that it is impractical for the observer to employ the design or physical stance in the attempt to predict system behavior, the observer will employ the intentional stance. This stance is so prevalent and so automatically assumed in the normal activities of life that we take it for granted. Much of our waking existence is taken up with predicting the behavior of intentional systems. The intentional stance is not a posited or fictional occurrence; we use it all the time. Nor is the application of this stance theoretical, much less a theory; it often requires no thought at all. The intentional stance is a strategy for predicting the behavior of other complex systems, and it depends neither upon a notion of any internal qualia of the observed system nor upon conjectures about whether such a system really thinks, feels, has a language, etc. Such concerns with internal states are superfluous to the intentional stance. All that is important is whether or not the ascription of beliefs and reason to the observed system yields desirable results for the observing/predicting party. We must get over the urge, Dennett tells us, to continue to worry about the insides of such systems, the urge to question whether or not there is a parallel internal state to the external rational behavior we are counting on in our predictions. As Dennett writes,
[i]t is not that we attribute (or should attribute) beliefs and desires only to things in which we find internal representations, but rather that when we discover some object for which the intentional strategy works, we endeavor to interpret some of its internal states or processes as internal representations . . . What makes some internal feature of a thing a representation could only be its role in regulating the behavior of an intentional system (1987, p. 32).

Dennett is saying that the only useful way to think of intention is from the external, third-party perspective of functionality. Dennett is dogmatic in his defense of intention as a third-party prediction of some thing's response to some environmental stimulus (1987, pp. 15–22). The key to this external view is the ascription of rationality to the system in question, and if intentionality is from the stance of the observing party, then rationality must also be a third-person construction. Ascriptions of rationality do not imply that the intentional system under observation be language-using or even intelligent; for Dennett, rationality is part of the equipment concerned with prediction. And he claims that,
all there is to being a true believer is being a system whose behavior is reliably predictable via the intentional strategy, and hence all there is to really and truly believing that p (for any proposition p) is being an intentional system for which p occurs as a belief in the best (most predictive) interpretation (1987, p. 29).

In short, beliefs and intentions are not the sort of things that it is helpful to think of as belonging to humans and only to humans. The ascription of rationality also means that we must talk as if intentional systems have belief states in the way we talk about people having belief states. Here Dennett is trying to break down the powerful background view of most readers that the world is cleanly severable along the lines of what is and is not a person. Dennett thinks that the belief that humans have beliefs and everything else does not is an intellectually harmful vanity. As Dennett says,
[t]he assumption that something is an intentional system is the assumption that it is rational; that is, one gets nowhere with the assumption that entity x has beliefs, p, q, r . . . unless one supposes that x believes what follows from p, q, r . . .; otherwise there is no way of ruling out the predictions that x will, in the face of its beliefs p, q, r . . . do something utterly stupid, and, if we cannot rule out that prediction, we will have acquired no predictive power at all (1978, p. 11).

The idea then is to problematize the belief that rationality and personhood are entailed only by biological humans. Since most humans have a language, that makes us the most protean and powerful of intentional systems, but communication . . . is not a separable and higher stance one may choose to adopt toward something, but a type of interaction within the intentional stance (1978, p. 242). And [r]eason, not regard, is what sets off the intentional from the mechanistic; we do not just reason about what intentional systems will do, we reason about how they will reason (1978, p. 243). On this view, then, rationality is always a third-party ascription; there is nothing useful to be thought of as reason that exists in itself, or is criterially determined. Rationality need not be thought of as a core humanness around which swirl supplemental, contingent items. It is an evaluation of system behavior, and persons just happen to be the most powerful and frequently encountered intentional systems. And as such, we persons generally grant, without reflection, the rationality of each other. Sometimes this charity proves unwarranted, and we are forced to adopt a predictive strategy other than the intentional stance. Untreated paranoid schizophrenics, or schizophrenics resistant to medication, often force a physical stance, an attempt to account for a failure of rationality (the failure of prediction of behavior under the intentional stance) on the basis of a system malfunction. So far we have come to an understanding of rationality and intention which requires no theorizing about or investigation of mental states, core qualities, or of what demarcates humans from other intentional systems. We have no criteria of the rational, but we do have a strategy based on prediction which is a powerful evaluative device and information-gathering tool for dealing with other intentional systems.

We need not worry about whether or not intentional systems possess intrinsic intentionality, whether or not they really have beliefs, for the results of that investigation, even if it could be resolved, carry no consequences for prediction or evaluations of rationality. All we need be concerned about is whether the ascription of intentional idioms to a particular intentional system works, or helps us to make decisions about the system. It may be that humans do have intrinsic intentionality and that many other intentional systems do not, and that we, as wetware, have an intentionality different in kind, rather than degree, from hardware systems or corporations. But as a pragmatist and, therefore, a functionalist in the broad sense of the term, I want to stick with what appears to be least theoretically attenuated from the activity under consideration.

III. Corporations as intentional systems

From the account above, it should be clear that rationality is not static, nor does it have a life of its own; it is completely beholden to the efficacy of prediction. As the context of prediction changes, so will rationality. So there are a multitude of rationalities, but no central or privileged rationality. And what we predict of an intentional system is based on context and on what we expect from members of a particular group under similar circumstances. Such predictions can take the form of what would be expected of Americans in circumstance x, or the very narrow expectation of what a gene-therapy scientist is expected to do in the lab in situation y. Prediction, and the concomitant evaluation of rationality, is based on education, experience, and intelligence. And of course we can find people to be acting irrationally in certain contexts without being forced to forgo the intentional stance. For example, a judge who resorts to her own personally held religious doctrine to decide a case may, from the perspective of law practice, be acting irrationally, but her status as an object of the intentional stance is not in jeopardy. She may not be fit to be a judge, but it is likely that over a range of legal and nonlegal contexts her behavior will confirm predictions at a rate not unlike that of other persons in the same culture. Also, in assuming the intentional stance we come presumptively preconfigured with respect to expectations, since what we expect or predict of other intentional systems depends on what we have learned and observed from individual behavior within the context of culture. I say presumptively preconfigured because, of course, we can adapt, with training or observation, to new rationalities and expectations. This very

ability to adapt is what also creates the distance from one rationality to another, and it is not misleading to say that rationalities are group adaptations. French starts us in the right direction when he says that it is the minimal requirement for a corporation to be a moral person that it be an intentional actor. But of course, he does not mean to imply that intentionality is a sufficient condition for moral personhood. There are all kinds of intentional actors that are not and cannot be moral persons. Tigers, chess-playing computers, and nuclear reactors all warrant, on some occasions, that we adopt the intentional stance in order to predict their activity. But none of these things can reasonably be thought of as potential or actual moral persons. Under Dennett's analysis, tigers, chess-playing computers, and nuclear reactors are intentional systems and, perforce, for Dennett, are intentional actors. But characteristics beyond those held by these items seem necessary for an intentional system to be a person.


Do corporations have minds?

Many thinkers believe that corporations can be said to have intentionality in only a trivial and insubstantial sort of way. On this view, humans have original or intrinsic intentionality, while all other things spoken of as intentional actors have derived intentionality. The distinction that is crucial for these critics is that there is something going on in a human brain, some irreducible event, which does not occur in intentional actors with derived intentionality. Coin readers in soft drink machines, for example, have only derived intentionality: their intentional state of reading coins accurately is completely beholden to the desires of items with original intentionality (humans). It would be incorrect to talk about coin readers as things with intentional states unless one made reference to the intentions of their human creators. And nothing without irreducible mind-stuff can have original intentionality. Obviously, corporations do not have central brains where electro-chemical reactions occur, but it is unclear that this lack of a brain also means that corporations lack minds. If we side with Dennett and mean by mind the sheer organized combinatorial complexity of a particular system, then corporations do indeed have minds. Dennett's point is that the complexity of human minds gives rise to the belief that there is a difference in kind, rather than degree, between humans and other intentional systems. He disagrees with that belief, and, as he puts it, My view is that belief and desire are like froggy belief and desire all the way up . . . We human beings are only the most prodigious intentional systems on the planet . . . (1987, p. 112). In analyzing the nature of consciousness, Dennett adopts a design stance and sees the brain as consisting of many stupid homunculi, each of which is dedicated to some specific problem-solving task. These homunculi are grouped together and forced into communications patterns by what Dennett and Richard Dawkins have termed memes: more or less identifiable (and complex) cultural units (1991, p. 201). Examples of memes are wheel, vendetta, calendar. Consciousness is

itself a huge complex of memes (or more exactly, meme-effects in brains) that can best be understood as the operation of a von Neumannesque virtual machine implemented in the parallel architecture of a brain that was not designed for any such activities. The powers of the virtual machine greatly enhance the underlying powers of the organic hardware on which it runs (1991, p. 210).

Assuming the defensibility of Dennett's claims, it is unclear how one would exclude, on his terms, corporations as conscious entities. Corporations are just as subject to meme-effects as are humans. They have identifiable personalities and are driven to certain conclusions and actions by the acceptance of some memes as important for corporate identity and the rejection of other memes as inimical to that perceived identity. Take the meme liability exposure, a meme that no corporation rejects or ignores if it wants to survive very long. The creation of this meme itself assumes the adaptive rationality of corporations, for over the last century this meme has come to stand for penalty-induced corporate

changes of behavior, changes of behavior viewed as socially desirable. But the meme-effects of liability exposure on corporations vary widely, largely depending on the corporate personality. As language users, corporations are culturally conditioned in the same way people are. French is right to say that at a minimum a moral person must be an intentional actor, but this claim can be made more convincing by supplementing it with several other claims. For while intentionality is a necessary criterion for moral personhood, it is not a sufficient condition for moral personhood.

Corporate grammar and adaptability

I would add two further conditions for membership in the class of moral persons. First, our tremendous capacity for intentional action is made possible by language, and humans, because of language, have a broad-ranging and subtle intentional complex. What is obvious but sometimes unappreciated is that corporations, no less than humans, are language users. They are not only affected by the input of information (as must be all intentional systems) but they also talk back. Corporations not only use information, they also make information; they attempt to persuade, manipulate, inform, depress, uplift, debate, and debunk other intentional systems and their actions. All the communicative capacities available to a normal human are just as available to corporations. Perhaps corporations have even greater capacity, since they are not as subject to cultural and linguistic limitations as the average human is. If we wanted to seize on the crucial characteristic for moral personhood, it may not be intentionality but language-using capacity. For if a system has a language, then ipso facto it must be an intentional system and also be adaptable to multiple rationalities. Each corporation has its own idiosyncrasies of grammar and syntax just as each human does. This corporate syntax is not reducible to the human members of the corporation, just as corporate actions have been shown by French and others not to be reducible to the actions of its employees. Corporate grammar and syntax manifest themselves not only through team-written items or board directives, but also through informal channels not reflected in the official CID Structure and through the personality of the corporation. Even single-authored items in a corporation are likely to exhibit corporate syntax, for one intentional system (the human author) will probably be concerned with subordinating her own idiosyncrasies, beliefs, and desires to those of the corporation as she perceives them. Any corporate lawyer who has ever authored a liability exposure study, or any management staff who has written up a buying or product suggestion, probably knows precisely what it means to become part of the corporate grammar. Human members of corporations learn to speak the corporate language; they learn facility with a discursive practice which communicates corporate intentions, beliefs, hopes, etc. Second, besides the capacity for acquiring and using language in an original or idiosyncratic way, an intentional system must be adaptive in order to be held a moral person. It must be able to function in a number of different rationalities. In a major sense the root of moral condemnation is founded in the tacit understanding of adaptivity. When intentional systems are unable to adapt their behavior we generally do not hold them to be morally responsible for their actions.1 The perceived capacity for adaptation is necessary to give rise to moral judgment. Of course this has not always been the case. Under the ancient legal doctrine of deodand many items were held morally culpable for their behavior. In Great Britain trees, horses, dogs, etc., were sometimes held accountable for harm that they caused. But even on the contemporary view there seems no reason to suspect that corporations do not meet the adaptivity requirement. Corporations do travel in and out of a great number of rationalities, just as other persons do. Stanley Fish might alternatively say that corporations function in a variety of interpretive communities (Fish, especially chs. 14 and 15).

IV. Thomas Donaldson's criticisms of corporate moral personhood

Keeping in mind the functionalist account of intentionality discussed here as it relates to corporations, we can now look at some specific criticisms of corporations as intentional actors. Thomas Donaldson in Corporations and Morality (1982) discusses the corporation as a moral person, and his criticisms of this approach are pointed, concise, and well made. Donaldson attacks the idea of corporations as intentional actors in several ways. First, he writes,

In order for corporations to be agents, the Moral Person view holds that they must satisfy the definitions of agency, or in other words, be capable of performing intentional actions. But can corporations really perform such actions? Flesh and blood people clearly perform them when they act on the basis of their beliefs and desires, yet corporations do not appear to have beliefs, desires, thoughts, or reasons (1982, p. 21).

Here Donaldson is uncritically assuming the position held by Nagel and Searle. He believes that there can be no minds without brains. But the functionalist might respond by pointing out that we often treat corporations as we do other people. That is, we adopt the intentional stance toward corporations because it is the best predictive strategy of what a corporation will do. We sometimes think that corporations will think that we think that they think about x. In other words, we not only think of corporations as intentional systems, we also think of them as so complex that it is best, for our predictive strategies, to think of them as things with minds. The combinatorial complexity of corporations may not rival that of the human mind, but it rivals everything else we know of in nature. If Dennett is right, if combinatorial complexity and meme-effects are two of the keys to understanding what we call consciousness, then it is arguable that corporations have minds. If it is like froggy belief and desire all the way up, if what sets humans apart from other intentional systems is their capacity for complexity, then we should be willing to see personhood as open for club membership to nonhuman systems which possess near or like capacity. Further, corporations arguably share with humans the vulnerability to humiliation, a characteristic which some contemporary theorists make much of (e.g., Rorty, 1989). Corporations are not merely language users that respond to the input of information; they also experience the full range of emotions. And perhaps humiliation is the emotion most closely thought of as available only to humans, for it implicates an entire panoply of cultural subtlety. It is beyond the range of this essay to attempt to prove this point, and in any case such a proof is not necessary for my claims. But nothing tells me that corporations cannot suffer from humiliation.

Second, Donaldson writes that:

Consider the analogy of a game. In games, the rules determine which actions count as legitimate moves, and in corporations certain rules determine what counts as, say, a decision by the board of directors. But the rules of a game fail to tell us what the game itself intends; in fact, it makes little sense to say that the game intends anything, and one can argue that the same is true for corporations. If corporations are made up of rules, policies, and power structures, then we can tell what counts in the context of those rules, policies and structures; but we cannot tell clearly from these what the combined rules, policies, and structures themselves intend (1982, p. 22).

There are a number of problems with this statement. First, it should be obvious that corporations are not merely collections of rules, policies and structures, any more than humans are collections of neurons, musculature, bone mass, internal organs, etc. If I am right in claiming that corporations have language and minds, then the design account of corporations given by Donaldson here no more gets at the nature of corporations than the design account of a human body gets at the nature of what it is to be human. At times, in certain contexts, as discussed, the design and physicalist stances are appropriate predictive strategies. But these stances no more exhaust the possible explanations of what a corporation is than they exhaust the possible explanations of what a human is.

Second, Donaldson assumes that the source of intentional action must come from the collection of rules, policies, and procedures that help comprise a corporation. But this seems as helpful as saying that the source of intention for speakers of English is the rules of grammar governing that language. Such an observation, of course, does not lead to the conclusion that since no such intention can be pulled out of these rules of grammar, then speakers of English must not be intentional systems. Donaldson's mistake is to think that the source of intentionality in corporations must be substantially or only found in its policies, procedures, rules, and the like. But just as with humans, corporate sources and causes of intentionality are broad-ranging and subtle, limited only by either's linguistic capacity.

Third, Donaldson seems to be using the word "intention" in a general, abstract way, and then saying that since corporations do not have a general intention, they must not be eligible for corporate moral personhood. Two observations can be made about this claim. First, if we reflect back on Dennett we will see that the intentional stance is not about abstracts; it is about action and prediction under a specified set of circumstances. It is situational, and it is not obvious that corporations are generally any worse than humans at effectively assuming an intentional stance given a particular set of circumstances. Donaldson's claim is akin to the ancients' question of how we shall live, or the theological query as to the reason for the existence of humans. If Donaldson means to say that corporations cannot and do not wonder about things in these big ways as humans do, we may still grant this point yet maintain that this does not mean that corporations cannot be moral actors. If this is what Donaldson believes, then he would make these large wonderings a crucial criterion for moral personhood. This may be true, but it is not obviously so. Donaldson needs to argue for this point, not simply point it out as a difference (one that I am not sure I would concede at any rate) between humans and corporations and then continue as if the point is self-executing. Second, he assumes that if we ask humans what a game intends, or what its purpose is, or why we play it, he will get responses which overlap to a substantial degree. But this assumption seems unlikely to be true. In any event, Donaldson has provided us with no situation specific enough to be testable.

Finally, Donaldson claims that

[The Moral Person] view assumes that anything which can behave intentionally is an agent, and that anything which is an agent is a moral agent. But some entities appear to behave intentionally which do not qualify as moral agents. A cat may behave intentionally when it crouches for a mouse. We know that it intends to catch the mouse, but we do not credit it with moral agency (though we may object on moral grounds to its mistreatment). A computer behaves intentionally when it sorts through a list of names and rearranges them in alphabetical order, but we do not consider the computer to be a moral agent. Perhaps corporations resemble complicated computers; perhaps they, according to complicated inner logic, function in an intentional manner but fail altogether to qualify as moral agents. One seemingly needs more than the presence of intentions to deduce moral agency (1982, p. 22).

But there seems no apparent reason why we should think of intentional systems as no more than a coherent collection of parts and then use that observation to demarcate the person from the nonperson. Here the functionalist would simply agree with Donaldson's statement, but also add that the human brain, like corporations, also resembles a complicated computer. And both of these types of computers are subject to meme-effects and conceptual influence. To think that corporations are reducible to their parts, but that no such reduction is possible for humans, is to replicate the thinking of those who believe humans to be the only holders of intrinsic intentionality. Such a position is a highly respected one in philosophy, but as noted it is not without a large and vocal opposition. Donaldson cannot simply recast the conclusions of this argument and expect those conclusions to be held self-evidently true. To put this another way, Donaldson assumes a design stance toward corporations and believes that it exhausts all the useful explanatory space necessary to understand corporate behavior.

Conclusion

Donaldson and others argue that more than intentions are needed to deduce moral agency. Throughout this essay I have happily granted that point and attempted to show what, in addition to intention, is required to hold something a moral agent. It seems plausible to think of corporations as language users and as things which have minds. But from this observation we should not go on to ask the further questions of whether or not corporations really have a language or really have thoughts and beliefs. From Dennett's functionalist perspective these are precisely the sorts of questions we should avoid. They lead us into irresolvable debates about the irreducibility of mind to brain, or carry us off toward investigations into the inner lives of persons. The question I have tried to answer here is not whether functionalism is right, in the sense of something that could be made true by a somehow revealed nature of corporations. Rather, it is whether or not functionalism can plausibly support a claim that corporations are moral actors. As argued above, it seems that while we cannot deduce moral agency from intentional activity alone, we can deduce moral agency from intentional systems that are language users and are adaptable to multiple rationalities. Corporations seem to meet these requirements.


Note

1. Of course this claim is generally handled under theoretical treatments of free will, but I will stick with the terminology used here because "free will" conjures up a host of concepts and thinkers that I do not mean to implicate in this article.

References

Dennett, D.: 1971, 'Intentional Systems', Journal of Philosophy 68, 87–106.
Dennett, D.: 1978, Brainstorms: Philosophical Essays on Mind and Psychology (MIT Press, Cambridge, MA).
Dennett, D.: 1987, The Intentional Stance (MIT Press, Cambridge, MA).
Dennett, D.: 1991, Consciousness Explained (Little, Brown and Co., Boston, MA).
Donaldson, T.: 1982, Corporations and Morality (Prentice-Hall, Inc., Englewood Cliffs, NJ).
Donaldson, T.: 1986, 'Personalizing Corporate Ontology: The French Way', in H. Curtler (ed.), Shame, Responsibility and the Corporation (Haven Publishing Corp., New York), pp. 99–112.
French, P.: 1979, 'The Corporation as a Moral Person', American Philosophical Quarterly 16, 207–215.
French, P.: 1984, Collective and Corporate Responsibility (Columbia U.P., New York).
Garrett, J. E.: 1989, 'Redistributable Corporate Moral Responsibility', Journal of Business Ethics 8, 535–545.
Nagel, T.: 1974, 'What Is It Like to Be a Bat?', Philosophical Review 83, 435–451.
Nagel, T.: 1976, Mortal Questions (Cambridge U.P., Cambridge).
Nagel, T.: 1986, The View From Nowhere (Oxford U.P., Oxford).
Pfeiffer, R. S.: 1990, 'The Central Distinction in the Theory of Corporate Personhood', Journal of Business Ethics 9, 473–480.
Rorty, R.: 1989, Contingency, Irony and Solidarity (Cambridge U.P., Cambridge).
Searle, J.: 1980, Expression and Meaning (Cambridge U.P., Cambridge).
Searle, J.: 1983, Intentionality: An Essay in the Philosophy of Mind (Cambridge U.P., Cambridge).
Velasquez, M.: 1983, 'Why Corporations Are Not Morally Responsible for Anything', Business and Professional Ethics Journal 2, 1–18.
Werhane, P.: 1989, 'Corporate and Individual Moral Responsibility: A Reply to Jan Garrett', Journal of Business Ethics 8, 821–822.

Department of Political Science, University of Texas at El Paso, El Paso, TX 79968, U.S.A.
