Thursday 7 March 2013

Bishop and Trout's 'Strategic Reliabilism'


Excellent Reasoning: Imagine What Fantastic Reasoning Would Look Like
In this essay I will provide an analysis of chapters five and six of Bishop and Trout’s[1] Epistemology and the Psychology of Human Judgment. I will first give some brief context as to how the reason-guiding principles of ‘robust reliability’, ‘costs and benefits of excellent reasoning’, and ‘epistemic significance’ presented in these chapters fit within strategic reliabilism. I will follow that with a detailed analysis of the aforementioned chapters, and conclude with a discussion of the potential implications of Bishop’s arguments for notions of artificially augmented cognitive capacity and for the utility of standard analytic epistemology (SAE). I will show that the principles presented in these chapters contribute to the work’s larger goal of providing practical advice to cognisers so that they can improve their reasoning, as well as to forming a tangible opposition to SAE.
In chapters five and six, Bishop completes his account of the three components that he argues constitute the benefits of strategic reliabilism: ‘robust reliability’, ‘tractability’, and ‘epistemic significance’. These relate to the larger theme of reasoning excellence insofar as, according to Bishop, they would lead to an efficient, reliable, and relevant epistemology, which Bishop and Trout are banking on the reader taking to constitute the virtues of excellent reasoning. Robust reliability refers to the ability of a reasoning strategy to be applicable in many particular cases; the strategy must work ‘on a wide-range of problems’. (55) For instance, the MMPI might lack robust reliability to a degree, insofar as it does not perform as reliably in determining the psychological status of African Americans and Mexican Americans; African Americans score 5 points lower on the MMPI, and thus the reasoning strategy loses a degree of robust reliability.[2] As I will illustrate in the body of this essay, tractability refers to the way in which a reasoning strategy can be assessed with regard to its cost and the benefit it is directed towards, while epistemic significance refers to the need for excellent reasoning to include a way of focusing on beliefs that are relevant to the cogniser; epistemology is not just about getting true beliefs, it is about getting true beliefs that matter.
As stated above, chapter five is concerned with the second virtue of excellent reasoning: the ‘efficient allocation of one’s cognitive resources’. (79) Bishop commences the discussion by highlighting what he perceives as the shortcomings of ‘Standard Analytic Epistemology’ (SAE). Bishop argues that any epistemological account must ‘recognise that we are limited creatures, and [as] such we have to make choices about how to spend our cognitive resources’. (79) While SAE, Bishop argues, might attempt to give us the formal definition of ‘Knowledge’ or what it means to say that one has a ‘justified true belief’, it fails to tell people just ‘which of the infinitely many justified beliefs one ought to adopt’; Bishop is longing for an epistemology with a normative edge to it. (79) While Bishop anticipates that SAE would respond by saying the concerns he has raised so far are merely pragmatic, he argues that any epistemology that attempts to ‘guide reason’ must take resource allocation into account. Strategic reliabilism, Bishop argues, has the above-mentioned ‘edge’ insofar as it takes cognitive resources (and relevance, as we will see later) seriously and attempts to provide a framework for considering resource allocation: a cost-benefit approach.
According to Bishop, cost-benefit analysis (CBA) is understood generally as ‘a reasoning strategy that permits us to estimate the desirability of various tradeoffs available to us’. (80) The most common example of this is the way in which businesses attach a dollar value to costs and benefits in order to weigh potential choices and ensure that they choose the one that runs parallel to their priorities. Bishop is clear from the outset that cost-benefit analysis is plagued with theoretical problems. He initially raises two common objections to CBA: firstly, that it relies on money as a common value, and secondly, that certain values are incommensurable and thus cannot be compared against each other. Bishop argues that despite these accepted theoretical shortcomings, CBA can remain an invaluable tool insofar as it helps a cogniser assess her priorities in relation to her values. Bishop continues by outlining two kinds of ‘flawed’ CBA which he argues remain useful: 1) an ‘incomplete’ CBA, and 2) an ‘unreduced’ CBA. (81) An incomplete CBA is one that ‘focuses on only a subset of the values at stake in the decision’, and an unreduced CBA does not attempt to reduce the desired factors to a common value. (81)
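To make the distinction concrete, here is a minimal sketch, of my own construction rather than anything Bishop and Trout provide, of what an incomplete and unreduced CBA might look like in code. The options and value dimensions are entirely hypothetical; the point is only that the trade-offs are laid out side by side without being collapsed into a single currency.

```python
# A minimal sketch (not from Bishop and Trout) of an "incomplete", "unreduced" CBA.
# The options and value dimensions below are hypothetical; they cover only a subset
# of the values at stake (incomplete) and are never reduced to one number (unreduced).

options = {
    "take the overtime shift": {"income ($)": 300, "family time (hours)": -5, "stress": "high"},
    "decline the overtime":    {"income ($)": 0,   "family time (hours)": 0,  "stress": "low"},
}

def tabulate(options):
    """Lay the unreduced costs and benefits side by side so the trade-off is explicit."""
    for name, values in options.items():
        print(name)
        for dimension, value in values.items():
            print(f"  {dimension}: {value}")

tabulate(options)
```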
Bishop proceeds by giving examples of how a flawed CBA can be valuable. The first of these examples relates to retirement packages. Bishop argues that a surprising number of people reject retirement packages[3] even though the benefits are substantial and most likely reflect the long-term goals or values of the people rejecting them; they might, for example, lead to intangible results like familial flourishing. Bishop argues that, ‘while these costs and benefits cannot be reduced’, insofar as money and human flourishing are incommensurable, one could still recognise the benefits and even ‘explicitly reject the values implicit in their actual decisions’ after carrying out a CBA. (82) The second example concerns past attempts to weigh the ‘incommensurable values’ of human life and money. For instance, a CBA was carried out in times of plague to determine whether the cost of protecting citizens from the plague was greater or less than the cost of those citizens perishing. A similar analysis was carried out in the 1970s, in which businessmen attempted to apply a monetary value to human beings even while acknowledging that no objective value could be placed on the ‘intangibles of life’. (82) While Bishop argues this is a ‘crass’ method of weighing values, and while he shares the common wish that these decisions had been made out of moral motivation, he does argue that, even though these CBAs were flawed, they were invaluable insofar as they helped ‘dramatise the opportunity cost of a favored course of action’. (83) While these analyses are ‘radically’ incomplete or unreduced, they still help us assess costs and benefits and their relation to our preferred outcomes.
Before moving to the relation between CBA and epistemology, Bishop considers the role of CBA in psychology. Bishop argues that CBA is a recurring feature of the perceptual and cognitive aspects of contemporary cognitive psychology; ‘psychologists often explain successes (...) in terms of specific allocations of attention and memory across perceptual modalities and cognitive capacities’. (84) The example Bishop features is that of reflexes, and more specifically the ‘ducking reflex’. (84) Bishop argues that the ducking reflex is seen as a series of ‘cognitive instantiations of a complex cost benefit analysis’; the costs and the benefits of either ducking or not ducking are weighed in a ‘complex payoff matrix’. (84) Another example of the relevance of CBA for psychology is the body of data showing just how unreliable multi-tasking is; this points to the need for an effective allocation of resources to matters of relevance if one wishes to perform tasks well. One can overcome, or at least make the best of, one’s cognitive limitations by taking to heart the results of ‘an analysis of the costs and benefits of the various distributions of cognitive resources’. (85)
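As a rough illustration of the payoff-matrix idea, and emphatically my own toy construction rather than anything the authors supply, one might model the ducking reflex as a comparison of expected payoffs under uncertainty about whether the incoming object is actually dangerous; all the numbers below are hypothetical.

```python
# A toy payoff matrix for the ducking reflex (hypothetical numbers, not the authors').
# Ducking costs a little regardless; failing to duck when the object is dangerous costs a lot.

payoffs = {
    ("duck", True):        -1,    # object dangerous, we ducked: minor interruption
    ("duck", False):       -1,    # object harmless, we ducked anyway: minor interruption
    ("don't duck", True):  -100,  # object dangerous, we didn't duck: we get hit
    ("don't duck", False):  0,    # object harmless, no duck: no cost
}

def expected_payoff(action, p_dangerous):
    """Expected payoff of an action given the probability that the object is dangerous."""
    return (p_dangerous * payoffs[(action, True)]
            + (1 - p_dangerous) * payoffs[(action, False)])

# Even at a modest 5% chance of danger, ducking comes out ahead:
for action in ("duck", "don't duck"):
    print(action, expected_payoff(action, p_dangerous=0.05))
```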
Bishop moves on to the task of incorporating the psychological data concerning CBA into epistemology by addressing the ways in which reasoning could involve ‘costs and benefits’. (85) Bishop argues that this must take into account the epistemic benefits, the cognitive costs, and the cost-benefit imperative. Bishop puts forth what he calls a common conception: ‘the benefits of reasoning will be the function of the accuracy of our judgements’. (85) Bishop argues this is simplistic, insofar as the benefit would really be a ‘complex function’: it wouldn’t simply be beneficial to have a large collection of trivially true beliefs, but rather to have true beliefs concerning what one would consider significant. For instance, while the above-mentioned criterion of epistemic benefits would lead a jury member in a municipal court of law to count, ad infinitum, the number of people in the courtroom, since this would secure a plethora of accurate beliefs, it is clear that the relevant epistemic benefit of reasoning in this case would be something like judging the likelihood that defendant D is guilty or not, or will re-offend or not. This leads Bishop to state that an accurate reasoner is not necessarily what one would call an excellent reasoner; an excellent reasoner excels in relation to problems of ‘significance’. (86) Bishop illustrates this through an example involving a dystopian society: while it is a question of accuracy whether someone appears to be carrying a weapon or not, it is a matter of significance what the cogniser should assume given the circumstances; act as if x (x = the subject is concealing a weapon), even if ~x seems true, given the significance of what is at stake. The notion of significance is crucial insofar as Bishop states that an epistemology must ‘begin by noting what sorts of problems tend to be highly significant for people’, and only then can one develop reasoning strategies that are ‘tractable and robustly reliable’ to address these concerns. (87)
Bishop turns to the question of cognitive costs by discarding one of the assumptions he has carried throughout the preceding chapters, namely that resources are of a fixed value and are ‘fungible’. (87) Bishop argues that there are in fact many types of resources: time, attention, short- and long-term cognitive capacity, and so on. Bishop points out that different reasoning strategies call upon different resources. He raises a doubt as to whether these values could ever be ‘reduced to a common value’, yet proposes a tentative empirical solution to the issue. (87) In what he calls an ‘imperfect stand-in’, Bishop argues that the time a decision-making process takes can give an insight into the real cognitive cost of reasoning. (88) Bishop is quick to highlight the two obvious virtues of such an account of CBA. For one, insofar as the cost of reasoning is measured by elapsed time, the cost of reasoning becomes, to some extent, an empirically determinable value; one could assess the costs by comparing the slopes of two strategies on a graph. Secondly, the account takes significance to be a surrogate for epistemic benefits; matters of significance are seen as the benefits of the strategy.
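A crude sketch, again my own and not the book’s, of how this stand-in might be used: treat elapsed time as the cost and accuracy on significant problems as the benefit, and compare two hypothetical strategies by how much benefit each buys per unit of cost, which is just the slope on a cost-benefit graph.

```python
# A rough sketch (my own construction, not Bishop and Trout's) of elapsed time as an
# empirical stand-in for cognitive cost. Both strategies and their figures are hypothetical.

strategies = {
    # name: (accuracy on significant problems, mean seconds per judgement)
    "simple statistical rule": (0.80, 10.0),
    "holistic deliberation":   (0.75, 120.0),
}

for name, (benefit, seconds) in strategies.items():
    slope = benefit / seconds  # crude 'benefit per unit of cost', i.e. the slope on the graph
    print(f"{name}: accuracy {benefit:.2f}, time {seconds:.0f}s, benefit per second {slope:.4f}")
```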
Bishop then raises a concern that could be levelled at his innovation: what is the use? Insofar as statistical prediction rules (SPRs) have been successful without these innovations, why is it necessary to include a more sophisticated representation, or guess, as to how SPRs work at this juncture? Bishop argues that displaying the relation between the costs and benefits graphically can in some sense ‘help bring a certain kind of discipline to reasoners’. (90) For one, the graphic representation of the costs and benefits of reasoning can provide an intuitive way for a cogniser to understand just how efficient it would be to stay faithful to an SPR, and just how inefficient it would be to stray away from it based on ‘feelings’. (91) If one familiarises herself with the costs and benefits of a reasoning strategy and sticks to it in principle, Bishop argues, she will develop an epistemic discipline and become a better reasoner ‘overall’. (91) Bishop argues that this is already tacitly assumed, in some sense, by some aspects of society. For instance, pilots are trained to treat impersonal instruments, and not visual intuition, as primarily reliable when flying; the latter can lead a pilot to think they are ‘flying right side up’ when they are actually ‘upside down’, while the former ensures the benefit of knowing a significant truth. (91)
Chapter six features the final of the three virtues at the heart of Trout’s strategic reliabilism: ‘epistemic significance’. Trout begins the chapter by recapitulating the idea that accurate truths are not necessarily significant truths. For instance, while one could secure a collection of trivially true beliefs by stating ‘how many Goodyear blimps are in his field of vision every second of the day’, this is surely not, Trout argues, what one would want to call an ‘excellent reasoner’; ‘excellent reasoners reason reliably to significant truths, not just any old truths’. (93) Trout argues that ‘significance is nonaccidentally related to the requirements of human well-being’; Trout is claiming that the virtue of significance can in some sense be understood in relation to the basic and complex problems human beings generally face. While Trout acknowledges that the notion of human well-being may seem vague, and that there is as yet no solid empirical basis for treating it as obvious, he claims there are fields in science, such as moral psychology, that are attempting to get closer to elucidating the empirical basis for such a notion.
Insofar as significance is inadequately understood, Trout moves to something potentially easier to manage: ‘the role of significance in strategic reliabilism’. (94) Trout raises certain problems concerning the relationship between what is significant to the cogniser’s reasoning strategy and the limited knowledge that cogniser may have about the ‘idiosyncrasies’ of their situation. For example, while a cogniser could assume that pondering the mind/body problem is of utmost epistemic significance at a given time, what may actually be significant is the train speeding towards their car. Furthermore, if one is given a task by a superior and decides to perform it in the way one sees best, this could be an issue if what one sees as the best work is incompatible with what the superior wants or expects. Is the notion of significance Trout argues for meant to shape strategies that are useful in every situation a cogniser could find themselves in? Trout argues that the notion he is defending is not a ‘recipe book’ of strategies for any given situation, but rather general advice for cognisers. (94) What does Trout mean by general advice? Trout claims that ‘if we concluded that causal matters are significant, then we could say devoting resources to understanding cause would be good general advice’; Trout is arguing that if many problems share a certain characteristic, one could in principle improve one’s reasoning overall by improving one’s reasoning aptitude in relation to that characteristic. (95)
Trout moves forward by presenting what he calls ‘the fundamental difficulty’ of any account of significance: the ‘thick-thin problem’. (95) The thick-thin problem is concerned with finding a mean between the two vices of ‘anything goes’ and ‘not much goes’; Trout is searching for a notion of significance that ‘allow(s) for some substantive interpersonal and interinstitutional differences about what problems are significant without licensing “anything goes” subjectivism’. (95) Trout elucidates the issue by appeal to two examples: on the one hand, he seeks a criterion thick enough to render certain problems insignificant even if one person or many people deem them significant, such as ‘establishing the length of one’s left thumbnail at every five second interval’; on the other, the criterion must be thin enough to allow some matters, like ‘establishing whether the short-tailed shrew is the smallest North American mammal with poisonous saliva’, to be significant, even if only a niche group of individuals sees them as such. (95) Trout puts forth what he claims is a legitimate criterion: ‘the significance of a problem for S is a function of the weight of the objective reasons S has for devoting resources to solving that problem’. (95)
This commits his view, Trout claims, to the assumption that ‘we have objective reasons of various sorts for action and that we can weigh these reasons against one another’. (95) He sees this step as natural insofar as he has assumed ‘a broadly Aristotelian approach to epistemology’; Trout argues that it is impossible to carry out a theory of epistemology without straying into the land of normativity. (96) By ‘objective reasons for action’, Trout means that these reasons for action exist whether or not the cogniser is aware of them. Trout illustrates this by pointing to the example of a ‘doctor who is diagnosing whether or not a patient has a serious disease on the basis of several tests’. Trout argues that the problem is significant because of the ‘objective and moral and prudential reasons for action’, and that even if the doctor is not aware of the significance, or does not care, it remains ‘intuitively plausible’ that the objective reasons still exist by virtue of the consequences of the decision. (96) Trout claims, however, that ‘it would be a mistake’ to assume that the significance of a decision stands only in relation to the consequences of that decision. (96) For one, it is clear according to Trout that ‘duty’ has a role in significance; one has objective reasons as well as obligations for action ‘regardless of the consequences’ if one is a doctor, a police officer, or a teacher. For instance, a doctor has a societal and moral obligation to ‘think clearly about their patients’. (96) Furthermore, certain problems are intrinsically significant regardless of one’s duty or the possible consequences associated with the problem. For instance, Trout argues that ‘discovering the basic physical or societal structure of the world’ is an intrinsically significant problem. (96)
Trout goes on to address certain concerns some may have with this reason-based account of significance. The first goes back to the individual counting Goodyear blimps and asks: ‘but what if the individual really does believe this is a significant problem of epistemic importance’? (97) Trout argues that, if it is in fact true that the individual has no ‘moral, prudential, epistemic, aesthetic’ basis for his action and remains committed to its significance, then one could say the problem remains significant, but only minutely so. (97) Furthermore, Trout argues that this cogniser’s issue most likely stems from ‘pathology’. (97)
The second issue concerns ‘lost causes’. If one takes a particular issue, like deciding whether the ‘U.S. health care system is just and efficient’, and one assumes that the American health care system does not lend itself to participation from the citizenry insofar as its policy is concerned, then it might be fair to say that any given individual has virtually no tangible power to put towards solving said problem. One can meaningfully ask in this case, Trout claims: ‘can this apparent lost-cause be a significant problem for the individual?’ (97) Trout claims that the problem ‘has some significance’ if the individual is concerned with it, and that, furthermore, even if the individual does not care, there are still two senses in which the problem could be significant for them. For one, the individual might have an obligation or duty as a citizen; Trout argues that even if short-term gain is unlikely, one has a duty to push forward in the service of improving society. Secondly, pursuing lost causes has, throughout history, shaped the character of human beings and of humanity more broadly; Trout argues on the basis of virtue ethics that the issue may thereby gain significance.
The third issue concerns ‘negatively significant issues’: issues cognisers should not devote their resources to even if they seem significant. One of the examples Trout uses is that a ‘psychologically healthy person’ ought not to recalculate their ‘narrow self-interests’ over and over again. (98) Another example, Trout argues, is that while some people focus on accumulating wealth on the assumption that it will help them attain happiness, these people tend, as it turns out, to be the least happy among their peers. Trout claims that these examples demonstrate that certain seemingly significant things should be avoided if one wants to secure well-being. Insofar as much of this hangs on the notion of well-being, Trout attempts to shed some light on the basis of his notion of well-being. Trout argues that, insofar as human beings exist in the natural world, they abide by natural law in the same way everything else does. Therefore, human beings must be analysable by science, and this could lead towards discovering just what constitutes ‘human well-being’. Trout argues that scientific data seems to show that human well-being consists of something like ‘health, deep social attachments, security, etc.’ (99)
The final section of the chapter consists of Trout illustrating one fact that the theory of strategic reliabilism must take into account. To begin, Trout restates his earlier claim: ‘the significance of a problem is determined by the strength of the reasons one has for devoting resources to it’. (99) Now, Trout asks, how does this claim work alongside a plain fact about the human condition: ‘often people through no fault of their own do not have access to those reasons’? (100) For example, trying to predict what gift a spouse would enjoy is seemingly significant for a cogniser if there is an upcoming anniversary. However, if it turns out that the spouse ‘runs off with the neighbor’ weeks before the anniversary, the problem seems to lose its significance. (100) Furthermore, some people, Trout argues, are right about what is significant but mistaken as to what makes it significant. Trout argues, first, that any theory that attempts to ‘guide reason’ will have to take this into account; it is more a failing of the human condition than of strategic reliabilism. (100) Secondly, strategic reliabilism provides advice that would in theory promote the virtues of reasoning excellence whether or not a cogniser knows what is significant; strategic reliabilism provides reasoning strategies that generally do work, and that is all that matters. Trout ends the chapter by heralding an era of ‘epistemic responsibility’ in a social setting; society should institutionalise ‘reason-guiding epistemology’ such that it would ‘promote the proper distribution of epistemic responsibility’. (101) The reasoning strategies provided (SPRs and other reasoning strategies), as well as the conceivably larger role epistemic responsibility could play in society, would, Trout argues, benefit society regardless of theoretical problems and problems of implementability, insofar as they contribute to general human well-being. Thus concludes chapter six.
Drawing from the arguments above, one could say that the general form of the normative argument Trout is making is: ‘if there is a method at one’s disposal that would increase one’s reasoning excellence overall, one ought to adopt it’. While this argument refers to ‘impersonal tools’ into which one can plug values to draw a reliable result, could one make a similar argument involving things other than tools, things that are very personal? For instance, if one considers the effective allocation of cognitive resources to be in some sense an imperative of excellent reasoning, could one argue that we have an obligation to use, if the technology is ever developed, some medicine or surgery to artificially augment one’s cognitive capacity? One could surely argue on the basis of intuition that such technology wouldn’t ‘feel right’. Yet, as we have seen in the preceding analysis, one ought to set aside intuitions favouring a less reliable method in the face of robust statistical data indicating the better overall performance of other models. If augmenting one’s capacity could directly decrease the elapsed time taken to perform a ratiocination, and one considers the costs of reasoning to contribute to epistemic excellence, and, furthermore, one in some sense has a moral or prudential obligation to achieve epistemic excellence insofar as it directly relates to human well-being, then is there not a sense in which one is obligated to artificially increase one’s cognitive capacity? This seems to be a prima facie implication of Trout’s theory.
A further implication one could draw from the work’s normative dimension, I would argue, stems from the notion of ‘intrinsic significance’. Bishop and Trout base their critique of SAE on its failure to provide practical advice for cognisers, to ‘guide reason’. Could one not argue that SAE possesses intrinsic significance insofar as it may, as its proponents at least argue, have the capacity to uncover the underlying structure of Knowledge? This might mean that, by virtue of some quasi-plausible application of Bishop and Trout’s own standard, there is a way in which SAE becomes significant. Bishop and Trout might respond that the methods of SAE are just inherently backwards; that there is simply something wrong with the way SAE attempts to track the truth. However, Bishop and Trout remain silent on the theoretical question of Knowledge, and so this response may not be the most relevant one for them to make. On second thought, their theory of strategic reliabilism does seem to feature, at least in some sense, the theoretical flavour of reliable process epistemology: one is justified in believing P if P is arrived at by an SPR? Isn’t strategic reliabilism a by-product of reliable process epistemology, which is itself a by-product of, or at least contained within, SAE? Perhaps the notion of intrinsic significance isn’t even necessary to justify SAE under Bishop and Trout’s theory, when one considers that the theoretical work done by SAE has served, historically at least, as the basis of strategic reliabilism. By this argument, if strategic reliabilism does in fact one day perform to the practical expectations Bishop and Trout have set up for it, their criticism of SAE will be self-refuting insofar as SAE would have led to practical results; namely, an epistemologically responsible utopia.[4]
In conclusion, I have provided an analysis of chapters five and six of Bishop and Trout’s Epistemology and the Psychology of Human Judgment and demonstrated just how the reason-guiding principles of ‘robust reliability’, ‘costs and benefits of excellent reasoning’, and ‘epistemic significance’ presented in these chapters fit within strategic reliabilism. I have also argued that artificially augmented cognitive capacity might have an interesting role to play within strategic reliabilism insofar as normativity is concerned, and that one can use Bishop and Trout’s own arguments concerning normativity to justify SAE.


Work Cited:

Bishop, Michael A., and J. D. Trout. Epistemology and the Psychology of Human Judgment. Oxford: Oxford University Press, 2005. Print.


[1] I will refer to Bishop throughout the first half of the paper and to Trout in the second half.
[2] McCreary, C., & Padilla, E. (1977). MMPI differences among Black, Mexican-American, and White male offenders. Journal of Clinical Psychology, 33(1), 171-172.
[3] It should be noted that Bishop does not provide a citation to justify this claim.
[4] When one considers that epistemic utopias and the augmentation of one’s cognitive capacities may be implications of strategic reliabilism, it seems that one larger problem with the theory is that it only considers human well-being and not human nature. I will not pursue this objection here because it is mostly intuitive at this point, but it seems that Bishop and Trout may not realise the scope of their claims.
