Evolution and truth
One of the problems in having a philosophy related blog is that ideas are hard things to generate on demand, so often you need someone to raise the problems for you to think about. Being naturally (and preternaturally!) lazy, I don't go out looking for problems (of a philosophical nature; the ordinary kind seem to find me like flies find rotting garbage). Hence, this blog is sporadic.
Well, I just tripped over an interesting question raised by Certain Doubts: can we reconcile the Platonic value of truth with an evolutionary view of epistemology? That is, if we think that our knowledge is the outcome of evolution (which I do), can we still think that knowledge is about the getting of truth? Absolute truth, that is, not the purely "good enough for government work" kind that we all rely upon in daily life.
The conundrum is this: evolution (natural selection, anyway) serves only to optimise what was satisfactory in the past environments of the organism's ancestors. To use a memorable phrase of Hull's, evolution is like the Prussian military academy that turns out officers admirably suited to winning past wars. But our cognitive capacities were not designed to deal with such matters as subatomic physics, comparative biology, or risk analysis in complex economic systems. We were designed to deal with living in certain environments and social groupings that we mostly no longer live in, and the ideal of truth-gathering lies well outside that environment. In short, can we trust modified monkey brains, as Darwin asked in another context?
One way to rescue some form of naturalised epistemology of this kind might be to say that we can rely upon our native cognition in cases that are closely similar to those ancestral conditions, whatever they might have been. But even this fails, since evolution is not, generally speaking, a global optimiser, but a general satisficer. That is, so long as the animal is viable enough, relative to other candidates, it will procreate its kind. Some sort of induction is good enough for hominids in their usual environment - as Quine said, "Creatures inveterately wrong in their inductions have a pathetic, but praiseworthy, tendency to die before reproducing their kind". I don't know about the praiseworthiness, which remains to be seen, but Quine's claim was that we have an innate "quality space" honed by selection, which refers because it works.
There's an obvious problem with this, or rather, several obvious problems. First of all, we often make mistakes. Optical illusions have a long history in philosophy as indicators of the unreliability of vision, and similar issues arise with the rest of our cognition. We are very bad at deductive reasoning, for example, when the variables are abstract or unnatural, as the Wason selection task shows. We are bad at risk assessment - people are afraid of flying but not of riding in a car, although the risk of death or injury is far higher in cars than in planes. We fear home invaders far more than family, although you are more likely to be attacked by a family member than by a stranger, and so on. Our cognitive skills were selected for their contribution to reproduction, not for the knowledge they deliver to the individual. Evolution, if we envisage it as a designer, cares nothing for truth, just progeny.
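To see how unintuitive even simple deduction can be, here is a minimal brute-force sketch of the standard Wason task in Python (the card faces and the helper function are my own illustration): the rule is "if a card shows a vowel on one side, it shows an even number on the other", and most people choose the vowel and the even number, when the logically correct choices are the vowel and the odd number.

```python
# Brute-force check of the classic Wason selection task.
# Rule under test: "if a card shows a vowel, its other side shows an even number".
# A card needs turning over only if its hidden face could falsify the rule.

VOWELS = set("AEIOU")

def must_turn(visible: str) -> bool:
    """True if this card's hidden side could falsify the rule."""
    if visible.isalpha():
        # A vowel might hide an odd number; a consonant can't falsify anything.
        return visible.upper() in VOWELS
    # An odd number might hide a vowel; an even number can't falsify anything.
    return int(visible) % 2 == 1

cards = ["A", "K", "4", "7"]
print([c for c in cards if must_turn(c)])  # ['A', '7'] - not the intuitive 'A' and '4'
```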
Another problem is that our quality spaces, if they do come from evolution, are very domain specific. While this explains why we have a priori concepts - Kant's synthetic a priori is an evolutionary a posteriori, as Lorenz said - it doesn't guarantee their applicability outside those domains. A third problem is that of satisficing: if a concept works most of the time, that doesn't make it reliable all the time, just satisfactory so far. Consider which will be better - the ability to spot a predator if and only if there is a predator, or a jumpiness that works most of the time when there is a predator, but also makes us jump at sudden noises and shadows. There is a cognitive cost to being right all the time, and evolution won't burden us with an expensive solution that would in any case take time to employ, while the leopard is leaping at your throat. Better to startle and run, and be embarrassed, than to be right and dead.
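To make the trade-off concrete, here is a toy simulation in Python; every number in it (predator frequency, survival odds, alarm costs, reaction delays) is invented purely for illustration, not drawn from any real data. It compares a detector that fires if and only if there is a predator, but is slow and deliberate, against a cheap, jumpy one that false-alarms constantly.

```python
import random

random.seed(1)

P_PREDATOR = 0.02   # assumed chance of a predator per encounter (invented)
TRIALS = 100_000

def survives(fires, predator, reaction_delay):
    """Crude survival odds: missing a real predator is usually fatal,
    and a slow reaction to a real one is risky too."""
    if predator and not fires:
        return random.random() > 0.9             # 90% chance of death on a miss
    if predator and fires:
        return random.random() > reaction_delay  # slower detector, higher risk
    return True                                  # no predator: you live

def mean_fitness(false_alarm_rate, reaction_delay, alarm_cost):
    total = 0.0
    for _ in range(TRIALS):
        predator = random.random() < P_PREDATOR
        fires = predator or random.random() < false_alarm_rate
        if survives(fires, predator, reaction_delay):
            total += 1.0 - (alarm_cost if fires and not predator else 0.0)
    return total / TRIALS

# The "right all the time" detector never false-alarms but reacts slowly;
# the jumpy one false-alarms 30% of the time but reacts almost instantly.
print("accurate but slow:", mean_fitness(0.0, 0.5, 0.0))
print("jumpy but fast:   ", mean_fitness(0.3, 0.05, 0.01))
```

On these invented numbers the jumpy detector wins, for exactly the reason in the text: false alarms are cheap, while misses and slow reactions are fatal.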
So, can we have absolute truth and naturally evolved cognition? I think, as Putnam argued, that evolutionary epistemology fails to answer the core problems of epistemology. Or does it?
We often focus on individual cognition as the locus of selection, and in one way this is right, because we are considering the reason we have big brains, and the cognitive apparatus they make possible. But there is another level of analysis - that of the tradition or culture. These, too, evolve, only not biologically but culturally. A society that learns how to use local resources to flourish has knowledge, even if the "explanations" given involve ritual magic or religious aetiology. It doesn't matter how the local tribes explain the efficacy of the frog-derived poisons of the Amazon - it matters that the tribes know they work.
But that brings with it the baggage of the magical explanations, and so on, which are formed largely in the context not of the physical world as such, but of the social world, in which the explanations asked for and offered play functional roles in establishing and maintaining social structure. In short, these explanations are not about adapting to the "world" but to "society". And throughout human history, such cultural evolution has been both minimally adaptive to a given environment (or not, as in the Anasazi case, where local resources were overexploited), allowing the society to remain viable, and adaptive in social functional terms, allowing agents within the culture to interact in various ways. What it does not do is offer truth, or if it does, it is a truth of pragmatic virtue and convention.
But around 1700 or so, a movement finally took off in which several factors combined to permit human cognition to overcome, to an extent, the limitations of biology and society. In this movement, specialisation ensured that the explanations given were satisfactory in each domain to those who were best acquainted with the material. It recorded the work in detail, and made a virtue of discovery and theory. I am talking about science, of course.
The prior commitment to Platonic or Aristotelian truth had, in 2000 years, made almost no real progress of any kind. While the logical and theological domains were transformed, actual knowledge of the extra-social world was thin on the ground. When advances that did relate to physical facts were made before this point, such as engineering and manufacturing technologies, they could easily be lost, because that knowledge was held only by a small family tradition or a guild that might be banned or become uneconomic at any time. Yes, truth was a virtue, but one more honoured in the breach than the observance.
It wasn't even enough to be empirical. Many, such as Roger Bacon, Nicole Oresme, Frederick II, and others, had done good empirical observation in the Middle Ages, but the habit didn't take. With the appearance (in fact the evolution) of specialisation, and above all the communication of results and the availability of the records to all, science took on the role of a cognitive system. Subsequent adaptation of scientific institutions made it better able to generate knowledge than at any time in human existence.
But what is science adapting to, exactly? A naive answer might be that it is adapting to the data. This is a nice simple answer - we can rely upon science because hypotheses are tested and proven (in the older sense of "proven", meaning well tested and found reliable, as in the "exception that proves the rule" by testing it). Add a dash of falsificationism, and you are on your way. I think this is grossly mistaken.
Science is adapting to the real world, in some manner. It might help to think of it like this - a model builder invites you to guess what he has made, but it has a sheet of some thickness over it. You can see the points that support the sheet well enough, but the identifying details are hidden beneath the cover. How can you find out what it is? You have a stick to poke it with, and ten seconds. Go...
Well, it's a large model, and in ten seconds you can't feel all of it, so you poke at the supported bits in the hope that the shape will become clear. At the end of ten seconds, you guess that it's a model of some guttering. It's actually the Valles Marineris on Mars.
Science, like you, has a limit on what it can measure, and there's a cost to that activity, so some conceptual triage is needed. If something is measured, then, within measurement error, you have knowledge of that data point alone. The rest is inference, interpolation and generalisation. How do you choose what to measure, and what to infer? Some of these choices are influenced by your personal dispositions and predilections, some by the conventions of the discipline. Others are mostly due to social context, government policies, religious beliefs, and so on. Science has some social functional adaptation after all.
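Here is a minimal sketch of that point in Python, with an invented hidden shape standing in for the world: we "measure" four points, and our beliefs about everything in between are interpolation, that is, inference rather than measurement.

```python
import math

def hidden_shape(x):
    """The world under the sheet: never seen directly, only where we probe."""
    return math.sin(x) + 0.3 * x

probes = [0.0, 2.0, 5.0, 8.0]                  # where we chose to poke
data = [(x, hidden_shape(x)) for x in probes]  # our only actual data points

def interpolate(x):
    """Linear interpolation between measured points - everything here is inference."""
    for (x0, y0), (x1, y1) in zip(data, data[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    raise ValueError("outside the measured range: that would be extrapolation")

for x in [1.0, 3.5, 6.5]:
    print(f"x={x}: inferred {interpolate(x):.2f}, actual {hidden_shape(x):.2f}")
```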
This has led some to claim that science is just another worldview, a claim seized upon by antiscience movements like creationism and intelligent designism. But it isn't just another worldview. It is an enterprise that is parallel to worldviews, creates worldviews, and shares a multiplicity of disparate worldviews amongst its practitioners. Worldviews aren't what makes science special.
What makes it special is that the institutions themselves tend to make claims that can be challenged, and do get challenged if there is any reason to. And given that the resources of science are limited, like those of any other enterprise, there is always reason to do so. Of course, not everything gets tested in the usual sense, but there is sufficient selection for consonance with prior work, for empirical adequacy, and for explanatory power and the ability to generate research programs, that in the medium to long term science generates pretty well adapted results. Adapted to the empirical world, that is.
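A toy sketch of that selection process, with everything in it (the signal, the noise, the population sizes) invented for illustration rather than modelling any actual scientific episode: a population of candidate hypotheses is repeatedly challenged against noisy observations, the worst fits are discarded, and variants of the survivors take their place.

```python
import random

random.seed(42)

TRUE_SLOPE = 2.5    # the "empirical world", hidden from the hypotheses

def observe(x):
    """A noisy observation of the world."""
    return TRUE_SLOPE * x + random.gauss(0, 0.5)

def misfit(slope, trials=50):
    """Average absolute error of a hypothesised slope against fresh observations."""
    xs = [random.uniform(0, 10) for _ in range(trials)]
    return sum(abs(observe(x) - slope * x) for x in xs) / trials

population = [random.uniform(-10, 10) for _ in range(20)]   # initial guesses
for _ in range(30):
    population.sort(key=misfit)                 # every claim gets challenged
    survivors = population[:10]                 # keep the empirically adequate
    population = survivors + [s + random.gauss(0, 0.3) for s in survivors]

print(f"best surviving hypothesis: {min(population, key=misfit):.2f} (world: {TRUE_SLOPE})")
```

No hypothesis is ever "proven" here; the population simply becomes better adapted to the observations over time, which is the sense of "adapted results" meant above.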
I can think of some constraints on science, though. One is, and I haven't seen this discussed anywhere so You Heard It Here First™ unless you didn't, that science cannot progress beyond the ability to teach the next generation how to proceed. If something requires a depth of study that will take fifty years to learn, either it will be abandoned in favour of some other, more rapid payoff to the individual's career, or the field will be divided into more manageable chunks. What can't be taught won't be investigated at all.
So science is not absolute truth. Is it truth at all? Are we being objective here? Who knows? But a truth that is unattainable is not a truth that does much hard work. If the truth we can get through science works well enough, that is the best we can hope for. Fortunately we got enough of a leg up from evolution to get started, and fortunately we did get started. But the Platonic ideal of truth is a dream of a reality we will never have access to, and is thus an item of religious faith we do not need.