Evolving representations
So, carrying on the line of argument from the Abstract and Concrete in Biology post, let us ask the obvious question: Why do scientific abstractions, or models, represent the world at all?
One way to approach this philosophically is to follow Russell and say that they denote. That is, the variables of the model have a true interpretation. But this doesn't help us here - it may be that they do, but why do they? Quine suggested that there is really no answer to this: whatever the variables of our best scientific theories range over, those things are what we must say exist. But, apart from a cryptic comment, he doesn't tell us how this comes to be, either.
The answer often given is one I have some sympathy for. That is, scientific theories evolve by a process very like natural selection. This view goes back to T. H. Huxley, who noted that scientific theories are engaged in a struggle for survival, and that the fittest survive, but it begs a question. We are using the model of natural selection to explain how theories evolve - but natural selection is itself a theory. Isn't this circular?
There is another problem with this view - one noted by the philosopher Kim Sterelny. Selection can leave a population (in this case, of scientists holding a particular view) stranded on a suboptimal fitness peak, unable to attain a better peak nearby (or distant). So if science is a selection process, what warrant do we have to think it has found an optimal ontology? Isn't this a reason to think that science basically just finds something that, to use Herbert Simon's term, satisfices rather than optimises our epistemic commitments? That way lies social constructionism.
Let's look at each of these in turn. First, the petitio claim. Natural selection is basically a model, yes. As a model, it implies that in any interpretation in which the conditions required for its outcomes hold, and in which nothing countervails, selection necessarily results. It's a theory when applied to particular cases in biology (and, here, to conceptual evolution), but the model itself is a fact of logic. Sometimes this is misleadingly said to imply that natural selection is a tautology, as if that were a bad thing. It is a tautology - something that is necessarily true. But it is only true when the conditions apply, and whether they apply is a matter of empirical observation. So while the bare model is tautological and therefore uninformative (in one way - it certainly came as a surprise to those who developed it, from Adam Smith to Darwin and on), the application is not tautological. And we can see this by considering cases where it fails, such as random drift where there is no selective pressure. Remember this - it will become important shortly.
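To see the contrast in miniature, here is a toy sketch - a one-locus Wright-Fisher simulation of my own devising, with illustrative names and parameter values, not anything drawn from the sources discussed here. The same machinery produces directional change when the conditions for selection hold, and a mere random walk when they don't:

```python
import random

def wright_fisher(p0=0.5, s=0.1, N=100, generations=200, seed=1):
    """Toy one-locus Wright-Fisher model with two alleles, A and B.

    s is A's selective advantage. With s > 0 the model's conditions
    (heritable fitness differences) hold and A's frequency climbs
    predictably; with s = 0 nothing countervails sampling noise, and
    the frequency just drifts - the failure case noted in the text.
    """
    random.seed(seed)
    p = p0
    for _ in range(generations):
        # Selection step: reweight A by its relative fitness.
        w_bar = p * (1 + s) + (1 - p)        # mean fitness
        p_sel = p * (1 + s) / w_bar
        # Drift step: binomially sample N offspring from the pool.
        p = sum(random.random() < p_sel for _ in range(N)) / N
        if p in (0.0, 1.0):                  # fixation or loss
            break
    return p

print("with selection (s=0.1):", wright_fisher(s=0.1))  # A almost always fixes
print("neutral case  (s=0.0):", wright_fisher(s=0.0))   # outcome is a random walk
```

The point is not the particular numbers but the conditional structure: the selective outcome follows necessarily when, and only when, the conditions are in place.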
So if we arrived at the natural selection model by trial and error, or by some other selection-like process, it doesn't matter. The origins of a claim don't affect its veracity; to say that they do is to commit the Genetic Fallacy. If God handed it down on the third tablet of commandments, it would still be true. So we can dispose of the circularity objection.
Sterelny's objection is rather more serious. But there is a way around it, and it has to do with the adaptive landscape itself, as I discussed before in connection with the work of Sergey Gavrilets. Any reasonable account of evolution in biology - and, I am going to say, of concepts and science too - must recognise that the number of dimensions of the adaptive landscape is very large. Once it is large enough, there are likely to be regions of interconnected ridges in fitness space that are not much less fit than the peaks (Gavrilets calls these "large components"). Since a population of evolving entities will scatter around a peak, some of its members will be able to drift at random to other peaks, even though all of them are of high fitness (that is, strongly, but not too strongly, selected). This is like Wright's genetic drift, but it doesn't require special conditions such as small population size, because no fitness valleys need to be traversed at all. In short, there are usually pathways from here to there.
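A crude sketch can show why dimensionality matters. The assumptions here are mine, and far simpler than Gavrilets' actual models: genotypes are binary strings, and fitness is flattened to "viable or not" with a fixed independent probability per genotype. Even so, as the number of dimensions L grows, the viable genotypes increasingly hang together in one connected network:

```python
import itertools
import random

def largest_viable_fraction(L, p_viable=0.5, seed=1):
    """Percolation sketch of a 'holey' adaptive landscape.

    Genotypes are binary strings of length L; each is independently
    'viable' (high fitness) with probability p_viable, and edges join
    one-mutation neighbours. Returns the share of viable genotypes
    sitting in the single largest connected component.
    """
    random.seed(seed)
    viable = {g for g in itertools.product((0, 1), repeat=L)
              if random.random() < p_viable}
    if not viable:
        return 0.0
    seen, best = set(), 0
    for start in viable:
        if start in seen:
            continue
        stack, size = [start], 0             # flood-fill one component
        seen.add(start)
        while stack:
            g = stack.pop()
            size += 1
            for i in range(L):               # all one-mutant neighbours
                n = g[:i] + (1 - g[i],) + g[i + 1:]
                if n in viable and n not in seen:
                    seen.add(n)
                    stack.append(n)
        best = max(best, size)
    return best / len(viable)

for L in (4, 8, 12):
    print(f"L={L:2d}: largest component holds "
          f"{largest_viable_fraction(L):.0%} of viable genotypes")
```

With these toy settings, the largest component's share of the viable genotypes should climb toward one as L increases - "pathways from here to there" in miniature.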
If this is true, and I see no reason to doubt it in biology or scientific evolution, then we can expect that low-fitness theories will have been eliminated, and their ontologies with them. So we have a reason to think that the "surviving" ontologies are likely to denote, even if there is some slop or contradiction in elements of the competing theories.
Given the interconnectedness of scientific ideas - when geology and astronomy, as well as chemistry and physics, are all used in evolutionary hypotheses, for example - it is unlikely that this consilience is just an accident. So we are warranted in believing that the variables of our best attested theories denote.
But that "best attested" is the kicker. Not all theories or explanations have the same degree of attestation. When we try to develop a model of striped animals, for instance, we are not faced with neat, cleanly demarcated classes. We have to construct and test them, and there is a lag between testing and attestation. That is, we have to find out what the best classes of things to explain are, and this is a matter of iterating hypothesis, testing, and refinement.
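Purely by way of illustration - the data, the contrast score, and the function are invented for this sketch, not drawn from any real taxonomy - here is that loop in miniature. A candidate boundary for "striped" is hypothesised, tested against judged cases, and refined; because the cases overlap, some ambiguity survives every iteration:

```python
def refine_boundary(cases, theta=50, step=5, rounds=20):
    """Hypothesise a class boundary, test it, refine it, repeat.

    'cases' pairs a measured stripe contrast (an invented 0-100 score)
    with a judgement of whether the animal counts as striped. Each
    round tests the current boundary theta and keeps whichever small
    shift reduces misclassification - a crude stand-in for the
    iteration of hypothesis, testing, and refinement in the text.
    """
    def errors(t):
        return sum((contrast >= t) != striped for contrast, striped in cases)

    for _ in range(rounds):
        theta = min((theta - step, theta, theta + step), key=errors)
    return theta, errors(theta)

# Invented observations; note the overlap around 55-60, which no
# boundary can resolve - the classes are not cleanly demarcated.
cases = [(90, True), (80, True), (70, True), (55, True),
         (60, False), (50, False), (40, False), (20, False)]

theta, residual = refine_boundary(cases)
print(f"refined boundary: {theta}, cases still misclassified: {residual}")
```

The leftover misclassified case is the lag in miniature: the classes have to be constructed and refined, and some slop remains after testing.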
So it follows that our best models will contain less well attested generalisations, and these are to be checked against both evidence (in terms of being part of the best available model) and conceptual difficulty. We seek the generalisations that best cover the terrain, but sometimes the terrain itself is hard to isolate. This is where I think conceptual analysis plays a role.
When generalisations cause ambiguity, they are immediately suspect. Ambiguity cannot be eliminated altogether, and in fact it may be one way that science is able to escape from local high-fitness peaks and move on. But a concept that has been a long-standing problem, and shows no sign of being refined despite attempts to do so, ought to be a prime candidate for revision, disambiguation, or elimination. Of course, it is important that we eliminate the scientific concepts rather than the philosophical ones (at least, for science - philosophical ambiguities are also subject to revision, in philosophy), and so it's important that the concepts are actually scientific. To this end, appealing to intuitions or to older literature won't suffice. Nor has that been the usual philosophical practice - Mill, for instance, used the best scientific ideas of his day when discussing classification in the System of Logic, and so on for Russell, Carnap, and other analytic philosophers. They have flocked, as it were, around the centres of epistemic activity, not the arid and largely meaningless ideas in the air of common discourse, or that synonym for it, "introspective intuitions".
Next, a discussion about what makes some idea an abstraction, and where it is...