Friday 25 January 2013

Safety, Sensitivity and the Value of Knowledge

Epistemologists who wish to provide a modal analysis of knowledge are divided over whether the safety or the sensitivity condition offers a correct necessary condition for knowledge.  The two are usually parsed in (something like) the following way:
Safety: S’s belief p is safe iff in nearly all (if not all) nearby possible worlds w in which S forms a belief p using the same belief-forming methods as in the actual world, p is true.
Sensitivity: S’s belief p is sensitive iff in the closest possible world in which S forms her belief using the same belief-forming methods as in the actual world and in which p is false, S does not believe p.
Sosa suggested the following scenario, which has been influential in encouraging philosophers to favour safety over sensitivity:
On my way to the elevator I release a trash bag down the chute from my high rise condo.  Presumably I know my bag will soon be in the basement. But what if, having been released, it still (incredibly) were not to arrive there? That presumably would be because it had been snagged somehow in the chute on the way down (an incredibly rare occurrence), or some such happenstance. But none such could affect my predictive belief as I release it, so I would still predict that the bag would soon arrive in the basement. My belief seems not to be sensitive, therefore, but constitutes knowledge anyhow, and can correctly be said to do so. [‘How to Defeat Opposition to Moore’: 145-6]
Now, I’m not convinced that the condo owner really has knowledge in this case—as opposed to a true, justified belief, or knowledge of some closely related proposition, such as that it is highly probable that the bag is in the basement—and I also think that examples with the same structure as Sosa’s offer some evidence against the safety condition and in favour of sensitivity.

For instance, I once went to meet my supervisor on the fifth floor of our building, only to discover that his office was not there: he had moved to the sixth floor.  Two things strike me about this.  Firstly, it seems to me perfectly reasonable to hold that I didn’t have knowledge that he was on the fifth floor, even when this was true and I was justified (and safe) in believing it.  I was not properly connected to the facts: when the justified, true (and safe) belief became a justified, false belief, I was oblivious to the change.  Secondly, and relatedly, it is often thought that the value of knowledge resides (at least in part) in its connection to felicitous action.  A mere true belief is, all things being equal, less valuable than knowledge because the vagaries of the world ensure that it has a tendency to turn into a false belief.  Known propositions are the sort that allow us to successfully navigate our environment, and false beliefs will (in ordinary circumstances) lead us to behave infelicitously (e.g. by going to the fifth rather than the sixth floor).  The thought here is that there are many scenarios akin to the one above, and, hence, sensitivity seems to be a necessary condition for the sort of belief that allows a person to successfully navigate her environment.

Tuesday 8 January 2013

Nazi Philosophers

This article appeared in the Telegraph a few days ago chronicling the rise of Nazi philosophers in Hitler's Germany:
‘[Most academics in Germany] did not merely reconcile themselves to Hitler. They enthusiastically espoused Nazi ideology, and came up with all sorts of elaborate reasons to justify the purging of Jews, the persecution of dissidents, and the conquest and oppression of other nations. They went out of their way to flaunt their loyalty to the Nazi cause.’
What caught my eye was the diagnosis of why this happened:
‘Their deluded enthusiasm for the debased ideology of the Nazis is an instance of the fact that people who spend their lives debating abstract issues can become so distanced from the quotidian world that they can no longer see the obvious.’
This is a thought that I’m not wholly hostile to (although it's not clear it explains the uptake of Nazism amongst academics: plenty of people who occupied the "quotidian world" were Nazis too), but Palmer continues:

‘Philosophers are particularly vulnerable to this form of idiocy, because there is so little content to their subject.  It does not consist in the discovery of new facts, and philosophical theories are only seldom decisively refuted by anything.  Fashion is often the most important factor in explaining which doctrines come to be accepted by any group of academic philosophers.’
I think there is a more charitable explanation to be had.  Philosophy involves teasing out the consequences of various commitments.  This involves three responsibilities: one critical, one ampliative and one justificatory. [I'm drawing on Brandom's Reason in Philosophy here.]  The critical responsibility is to rectify mutually incompatible commitments; that is, to ensure that one's system of beliefs is consistent.  If one maintains P, $\neg$Q and P $\rightarrow$ Q, then at least one of these commitments must be jettisoned.  The ampliative responsibility is to become aware of the material consequences of one's current commitments.  Acknowledged commitments give rise to further commitments that one may not yet be aware of.  The responsibility to make oneself aware of these further commitments and to integrate them appropriately into the whole is a responsibility that aims at completeness.  Whereas the ampliative responsibility looks inferentially downstream, the justificatory responsibility looks inferentially upstream.  Agents are responsible for offering reasons for their commitments, by claiming commitments that entitle them to their current commitments.  The justificatory responsibility is directed at ensuring that one's network of commitments is warranted.  Philosophy is aimed at acquiring a certain kind of understanding, an integration of our beliefs into a coherent whole.  Unearthing our inferential commitments, however, can only take us so far; it can tell us that P, $\neg$Q and P $\rightarrow$ Q are not compossible, but it does not tell us what thereby to do: whether to reject our belief P, our belief $\neg$Q, or whether to reconsider the conditional itself.  As the saying goes, one man’s modus ponens is another man’s modus tollens.
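The point about the critical responsibility can be illustrated with a brute-force consistency check. The following is a sketch of my own (the encoding of commitments as Python functions over a valuation is purely illustrative): enumerating the truth-value assignments shows that P, $\neg$Q and P $\rightarrow$ Q cannot all be held together, while dropping any one of the three restores consistency, which is exactly why logic alone cannot tell us which to drop.

```python
from itertools import product

# Each commitment is a function of a valuation (P, Q) -> bool.
commitments = {
    "P":      lambda P, Q: P,
    "not-Q":  lambda P, Q: not Q,
    "P -> Q": lambda P, Q: (not P) or Q,
}

def consistent(claims):
    """A set of claims is consistent iff some valuation makes them all true."""
    return any(all(claim(P, Q) for claim in claims.values())
               for P, Q in product([True, False], repeat=2))

print(consistent(commitments))  # False: the three are not compossible

# Dropping any single commitment restores consistency:
for name in commitments:
    rest = {k: v for k, v in commitments.items() if k != name}
    print(name, "dropped:", consistent(rest))  # True in each case
```

The check reports the incompatibility but is silent on the repair: rejecting P, rejecting $\neg$Q, and rejecting the conditional all yield consistent remainders, and choosing among them is the philosophical work.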

This goes some way to explaining why philosophers aren’t inoculated against evil political ideologies.  An ability to reason is not, by itself, sufficient to steer away from such things.  In fact, an ability to reason can aid dogmatism, as it makes it easier to defend a view, any view, from counterarguments.  The lesson I think ought to be drawn from the sad prevalence of Nazi philosophers isn’t that philosophy “lacks content” (however that view is parsed), but that rational nous alone isn’t enough to get at the truth; we also require the intellectual virtues of open-mindedness, independence of thought, intellectual honesty and humility, and self-awareness.  Reason, in this sense, is rather like courage: whether it is used for good or ill depends on the other character traits of those who employ it.