I've been thinking about how (and when) to have moral conviction. In particular, I've been considering something like a Jamesian defense of the idea that we can legitimately adopt (or embrace) moral convictions when we are presented with a "forced and momentous" option. (See Section IX of "The Will to Believe" for the idea.)
A colleague of mine suggested that perhaps we don't need to go all the way to conviction in such cases. Instead, we might accept one option over the other as our "working hypothesis" (by our own lights). The point is that if conviction is a form of belief, it might not be rational to adopt any particular belief about the options themselves (e.g. believing that option A is the right one); still, we might accept one option as the one we're going to take and treat it as our "working hypothesis"--i.e., we proceed as if that option were the right one.
(He referenced Bas van Fraassen's work in the philosophy of science as the source of this idea. It's supposed to resolve the problem of accepting certain scientific theories despite an anti-realist view of truth in science--roughly, the view that there is no "objective," theory-independent realm of scientific truth, which would make believing that one's theory is itself true rather awkward.)
I think my colleague might be right that in some instances we needn't go all the way to conviction. But I have reservations. Suppose I am faced with (sorry to be dramatic) a life-or-death sort of situation--it might involve my own life, or someone else's. I have to decide what to do, which values to honor in the case. What I said to my colleague is: "Maybe I could put my own life at risk for the sake of a 'working hypothesis'; I'm not sure about someone else's..." (I'm not inclined to think I'd do either.) So I don't think "working hypotheses" are always going to cut it. Am I wrong about this? (Or am I splitting hairs?)
For a lively illustration, go read (or re-read) Billy Budd.