

Do we need objective morals?

Comment 13 by Quine

A very complex subject at the least. Yes, I also recommend Sam's book to all as a starting point if you are not already coming from a Philosophy of Ethics background.

I am sure we all have had these discussions with our theist friends and family. Sometimes it is framed as "absolute" v. "relative" morality. Most theists I have met associate relative morality with some kind of "relativism," which is then held out as inferior or even subversive.

As Jos mentioned above, you really can't have morals that are completely independent of any minds, not the least of which is the mind that controls the actions that result from some kind of understanding, or interpretation, of said morals. Then we still have the question of when "subjective" ends and "objective" begins. Are there things that are neither clearly objective nor subjective? Are there moral positions that people generally recognize as "not even wrong"?

Hume still stands in his observation that you can't work out how things ought to be from knowledge of how things are. And even if you could, you only have the knowledge of the world that we have today; what if something we find out from, say, brain research changes the "objective" moral position we worked out from what we know today? Beyond knowledge, there are issues of knowability. An adult can generally know things that are not possible for a very young child to know. What if getting to "objective" morals would require knowing things that are not possible for our brains to process at this point in our evolution? Perhaps we need to wait to grow another layer over the existing cortex so as to see all the current neural workings and motivations in abstraction.

Can an objective analysis of subjective data yield objective results? Could a crowd-sourced algorithm, one that takes subjective input from everyone but does not necessarily reflect the opinions of anyone, be constructed to give us objective results? Andy Thomson has done quite a bit of work putting together problems with which to test potential moral positions. What if we could test algorithmic positions that way?
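To make the crowd-sourcing idea concrete, here is a minimal sketch of what such an aggregation might look like. This is purely a hypothetical illustration (the position names, the 0-10 rating scale, and the median rule are all my assumptions, not anything proposed in the thread): each person supplies a subjective score for each candidate moral position, and the algorithm ranks positions by their median score, so the output need not match any single respondent's full set of opinions.

```python
# Hypothetical illustration: rank candidate moral positions by the median of
# subjective per-person ratings. The median result need not coincide with any
# individual respondent's opinions, which is the point of the thought experiment.
from statistics import median

def aggregate_positions(ratings):
    """ratings: dict mapping position name -> list of per-person scores (0-10).
    Returns position names sorted by median score, highest first."""
    scored = {pos: median(scores) for pos, scores in ratings.items()}
    return sorted(scored, key=scored.get, reverse=True)

# Three respondents rate two (made-up) positions on a 0-10 scale.
ratings = {
    "position_a": [2, 9, 7],   # median 7
    "position_b": [6, 5, 4],   # median 5
}
print(aggregate_positions(ratings))  # -> ['position_a', 'position_b']
```

The median is just one possible aggregation rule; social-choice theory offers many others (e.g. Borda counts, Condorcet methods), each with its own trade-offs, which is exactly where the "objective results from subjective inputs" question gets hard.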

Bottom line: there is no way to know whether we have, or can have, truly "objective" positions, and whatever we decide to use is what we are going to use, regardless.

Wed, 25 Jul 2012 05:19:07 UTC | #950020