Here.
This test is a cut above most of the silly self-evaluation tests one finds on the Web. It’s worth the five minutes or so that it takes to complete.
(Via Ann Althouse, who got it via other bloggers to whom she links and whose test results are interesting to compare.)
UPDATE: Some of the commenters responding to Ann Althouse think the test is a typical bunch of manipulative rhetorical gotchas from people who think they know better. That may be right.
I took it twice and thought I’d answered exactly the same, but I hadn’t. Still, the general pattern was the same: medium morality (0.53), low interference (0.0), and high universality (1.00). Sure, if one person does it, maybe it isn’t a big deal; if a whole society does it . . . well. I’m not sure that’s an entirely consistent or admirable way to look at life, but I do think it fits with our vision of individual liberty and universal truths.
My stats:
* Moralising Quotient was 0.00, which means I’m “more permissive than average” (average = 0.38).
* Interference Factor was 0.00, which means I’m “less likely to recommend societal interference in matters of moral wrongdoing” (average = 0.22).
* Universalising Factor was -1, which “indicates that you saw no moral wrong in any of the activities depicted in these scenarios” (average = 0.54).
Odd, eh? I attribute my high permissiveness score to my definition of morality, which requires, in my mind, that some harm be visited upon a person or group by an activity for it to qualify as immoral. As a consequence, if no harm is being done, society has no business interfering with that person’s choices or private behavior, whatever our personal tastes. Seems like a reasonably clear and easy-to-follow line to me.
For me, the problem with the way the questions were phrased was their use of the term “wrong”.
Something can be wrong without being immoral, but the quiz implicitly assumes that “wrong” means morally wrong. However, something can be wrong because it’s unwise; that’s not the same thing.
I came up 0 on the “interference” scale, because I believe people have a right to be unwise.
Results
Your Moralising Quotient is: 0.63.
Your Interference Factor is: 0.40.
Your Universalising Factor is: 0.60.
A personal-morality response makes use of universal claims about right and wrong, but tends to see these as being a private matter and not as being a legitimate target of societal intervention.
Moralizing 0.6, interference 0.0, universalizing 0.8.
The authors of the quiz don’t recognize the damage done to the self-image of the perpetrators, or the damaging effect on the world in general (in the form of future actions and opinions) that follows from the experiences.
They would like to isolate the actions. We don’t live in a vacuum.
“No man is an island….” (although Donne was referring to the death of an individual).
0.0, 0.0, 0.17. Fun fun. Basically the same as Michael’s, but I received a little preachiness from the site as to my views on ‘wrong’ being equated with external (non-societal) morality. Whatever. Michael’s last paragraph describes my views almost exactly, with the whole definition of ‘harm’ possibly being under contention, as well as the whole ‘these actions existing in a vacuum’ bit being debatable.
Very much agreed with Tyouth.
Someone may not suffer any visible ill effects, but perhaps their action cuts them off from positive effects they’d have been able to experience were it not for that action. Or perhaps their action harms their character. We don’t live in a vacuum; saying “no harm came of it” is a stronger conclusion than can validly be drawn.
My wife commented to me that one of the unfortunate things about philosophers is that they aren’t mathematicians. When philosophers discover a paradox, they write paper after paper after paper about the paradox, trying to resolve it. When mathematicians discover a paradox, they re-evaluate the model that caused it. Philosophers seem to have a lot of trouble actually evaluating underlying models, though. Instead, they assume some model is correct (in this case, their harm-based moral model) and they criticize the details of conclusions people draw from other models.
I don’t think it ever even occurs to them that their model of “harm” might not be the best one.
LotharBot, that is a very insightful comment.
Thanks. After I posted it, I worried I might have been rambling incoherently. It was well after 3 AM when I wrote it, and I’ve been known to write some wacky stuff at 3 AM…
One thing I’ve learned in many years on the internet is that you can’t persuade anybody of anything if you’re not willing to give honest evaluations that make them feel respected. People don’t feel respected when you use your model (of morality, of the nature of the universe, etc.) to judge their conclusions. They feel preached at. Telling someone their conclusions don’t match your model isn’t very enlightening.
There are some methods of persuasion that I’ve found actually work:
1) evaluate their conclusions using their own model. In order to do this, you have to take the time to really understand their model as they describe it (it doesn’t work to say “I read an article that explains why people like you believe what they do”.) From there, you can evaluate conclusions they give in terms of whether or not they’re consistent with the model you’ve been presented. And, of course, always be open to the possibility that you misunderstood their model, and make sure to point out the good as well as the bad.
2) evaluate the model itself. Again, you have to really understand it (and if you think you can understand, say, a Republican’s beliefs by reading Democrat-written articles/websites about Republicans, you’re badly mistaken.) If someone says they don’t believe in thing X because of reason Y, you can discuss whether or not it makes sense to model thing X in a way that it’s invalidated by reason Y. (As an example, see my wife’s post on predestination and free will, specifically the couple of paragraphs starting with “From a secular philosophical point of view, I think people have things all backwards.”) Point out ways in which their model seems insufficient or contrary to observation, and ask fair but challenging questions about it.
3) Describe your model to them, and compare and contrast it with theirs (pointing out both its strengths and its weaknesses.) Get them thinking about things from that perspective — evaluating models — and get them thinking about multiple models. In some sense, this is what is really meant by moving people into the “marketplace of ideas” — bringing them to a point where they’re really comparing ideas head-to-head, not just evaluating conclusions based on their own preconceptions but actually testing their preconceptions and others’ preconceptions against observations.
One absolutely key point to each of these is that you have to deal with entire models, and often even entire systems of models. If you want to evaluate my model of morality in a way that’s respectful to me, you have to deal with my model of harm as well. If you want to evaluate my thoughts on free will and predestination, you have to understand my definitions of free will and predestination. If you evaluate my model using your definitions, you’ll probably find my model doesn’t work — but that doesn’t enlighten me one bit. The difficulty here is that it takes an awful lot of time and an awful lot of empathy to get to the point where you understand somebody well enough that you can really deal with their models respectfully. (Most of the people in the predestination/free will thread cited above have been developing trust and understanding with each other over the course of 5 or more years, and we’re just now reaching the point where that sort of discussion can be honestly had.)
The quiz in the original post, I thought, was of the “preachy” variety. It basically told me that my conclusions wouldn’t work under those people’s model, and that therefore something’s wrong with them (as if my model should be expected to give the same conclusions as theirs.) To its credit, it did at least say I was being consistent (an attempt at method #1), but it never really moved beyond that into addressing my model. In some sense, this is a recursion problem — it *did* try to evaluate my moral model (or one that was superficially similar, anyway) but still evaluated it based on their model of harm. It looked at part of my model (on morality) but evaluated it in the context of their own model (on harm). That left me feeling disrespected, misunderstood, and preached at.
I think that may explain the “manipulative rhetorical gotchas” sentiment that’s been expressed. That’s usually the knee-jerk response I have to arguments that are based on subtly different definitions than the ones I use… my feelings of manipulation and rhetorical entrapment usually signify a slight disconnect in language usage.