March 12th, 2014 by Elijah Weber · No Comments
Words are a powerful thing. Certain words have historically been used to debase and degrade marginalized groups, with the goal of reinforcing their lesser social standing. Today, most of these words have become socially unacceptable, rejected on the same grounds that we reject the underlying attitudes from which they typically flow.
Now, a new set of words is coming under the same set of criticisms. Seattle Seahawks cornerback Richard Sherman recently made headlines when he commented that calling someone a “thug” is really just “the accepted way of calling someone the n-word.” Sheryl Sandberg, of Facebook and Lean In fame, has raised similar concerns about the word ‘bossy,’ contending that its use is a means of discouraging women from taking leadership roles and actively participating in group decision-making procedures.
There is, I think, something to both of these concerns. Richard Sherman is clearly not a “thug” in the sense of being some sort of gang-affiliated criminal. He’s an eloquent, thoughtful, Stanford-educated man, a modern-day Horatio Alger who worked his way from a difficult home life in Watts, CA, to the pinnacle of success in his chosen profession. But he’s also a confident, at-times arrogant man whose in-your-face style is off-putting to certain people. One might worry that those who would call him a “thug” are expressing the attitude usually associated with the n-word, but in a manner that society seems to tolerate.
One might have similar concerns about the word ‘bossy.’ Clearly, a big part of Sheryl Sandberg’s success is due to her willingness to reject the idea that being a leader is a negative character trait for a woman to possess. And the notion that women are socialized to avoid taking on leadership roles is at least one plausible explanation for the overwhelming gender disparity that exists in virtually every corporate boardroom in America. If ‘thug’ is the new n-word, perhaps ‘bossy’ should be regarded as the new b-word.
But are these updated versions of seemingly prejudiced language really comparable to their outdated counterparts? There’s at least one way in which they are not. The n-word, along with various derogatory terms for other racial minorities, women, and gay people, was directly utilized in the historical subjugation of the groups to which these terms typically refer. Quite simply, black people were viciously beaten, sexually abused, and killed while being referred to with the n-word. Women have been called the b-word while being raped, sexually assaulted, and systematically denied the privileges that men receive. ‘Thug’ and ‘bossy’ don’t have these sorts of historical associations, and one might think that this makes a difference for how seriously we ought to take their usage.
The deeper question is how much historical factors matter. While ‘thug’ and ‘bossy’ do not have the same history of oppression behind them, they do seem to carry similarly negative attitudes of disapproval and inferiority. These newer terms also suggest, as do their historical predecessors, that certain sorts of conduct are inappropriate for certain sorts of people, but based on factors that seem irrelevant to the appropriateness of that conduct. Being a leader is no more improper for a woman, for example, than being a voter is for a black person. If the attitudes behind a word are what matters, rather than the history of its use, then perhaps ‘thug’ and ‘bossy’ ought to be treated with the same social stigma that is currently accorded to derogatory terms with deeper historical connections.
What do you think? Are words like ‘thug’ and ‘bossy’ comparable to other negative words that have been directed at particular groups? If so, what should we think about this? If not, how are they different?
Tags: Applied Ethics · Political and Legal Philosophy · Social Ethics
February 26th, 2014 by Elijah Weber · 4 Comments
Recently, the NFL has proposed a rule change that would impose a 15-yard penalty for using the N-word while on the field of play. This proposal has sparked a good deal of debate on a variety of issues, ranging from how such a rule would be enforced to whether this word is, in fact, offensive in all cases.
One thing seems pretty clear–there’s a real debate here, particularly about whether it’s acceptable for African-American people to use this word toward each other, and whether the allegedly affectionate use of the term ought to be treated as similar in kind to the racial slur.
But there’s another issue here–the NFL has a team called the Redskins, which many people regard as a racial slur against Native American people. Elsewhere, I’ve discussed whether it’s ever acceptable to use the name of a particular group, whether derogatory or not, as a mascot for a sports team. The proposed rule change banning the N-word has reinvigorated this debate, and cast it in a slightly different light.
One might think that prohibiting the use of the N-word, while simultaneously allowing one of its franchises to be named the Redskins, is a clear case of hypocrisy on the part of the NFL.
But what do we mean when we say such a position is hypocritical? In part, we mean that the subject of such a charge is endorsing an inconsistent position, advocating for two contradictory conclusions about the same sort of situation. But hypocrisy is also a moral criticism. To say that one is hypocritical is to accuse them of advocating for a moral standard that they don’t actually endorse. Hypocrites aren’t merely inconsistent–they are also disingenuous.
Is it hypocritical of the NFL to prohibit the N-word while allowing a team to be named the Redskins? If ‘Redskin’ is on par with the N-word in terms of being an offensive racial slur, it surely seems accurate to charge the NFL with inconsistency in prohibiting one while allowing the other.
But whether the NFL is hypocritical in the sense of failing to act in accordance with their own moral standards depends on why they advocate for prohibition of the N-word, while allowing a team to be named the Redskins. In the former case, the argument is grounded in the value of respect. As John Wooten, head of the Fritz Pollard Alliance that started the push for this rule change, put it, “there is too much disrespect in the game.” Banning the use of the N-word, it seems, is based on a commitment to the value of respect for other persons.
Whether the NFL is guilty of hypocrisy, then, turns on whether the existence of a team named the Redskins is similarly disrespectful, whether in general or to Native Americans specifically. But is this name disrespectful? This question is more difficult than one might suppose. Typically, disrespect requires that individuals hold certain attitudes toward others, specifically attitudes of disregard or lack of concern for their interests. It’s pretty easy to see how using the word ‘redskin’ to refer to or describe a Native American person would be disrespectful.
But it’s less clear whether naming a team the Redskins is similarly disrespectful. Proponents of the name have claimed the opposite, that the name is a term of honor and respect. Opponents, primarily Native American groups, have disagreed, claiming that the name is a racial slur, the use of which demonstrates an insensitivity to the historical plight of Native American peoples.
If the opponents are right, then naming a team the Redskins involves the lack of regard for others that is characteristic of disrespect. And if that’s the case, then the NFL is indeed guilty of hypocrisy, because allowing a team to be named the Redskins is disrespectful in precisely the way they claim to oppose in prohibiting the use of the N-word.
It seems likely that the NFL is, indeed, guilty of hypocrisy. If prohibiting the use of the N-word is based on a commitment to the value of respect for others, then allowing a team to be named the Redskins is similarly disrespectful. To prohibit one while tolerating the other is, by definition, hypocritical. And the NFL should be embarrassed by its very public inability to recognize that this is the case.
What do you think about this situation? Is the name “Redskins” disrespectful? If not, what’s the difference between the R-word and the N-word?
Tags: Applied Ethics · Social Ethics
February 14th, 2014 by Elijah Weber · No Comments
There has been a media uproar over a Danish zoo’s recent decision to euthanize a healthy giraffe and subsequently feed the carcass to the zoo’s lions. The decision to euthanize Marius was based on concerns about inbreeding. Marius’s genes are already well-represented in the giraffe population of European zoos, so it was determined that, in order to prevent inbreeding between Marius and one of his close genetic relatives, Marius should be euthanized.
Objections to this decision have been largely what one would expect–Marius could have gone to live in another zoo, or been sold to one of the private individuals who offered to purchase him. In general, it’s not clear why euthanasia was seen as the best way to avoid inbreeding, given the wealth of viable alternatives.
An animal-rights advocate might put their opposition to this decision in the following way:
1. All animals have a right to life.
2. A right to life implies a right to not be killed unless there are sufficiently weighty reasons for killing.
3. Prevention of inbreeding is not a sufficiently weighty reason to kill.
4. Therefore, killing an animal to prevent inbreeding is a violation of the animal’s right to life.
5. Therefore, killing Marius the giraffe violated his right to life.
One might, of course, quibble with some of the premises here. Perhaps you think that only wild animals have a right to life, or you take issue with the notion that non-human animals have rights. Still, the basic complaint holds. Given the options, it’s not clear that the best way to prevent inbreeding was to euthanize Marius, and the director of this program hasn’t given much of an explanation of why this was thought to be the case.
What’s more troubling about this decision, in my view, is that it reflects a problematic ideology in the sciences, one that the administrators of this breeding program ought to have moved past by now. This ideology says that science is a “value-free” enterprise, outside the realm of ethics and values, and instead concerned only with the ends of empirical discovery and effective technological development.
This ideology should be a relic of a distant past, when non-human animals were viewed as living machines, and the notion that there might be ethical constraints on what can be done to an animal wasn’t even a thought in the minds of most laboratory scientists.
Unfortunately, Marius’s tragic story tells us that this isn’t the case. His fate was ultimately decided by one factor: what would be best for the breeding program. There was little concern for Marius’s well-being, the loss of a valuable future that he might have enjoyed, or the possibility that euthanizing Marius might violate his rights.
Despite our best efforts, some scientists clearly still have a long way to go in recognizing the ethical dimensions of their work. On the bright side, the massive public outcry over Marius’s death suggests that the public is prepared to hold members of the scientific community who act on this ideology accountable for the consequences of their decisions. Perhaps Marius, though his death was tragic and avoidable, will be among the last of scientific ideology’s victims.
Tags: Applied Ethics · Social Ethics
January 25th, 2014 by Elijah Weber · 2 Comments
The right to freedom of speech has been invoked a lot lately. When Phil Robertson, the patriarch of the family that is the focus of the hit television show “Duck Dynasty,” made comments that appeared racist and homophobic, many people came to his defense by arguing that his suspension by A&E was a violation of his right to free speech. The former Minnesota Vikings punter Chris Kluwe was similarly defended after being released by the team, which he claims happened because he is an outspoken advocate of same-sex marriage. Freedom of speech even came up in a re-run of Seventh Heaven I was watching (that’s right, Seventh Heaven. It’s a good show!), when Simon defended his friend’s right to call his sister Lucy a “bitch,” on the basis of a right to freedom of speech.
Freedom of speech is surely an important right, and there are good reasons why speech enjoys the legal protections it currently has. The Founding Fathers wanted to ensure that people were able to openly criticize the actions of their government, without fear of retribution or censorship. The ideals of democracy seem to demand a strong right to freedom of speech.
But here’s the thing–the right to freedom of speech isn’t a protection against all the possible consequences of our speech acts. Rights impose obligations on others, and in this case, the right to freedom of speech obligates governments and their agents to refrain from interfering with nearly all the speech acts we might engage in. Freedom of speech is what allows us to stand in front of the White House and openly protest Obamacare, or demand that Congress impose stronger gun laws from outside the doors of the Senate.
But the right to freedom of speech doesn’t obligate employers to tolerate anything their employees might say, nor does it obligate individuals to allow others to speak to them in any way they wish. A&E and the Minnesota Vikings didn’t violate anyone’s rights by suspending Phil Robertson or releasing Chris Kluwe. Lucy’s mom didn’t violate anyone’s rights when she insisted that Simon’s friend apologize.
The right to free speech importantly protects individuals from certain kinds of governmental interference, but it doesn’t protect them from the other possible consequences of what they say. And if we really think about it, this is a very good thing. We don’t want employers to be required to tolerate racist or homophobic public statements by their employees. We also want employers to be able to discourage participation in certain kinds of activism while on the job, even if we think they are defending virtuous causes. And we certainly don’t want young women to be made to tolerate misogynistic language from their male admirers.
Freedom of speech allows for open criticism of our governmental institutions. It doesn’t protect individuals from the social consequences of their racist, homophobic, sexist, or tactless remarks. And this is a very good thing.
Tags: Applied Ethics · Political and Legal Philosophy · Social Ethics
December 6th, 2013 by Elijah Weber · No Comments
Now that the Affordable Care Act is being put into practice, it’s become common for critics of the new law to claim that President Obama “lied” about various aspects of this program. Sometimes, this claim takes the following form:
1. Obama said that the Affordable Care Act would make health care less expensive.
2. My health insurance is more expensive now that the Affordable Care Act has taken effect.
3. Therefore, Obama lied about the Affordable Care Act making health care less expensive.
It remains to be seen whether the Affordable Care Act will ultimately make health care less expensive. To determine this, we’d need to compare the cost of health care under the Act to what health care would have cost if the Act had not been passed. It’s probably a bit too early to say how that comparison will turn out.
What many people seem to be doing when they draw this sort of conclusion is conflating the claim that Obamacare will make health care less expensive for everyone, taken collectively, with the claim that it will do so for them in particular. But the fact that a course of action affects everyone in a particular way, taken collectively, does not imply that it affects each individual in that way. This criticism is based on an invalid inference.
Consider the following example. Suppose five friends are hoping to purchase tickets to a concert. They meet at the ticket retailer, and find a large line of people waiting for it to open. If they get in line now, it appears that they will be able to get the tickets, and the wait will be approximately two hours. They debate getting in line all together, or leaving just one person behind while the rest of the group goes out for a nice dinner. Ultimately, they are convinced to leave one person behind when one member of the group states that “it would be better for everyone if just one person stays.”
Did this person lie when they made this statement? No. The best plan for all involved is clearly for one person to stay behind, while the rest of the group goes out and enjoys themselves. Two hours is not a huge sacrifice for the person who waits, and that person’s discomfort is clearly outweighed by the enjoyment of the other friends. This plan really is better for everyone, even if one person affected by it is made worse off. It would be a clear mistake for the person left waiting to claim that the statement that convinced them to adopt this plan was a lie.
Complaints that because one’s own insurance costs increased, President Obama therefore lied about the way the Affordable Care Act would affect health care costs are similar to this sort of example. The fact that some people’s health insurance costs increased after the Affordable Care Act was instituted does not imply that the Act did not make health care less expensive. Whether that claim is true remains to be seen. In the meantime, the fact that your own health insurance is more expensive does not make President Obama a liar. Even if Obamacare has made health care more expensive for you, it may still make health care less expensive for everyone.
Tags: Applied Ethics · Medical Ethics · Social Ethics
November 11th, 2013 by Elijah Weber · No Comments
Over at Leiter Reports, a popular philosophy blog, guest blogger Thomas Nadelhoffer has raised an interesting argument about where the burden of proof lies in the debate over whether it’s morally permissible to eat meat. As he rightly points out, this debate is usually framed so that the burden of proof lies with advocates of vegetarianism. Meat-eating is treated as the default position, and it is up to its detractors to explain why this view is mistaken.
Nadelhoffer asks whether anyone can offer a justification for the practice of eating meat, apart from criticizing the arguments against it. As he again rightly notes, many justifications of this practice are based on false empirical claims or biased analyses. Although I’m torn about this issue myself, here’s my own attempt at the sort of justification Nadelhoffer is asking for:
1. The practice of eating meat gives pleasure to those who participate in it.
2. We ought to allow pleasure-giving practices, unless doing so either leads to levels of suffering that outweigh the pleasure these practices provide, or involves allowing serious injustices.
3. The practice of meat-eating does not lead to levels of suffering that outweigh the pleasures of meat-eating.
4. The practice of meat-eating does not involve serious injustices.
5. Therefore, we ought to allow the practice of meat-eating.
It seems to me that 1 is clearly true. 2 could probably be formulated more precisely, but I think it captures the sense in which we think pleasurable practices are generally allowable, so long as they don’t have certain morally problematic features. I don’t know what to say about 4, since I don’t really know how to apply the concept of justice to something like a cow or pig.
As for 3, while it’s most likely true that factory-farming leads to suffering that outweighs the pleasures of meat-eating, it doesn’t follow that meat-eating itself does so. It’s merely a contingent empirical fact that most meat-eating that brings about pleasure for the meat-eater involves factory-farmed meat. It seems plausible to me that we can condemn factory-farming, while acknowledging that agricultural animals can be raised in a way that isn’t overly unpleasant for them. In such cases, I think we can plausibly claim that the pleasure of meat-eating outweighs the suffering of the agricultural animal.
What do others think of this argument? If you are a meat-eater, does this capture why you think meat-eating is acceptable? For the vegetarians (or the philosophers), where has my justification gone wrong?
Tags: Applied Ethics · Uncategorized
September 6th, 2013 by Elijah Weber · 2 Comments
Recently, President Obama gave a speech that outlined his reasons for concluding that the U.S. ought to pursue a military response to Syria’s use of chemical weapons. You can find a helpful summary of these arguments here. One of those reasons was that a failure to do so would be detrimental to our national security. That argument can be helpfully formalized in the following way:
- We ought to act in ways that protect our national security.
- Responding to the use of chemical weapons by other nations with military force protects our national security.
- Therefore, we ought to respond to the use of chemical weapons by other nations with military force.
- Syria used chemical weapons.
- Therefore, we ought to respond to Syria with military force.
The first premise seems obviously true, and at a minimum it makes a claim that most people would likely accept. The fourth premise, though currently being disputed by Russia, China, and a handful of Internet wackos, is most likely also true, or at least not up for much dispute. So this argument depends on the second premise, that responding to the use of chemical weapons protects our national security. But is that really true?
Here are some reasons to think it is true. By not responding to a practice that is forbidden by international law, we might send the message that we aren’t terribly committed to the ideals of just war that we have publicly endorsed. This might lead to an increase in the use of chemical weapons, which might make us more vulnerable to these sorts of attacks here at home.
Additionally, and President Obama mentions this in his speech, a failure to respond to Syrian use of chemical weapons might make our allies in the region more vulnerable, which could also have an impact on our national security.
Clearly, there’s some basis for the claim that reasons of national security support some sort of response to Syria’s use of chemical weapons. But must this response be a military one? Are there perhaps other reasons that outweigh these sorts of considerations? Might attacking Syria actually be a greater threat to our national security than not responding?
Before we can fully evaluate this argument, these questions need to be answered. Since that likely won’t happen in the public debate itself, leave a comment and share what you think about it.
Tags: Applied Ethics · Political and Legal Philosophy
August 23rd, 2013 by Elijah Weber · No Comments
Although this is primarily a blog about ethics, I occasionally try to offer some friendly advice to people who are thinking about attending graduate school to study philosophy, actively attempting to do so, or actually doing so. As another academic year approaches, I find myself thinking about my own first year of grad school. I also find myself thinking about some of the things I’ve done, which many of my peers did not, that have put me in a position to complete my degree in a reasonable amount of time, with some hope of obtaining a tenure-track job in a very competitive academic job market.
With that in mind, I want to offer three bits of advice to those of you who are just beginning the very long journey from first-year graduate student to PhD. As it turns out, many of the habits you cultivate early on will determine your long-term success in this field. Here are three things I’ve learned along the way, and that I think are essential to maximizing your graduate school experience.
1. Treat graduate school like a job.
Many of my peers have utterly failed at this, and they now find themselves out of funding, with very little to show for the massive amount of time that they have been graduate students. Being a graduate student in philosophy is not like being an undergraduate major. Figure out what sort of “workday” works best for you, and set your schedule accordingly. If noon to eight PM is best for you, that’s your workday. Come as close as possible to a forty-hour workweek, and anticipate needing to work “overtime” quite a bit. This is especially important once you complete your coursework, since you won’t have weekly meetings to keep you on task.
2. Develop yourself as a professional
It’s never too early to start thinking about presenting papers and submitting publications, especially if you are not going to be receiving your PhD from a top-15 philosophy program. You’ve got to do something to distinguish yourself on the job market, and one way to do that is to have a few impressive conferences and a publication or two. Many people wait until their last year of graduate school to start worrying about this, and that’s really much too late. Submitting to and presenting at conferences is time-consuming, and you’re going to be rejected a lot more than you’ll be accepted. You can’t simply rack up a bunch of impressive CV lines at the last minute, and this is doubly true for publications.
Each semester, plan to have at least one paper that is good enough to submit to conferences. It’s helpful if it’s in an area where you hope to focus your later research, but if you’re not yet sure what that area will be, just focus on submitting something that’s good. Submit frequently, and expect to receive a lot of rejections. Start small, with less impressive grad student conferences. These conferences have higher acceptance rates, and are usually very low-stress when it comes to presenting. Work your way up gradually to bigger grad conferences and smaller professional conferences. This way, by the time you get to your final year of graduate school, you’re ready to submit and present at large professional conferences, like the APA annual meetings. Doing nothing for four years, then suddenly trying to attend every conference you possibly can, doesn’t make you look professional; it makes you look desperate and disorganized.
3. Ignore what your least successful peers are doing (no matter how brilliant everyone thinks they are)
Every philosophy department has a substantial collection of graduate students who, for various reasons, are not progressing in their program. Usually, the problem is quite obvious–they don’t do much work, they focus on socializing more than producing good philosophy, or they simply aren’t all that bright. Success in academic philosophy requires the right mixture of talent and work ethic. Many people have the talent, but they don’t do the work. Others simply fail to appreciate that graduate school is career training–you are learning how to be a professional philosopher.
Philosophy has an odd mystique about it. People seem to think that you should do it because you love it, without worrying about making a career of it. Real philosophers, they seem to be saying, don’t worry about professional development or publications; they just want to uncover the truth for its own sake. But these things aren’t mutually exclusive.
Developing as a professional doesn’t mean betraying what you love, what’s really great about studying philosophy. Philosophy is a lifestyle, a way of being in the world. But being an academic philosopher is a job. It’s all too common for talented graduate students to become so wrapped up in the lifestyle that they completely neglect the requirements of their future profession. These same grad students, for all their talent and brilliance, often find themselves out of funding, short on options, and thus relegated to the de facto slavery of adjunct work.
Those talented grad students, who you will no doubt meet on one of your first days in your new department (because these folks show up to everything–hey, it’s better than doing their actual work) will try to sway you into all manner of distraction. Do your best to ignore them, and focus on copying the small number of grad students who seem to be completing their work on time, receiving professional accolades, and developing into real academic philosophers. These won’t be the “cool” kids in the department. In fact, you’ll rarely see or hear about them, because they don’t go out with the new grad students, or attend “meet and greet” events. They’re too busy becoming what the talented slackers of your department can only pretend to be. Follow their lead, and ignore the “cool” kids. In a few more years, they won’t seem all that cool.
Tags: Careers in Philosophy
August 2nd, 2013 by Elijah Weber · No Comments
A video of Philadelphia Eagles wide receiver Riley Cooper using an extremely offensive racial slur has recently gone viral on YouTube. You can see Cooper’s appalling behavior here. If you haven’t seen it, or prefer not to witness such things, I can sum it up very simply. Cooper used the “N” word, and not in a manner that indicated he was greeting or referring to a friend of his. Cooper used this word in an aggressive, threatening manner that represents all that is vile and putrid about it.
I tend to focus a lot on ethical issues that arise within the context of professional sports. I do this for many reasons. Sports is something that many of us enjoy, and we pay attention to it. Often, many of us know more about what’s happening in the world of sports, or the lives of celebrities, than we know about the “real world,” for lack of a better term. Sports unifies diverse groups of people. It also serves as a microcosm for the sorts of issues that come up in our own everyday lives.
The Riley Cooper incident isn’t terribly interesting in its own right. Cooper used obviously reprehensible language, and now he’ll pay the consequences. The only real debate concerns how serious those consequences will be.
But perhaps that’s not the only debate that this case calls to mind. In recent months, I’ve asked questions about the phenomenon of being offended. What happens when we are offended? What makes some situations offensive, but not others? What sorts of features bear on whether a particular state of affairs is offensive?
It’s this last question that Riley Cooper’s case can help us think more about. Riley Cooper is white. It’s possible that had the same words been uttered by a black person, the level of moral outrage would have been the same. But I doubt it. It seems that at least part of what’s offensive about what Riley Cooper said is that it was a white man who said it.
And yet, that can’t be the case if we agree with Jason Whitlock’s description of the Riley Cooper video. In commenting on this incident, Whitlock also shared the video, but he prefaced it with this warning: “Language is universally considered to be offensive.”
But if that’s true, then it shouldn’t matter that Riley Cooper is white. If the language is universally offensive, it shouldn’t matter who utters it. And yet, it seems that it clearly does matter that Cooper is white. Perhaps the language is universally offensive, as Whitlock suggests. But perhaps what makes Cooper’s words offensive is not merely what he said, but who it was that uttered them.
What do others think about this case? Are Cooper’s words universally offensive, as Jason Whitlock claims, or do his words only become offensive when they are uttered by a particular kind of person?
Share your thoughts by providing a comment, or tell a friend about this post.
Tags: Applied Ethics · Philosophy of the Emotions · Uncategorized
July 26th, 2013 by Elijah Weber · No Comments
Recently, former NL MVP Ryan Braun was suspended for the remainder of the baseball season, on the basis of allegations that he used performance-enhancing drugs. Seemingly, Braun’s acceptance of this punishment amounts to an admission of guilt. And yet, Braun is being widely chastised for his behavior. Why has the backlash been so venomous, especially in comparison to other athletes who have been punished for PED use?
There is a lot to discuss about this case, and I won’t try to identify every interesting ethical aspect of this issue here. Instead, I want to focus on something important that I think we can learn from the way people are responding to Braun’s situation.
Most of us agree that cheating is morally wrong, and we probably think lying is morally wrong too. Braun is guilty on both counts. But lots of other athletes have cheated, and many of them lied about it, at least until they were caught in some way that made their crime undeniable.
But Braun is guilty of something more. Last year, Braun tested positive for elevated testosterone levels. He fought being suspended, successfully, on the grounds that his sample had not been handled properly. Braun went so far as to suggest that his sample might have been tampered with. The courier who handled the sample was fired. The arbitrator who ruled in Braun’s favor was also fired. At least two lives were destroyed in the aftermath of Braun’s positive test.
Braun is guilty of even more than causing undue harm to others. In responding to his positive test, Braun issued a very emotional, very convincing statement of innocence. He swore, on all that matters to him, that he had not done this, whether intentionally or unintentionally. He spoke of taking the “high ground,” of an impeccable character that effectively ruled out the possibility of his guilt. Braun was so convincing that his close friend, Aaron Rodgers, wagered an entire year’s salary that Braun was telling the truth.
Every bit of that self-righteous, grandstanding statement was a lie. And that’s why Ryan Braun is not like other athletes who have lied about PED use. Braun didn’t just lie, or cheat. He deceived us into believing that he was an exemplary person, whose unimpeachable character simply ruled out the possibility of unfair play.
Braun made himself out to be a saint, a martyr, and a victim. And we believed him so completely that we began to wonder about our own character, and whether we might have done something terribly wrong here.
This is what we can learn from Ryan Braun. We don’t like liars. We don’t like cheaters. But what we dislike most of all, what seems far worse than the cheater who lies, is the cheater whose lies are self-righteous, whose professed innocence is shrouded in assertions of their exemplary moral character.
It’s wrong to cheat. It’s wrong to lie. But it’s far worse to do so under the guise of being a good person. Ryan Braun is not a good person. And that’s why his crimes seem so much worse.
Tags: Applied Ethics · Environmental Ethics · Ethics and Sports · Uncategorized