Why Religious Texts/Authorities ARE to Blame for Violence

•2009/11/09 • 1 Comment

Thanks to Nidal Hasan, the national echo-chamber is yet again clumsily tackling the perennial question of whether this or that religion, or this or that religious text, actually justifies violence and evil. We’ve got everything from people pre-bracing for the inevitable anti-Muslim backlash to a Forbes columnist angling to turn “Going Muslim” into the next “Going Postal.” Oy.

Per my policy of not trying to analyze events I can’t do anything about until there’s, like, some actual information to work with, I don’t have much to add on the subject of Hasan’s motivations (he’s talking, so we may find out more soon) and their broader implications. I do, however, find the general question of ideological culpability fascinating.

Thoughtful religious people object, I think correctly, to the accusation that any major religion directly supports or legitimates violence, let alone indiscriminate spree killings. They point out that religious texts are complicated: that people can often read very different meanings out of them. That understanding the “true” message of thousand-year-old faiths takes more than just scattered pullquotes. That holding people who read a text in a non-literal sense responsible for the interpretations of fanatical literalists is unfair.

And I actually agree with these sorts of defenses. But I still don’t think that lets religious texts, or those that imbue them with special authority, off the hook.

Because the problem I have is less with the specific content of any given holy book or papal bull than with the general idea (one sometimes shared and supported by even very progressive religions) of promoting the primacy of any central religious text or authority in the first place.

What’s problematic is the idea that HERE is wisdom: that THIS one source contains special and uniquely authoritative insights into morality and truth. That these stories, sutras, stanzas, gospels: that they alone are the ones you need to study and understand in order to figure out what is right and good and important in life (if you’re only going to see one movie this lifetime: make it the Passion of the Christ!). It’s that attitude itself, in nearly any form, which is dangerous and objectionable.

Because of course people can and will read out what they want from these texts. That’s true of nearly any source of information. The key difference with a religious text, however, is that indeed people read out what they want… but then further feel instinctively justified in thinking that the entire moral weight of the UNIVERSE stands behind whatever they decide the text is telling them to think or do.

That idea is and should be regarded with deep discomfort by any fan of modern democratic/liberal ideals. The core principle of a healthy civil society is that there IS no final authority, never any place where anyone can stop and say “well, I’ve got all this figured out now, no need to ask any further questions, or keep consulting my conscience, or keep talking it out with other people.” Likewise, there is and should never be any one book or selection of insights on human existence that is regarded as the final word on anything, especially when it comes to troubling moral quandaries.

I would never claim that religious conviction is solely to blame for any atrocity, but the habits of absolute conviction can certainly help grease the wheels.

I highly encourage people to read the great religious texts: all of them (they’re fascinating, troubling, invigorating, and often indispensable to anyone seeking greater understanding of human thought and history). And while not a believer myself, I’m not against religious belief per se. Nor am I against people taking strong moral stands. I just don’t think anyone should delve into any text while encouraged to believe that it’s the last or most important thing they’ll ever read. Or, worst and most dangerous of all: that once they appreciate THIS one text, all other voices, from other works, from other minds, from other perspectives, from even their own conscience… can henceforth be ignored. Or silenced.


Andrew Sullivan vs Andrew Sullivan: Torture vs. Theodicy

•2009/09/23 • Leave a Comment

Blogger Andrew Sullivan finds torture to be ethically abhorrent, even as a means to a supposedly greater end. His arguments against it are passionate and compelling. That’s laudable.

But this laudably resolute stance is precisely why his occasional forays into theodicy (the debate over whether all the suffering and misery in the world is consistent with a perfectly good and powerful God) are so baffling. Because, really, he seems to be coming down on different sides of the same argument, depending on whether the chief architect of the suffering is named George Bush or Jesus.

Now, there are probably some very good reasons why one wouldn’t want to trust the moral instincts of the mundane and flawed George Bush over those of a hypothetically perfect and omniscient savior. Unfortunately, Sullivan has never premised his arguments against torture on the idea that the person orchestrating the suffering would be insufficiently wise or trustworthy.

Indeed, when it comes to torture, it’s never been good enough for Sullivan when people argue “well, there are pressing reasons of National Security that you may not understand as to why this was necessary.” Sullivan believes that there are no known reasons that can justify the use of torture: it’s a means that’s just too socially corrosive and unethical to be worth nearly any end. And if you really claim otherwise… if you have some as-yet-unappreciated moral innovation: well then you’d better spit it out. Until then, we should consider torture monstrous.

And yet, when it comes to the much greater suffering of, say, earthquake or tsunami victims, Sullivan seems perfectly happy to throw up his hands and declare the impossibly cruel (and note: designed to be cruel) workings of the natural world all to be a grand, possibly even wondrous, mystery. At times he even comes close to implying that it somehow maybe enriches us all as humans to have a natural world that wreaks such havoc so indiscriminately.

This argument is, however, clearly wrong. Even if a base level of pain and suffering can be said to serve some important purpose in our lives, the actual amount of suffering regularly and clearly exceeds that level, with no appreciable benefit to anyone.

One way we can be sure of this is simply because levels of suffering are vastly different life to life, era to era. The majority of people in, say, medieval France once lived lives characterized by truly awful suffering. But nowadays the majority are reasonably comfortable. Certainly there are still tragedies in the lives of modern-day Frenchmen, still plenty of pain. But huge quantities of suffering have been eliminated from the lives of an entire society with no appreciable detrimental effect on anyone’s soul. On the contrary: far more people now live long enough, and are educated well enough, to contemplate and explore and appreciate deeper meanings instead of spending their aching days starving and diseased. Most people alive today can’t even imagine, much less appreciate, how much better off they are.

Some of these advances in misery reduction were, of course, due to improvements in human society and political culture. But some were also due simply to improvements in technology: i.e. the natural world was altered to make it much less hellish on human beings than it was in its original “designed” state. So why was the previous status quo necessary in the first place? Why not larger crop yields and less disease from the start, if we can live deep and meaningful lives just as easily without, say, polio?

The fact of the matter is that we did not (and perhaps one day, thanks to human ingenuity, will not) have to live in a world where earthquakes are as common as they are, or have such devastating consequences for so many. But we do. And if an actual thinking, feeling someone specifically chose this exact state of affairs (not simply a world containing some suffering, but a world with a very specific and clearly excessive amount), then that being would have some serious ethical explaining to do.

All of this seems to be an exceedingly clear and unavoidable moral conclusion, and I just don’t feel enriched or transported (as Sullivan claims to be) by denying or moderating my stance on it. Making moral excuses is bad. And doing so when we don’t even know how the behavior could possibly be excused or redeemed, even in theory, is deeply compromising and degrading.

Andrew Sullivan has, many times, attacked people for being too chicken to call torture torture, and too deeply in moral denial to call it evil.

Why does he have such a hard time applying this ethical standard to the physical world?

So I Got Me These Shoes…

•2009/09/21 • 1 Comment

So, I do this thing: I read great books, and I get so worked up that I have to go out and engage in whatever they’re obsessed with (thank goodness that I wasn’t fully on this kick when I read Carl Zimmer’s “Parasite Rex”). But given that I recently finished “Born to Run,” by Christopher McDougall, I decided on a new birthday gift to myself: some Vibram Five Fingers, the KSO (Keep Stuff Out) edition.


In many ways, Vibrams are essentially the running shoes you buy to rid yourself of shoes forever: their sole purpose is to prevent puncture damage to your feet… and nothing more.

And it’s the “nothing more” that makes all the difference. The theory here is that all the things that traditional running shoes try to do for you are, in the end, a disaster. All that cushioning, the extra protection, the soothing escape from the full impact of foot on ground: all of that is a denial of the human foot’s true potential, its ability to sense the terrain and adjust your gait accordingly. Sure: you deaden the immediate pain of impact, but this simply comes at the expense of causing the long-term knee and foot damage that comes from running with a heavy heel-strike (counter-intuitively, people who wear expensive and well-cushioned running shoes tend to cause MORE impact damage to their knees and legs than they would if they ran in flats: our natural inclination is always to push harder than we actually need to in order to seek out a hard and stable surface, thus negating the whole point of shoe cushioning).

Since buying my personal pair, I’ve taken my Vibrams (well, taken my bare feet, really) out on 4 miles of running and a few hours of trail hiking. And I’m already sold on these things. Running in them changed my gait almost instantly: from a pounding heel-centric affair to a light, wicked little step that feels almost catlike. You can articulate and bend your foot the way it’s meant to move, rolling through a step instead of simply pivoting: the very way that solid running shoes try their hardest to prevent. And your toes start to work like little radar antennae again, flexing and creeping around everything you encounter and then pushing off again as you pass it by.

And the hiking… hell, just the walking around, is fantastic. Because Vibrams are essentially built purely to protect your feet from damage, not to trap and cushion them, I can feel nearly every surface I step on in exquisite detail: every pebble, every tree root, every crevice. And that’s a good thing, I’m finding. A really, really good thing. It’s at once both like a constant foot massage and a flood of new and highly detailed information that I’d never even imagined I was missing out on before. When you’re basically just walking around, your feet are really the only real tactile sense you have actively working for you. Wearing heavily cushioned running shoes ends up being nothing short of putting a blindfold over your eyes.

Of course, “barefooting” isn’t just something you can take up after a lifetime of running around in cushy luxury. It takes time to relearn your reflexes, to bulk up the many atrophied muscles in your feet, to get back the “fear” that allows your gait to go light and quick, instead of brutally hard and careless.

But honestly, this process of re-acclimation has been, to be blunt but family-friendly, downright pleasurable.

I recently described the feeling as like having my legs float around on a “delirious cloud of numby butterflies.” And that’s pretty much the best I can come up with. While the feeling is pretty much what you’d expect from working any long-neglected muscle and having it end up flat-out sore, there’s something special about the super-sensitive feet. Our feet are uniquely packed with long-neglected nerve endings: they’re simply crying out for some attention. So the stress of barefoot running/hiking is almost as much a sense of relief as anything else.

I’m gushing, sorry. I haven’t had my Vibrams long enough to really review them as products, per se. They’ve worked so far as advertised: allowing me to plod around essentially barefoot but without the fear that a nail or spiky rock is going to shoot through my foot without notice. For me, the real test is their durability: how long the traction can last, how long they can go without succumbing to foot sweat and bacteria stank. That all remains to be seen.

But for now, I’m wandering around feeling liberated and wondrous. The shoes look a little silly, I can’t deny that. But I’ve never much minded looking silly, and the sheer pleasure of escaping the smothering embrace of the traditional running shoe is more than worth the stares. Especially when those stares so often quickly turn into questions and discussions on the existential nature of feet.

So, anyway… I got me these new feet.

Letting Go of the Digital Past

•2009/09/18 • Leave a Comment

Technology evangelist Robert Scoble is worried: the past is slowly, and sometimes not so slowly, slipping away from him. Online social media services archive imperfectly. Photos vanish into unreadable or corrupted formats. Logins, and even entire business enterprises, pass away without fanfare. And he’s got tips to help you plug the leaks.

But let me put in a good word for impermanence. Things seeping away, getting lost in time, imperfect legacies: that’s just part of what the past is all about. You lose stuff. That’s how human memory works too: it slowly fades out or becomes distorted. There may even be good psychological reasons for not fighting the process (or at least not becoming too anxious about fighting it), a case for not seeking perfect fidelity. At the very least, the loss of data, the maddeningly just out of reach face or song or quote: these are experiences just as poetically valuable and as interesting to me as some idealized total recall.

Scoble in particular mourns the loss of his baby’s first cry, vanished into the digital ether with the Twittergram service he recorded it on. I can see how that might be frustrating on some level… but on the other hand, I’m not sure that it’s ever really worth too much effort or regret trying to be completist when it comes to memorabilia from your own life. You can share a baby’s cry with the grown man it became, and that might well be interesting in its own right… but it’s not really going to reproduce the moment you, as parents, first heard it.

And memories are often paradoxically more powerful when they’re unaided. Me, I can’t quite remember the face of the first girl I ever really had a crush on. I know pictures exist (though none online), and maybe someday I’ll look back. Or maybe I’ll bump into her again someday. But the face will still just fade away again. And that unformed image will swim at the edges of my memory, tickling all sorts of mysterious and mixed emotions from equally inaccessible past moments. And maybe that’s right where it belongs.

Still, I might nevertheless try to, say, back up my tweets someday. But as witty and utterly unmissable as my ingenious 140-character insights are… having them all face a tragic, accidental deletion someday is far less frightening to me than the (frankly more likely) possibility of saying that one stupid thing that will then be remembered for all time. I mean them to be light and ephemeral.
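(If I ever do get around to it, it wouldn’t take much. Here’s a minimal sketch of the sort of script I have in mind, assuming the tweepy library’s classic OAuth flow; the four credential strings and the output filename are purely hypothetical placeholders, and the standard API only reaches back through your few thousand most recent tweets anyway, which, given everything above, seems fitting.)

```python
# A minimal tweet-backup sketch. Assumptions: the tweepy library's
# classic OAuth flow; the credential strings and output filename
# below are hypothetical placeholders, not real values.
import json

import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

# Page through the authenticated user's timeline, 200 tweets per
# request, keeping the raw JSON of each status.
tweets = [status._json
          for status in tweepy.Cursor(api.user_timeline, count=200).items()]

# Dump everything to one local file: an archive, however imperfect.
with open("tweet_backup.json", "w") as f:
    json.dump(tweets, f, indent=2)
```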

And it’s certainly true that not every brilliant thing I’ve said has yet been fully appreciated and celebrated by the entire world. But on the other hand: they each had their moment, had their chance, and it’s better to move on: there’s so much still to be said.

Plus, a lot of what I’ve enjoyed with Twitter over the past year is how the immediacy of social media can quickly become obscurity when you try to rewind back through it all. You’re either living in the stream of information, quips, and counter-comments, or else the whole enterprise becomes more and more culturally unintelligible: you lose your grip on how the fads, in-jokes, lingo and so on change over time. Every moment spent trying to reflect or relive is time not spent participating in the present.

There’s a deep insight there somewhere. We all really do need to accept that there are people, experiences, information that we’re just going to miss out on or lose touch with. Learning to let these things go without overanxious regret is, I think, central to finding happiness in a world with too much information and far too many choices. Life is finite, but the present is always both infinite and unrepeatable.

So I think I’m willing to accept, maybe even embrace, whatever digital lessons in data loss that serendipitously come my way. I can always make more.

Obama Hate Missile

•2009/09/17 • Leave a Comment

So the Obama administration has decided not to continue building controversial missile installations in Eastern Europe.

I haven’t heard anyone offer any sufficiently coherent or informed discussion of the strategic and political calculus on this decision as a policy move so far… anywhere.

But it IS a story that does contain concepts like “missiles,” “fewer missiles,” and “Obama”… so of course it MUST, without any further analysis or knowledge necessary, prove that Obama is a feckless, evil leader who hates missiles and loves Russia so much he wants to marry it.

Welcome to American political discourse, circa 2009.

Saving Preemies: Why Healthcare-Heroism Can Be Hell

•2009/09/15 • 4 Comments

One of the week’s stable of “government healthcare systems are evil” headline hits on the Drudge Report was this heartbreaker about a mother named Sarah Capewell pleading with UK doctors to save her preemie, born at just 21 weeks.

The doctors, playing right to Drudge’s surface-level script of heartless, conscience-free bureaucrats, refused, allowing the two-hour-old infant to die. The horrified mother is now suing for a change in the country’s medical guidelines, demanding that all premature infants be afforded emergency care, at the very least on maternal request.

But as always, look a little deeper, ruminate a little more, and the story becomes one of those deep ethical dilemmas that’s not at all well-served or represented by a passing partisan summary.

What we have here is a clash between idealistic heroism and the statistical reality of what a given set of “standard” policies will result in. Capewell and her allies, like MP Tony Wright, point to rare cases like that of Amillia Taylor, a preemie who survived after being born at only 21 weeks herself (the doctors in that case misjudged her gestational age, and made efforts they otherwise would not have).

But therein lies the dilemma: they’re pointing to a miracle to define what should be regular practice. It is always possible, as Capewell alleges, that medical science has misjudged this: that technology has improved since the last time anyone checked and a higher percentage of preemies could survive if every last intervention was attempted in every case. That the Taylor case represents not a fluke, but a hidden possibility for improvement. Maybe, but at this point, decidedly unlikely.

Instead, as even the Daily Mail article notes, almost all infants born prior to 23 weeks (the recommended cutoff in the UK) will not survive no matter what is done: heroic intervention in such cases results in prolonged and painful infections, organ failures, and so on: a very brief life, artificially extended (if even that) only so that the infant can suffer a little longer. Even the later-term preemies who do have a chance face the likelihood of severe physical and mental disabilities along with a similarly grim long-term prognosis.

And yet still: some do occasionally defy the routine. Some, for whatever happy set of circumstances, might even make it mostly whole and healthy. There’s always a chance, no matter how slim, that we could be wrong about this one case. Or that some ingenious medical discovery or advance will, in the nick of time, suddenly swoop in and make things better. We can’t know if we don’t try, right? And that’s compelling enough for some. Can’t we make a policy that reflects hope? Affirms life?

But focusing on extremely rare cases and ignoring what that might mean for the conventional ones is deeply problematic as a guide to what’s right. Doctors, who see hundreds of cases in a year, and entire medical systems, which treat hundreds of thousands, have a very different perspective. They can put actual faces to the overwhelmingly larger number of preemies and families that don’t, won’t, and can’t make it: faces that loom just as big as any cherry-picked healthcare headline. They know that clinical-sounding words like “futile care” represent unbelievable amounts of suffering. That sometimes pouring on every last drug or piece of medical technology doesn’t “affirm life” at all: it tortures and mangles it, distills it down to a mere quantity of heartbeats, regardless of the physical or emotional cost to purchase each additional one. And doctors know that there is an often brutal trade-off between making efforts to briefly prolong life at all costs and providing peace and comfort through the inevitable (painkillers like morphine, for instance, can provide comfort, but often at the cost of weakened respiration and shortened survival).

Those are the costs of making it standard practice to hold out for miracles. And there’s just no way anyone should be glib about them. Capewell and every other mother has every right and reason to pray for special intervention in their own case. But we just can’t have the occasional miracles without all the cases in which we’ll expend incredible amounts of time, effort, money, and, most of all, suffering… only to end up with a worse outcome. As far as I can tell, that’s exactly what Capewell and her allies are ultimately calling for: the inevitable price of an expanded policy of unrealistic heroism. Thousands and thousands more mangled, abused preemies, doomed nevertheless to die, oftentimes without measures that could lessen the pain. All in the hope that one or two might survive a little longer.

I’m still not sure it’s an easy ethical call. But take a long, deep breath before calling anyone a monster for coming down on either side of it.

Addendum: Shouldn’t mothers, regardless of anything else, have the final say, even to the point of demanding what is likely futile care? Yes and no. They certainly have the final say, outside of the realm of criminal child abuse, over what is done to their children. And it would be hard to deny anyone their emotional need to never give up.

But just because they can demand doesn’t mean that other people are required to provide. Neither doctors nor medical systems are legally or ethically obligated to provide care that they believe to be futile, especially when doing so violates their own consciences. This is, again, just one of those sticky questions where someone has to make a hard call about what the standard of care is going to be.

Update:
Reality Rounds makes the same case I have, but with far more immediate authority: they know, in graphic detail, what even post-viability preemies often have to endure. Don’t look away.

I have cared for many infants at the edge of viability. It is always emotionally draining. There is no justice to it. The extreme measures involved to keep a 22-23 week infant alive is staggering, and it is ugly. I once had a patient who had an IV placed on the side of her knee due to such poor IV access. When that IV infiltrated, I gently pulled the catheter out, and her entire skin and musculature surrounding the knee came with it, leaving the patella bone exposed. I have seen micro-preemies lose their entire ear due to scalp vein IV’s. I have watched 500 gram infants suffer from pulmonary hemorrhages, literally drowning in their own blood. I have seen their tiny bellies become severely distended and turn black before my very eyes, as their intestines necrose and die off. I have seen their fontanelles bulge and their vital signs plummet as the ventricles surrounding their brains fill with blood. I have seen their skin fall off. I have seen them become overwhelmingly septic as we pump them with high powered antibiotics that threatened to shut down their kidneys, while fighting the infection. I have seen many more extremely premature infants die painful deaths in the NICU, than live.

No Shelter for the Powerful: Death Penalty, Torture, and Willingham

•2009/08/31 • 3 Comments

Everyone is, and should be, talking about a recent New Yorker article by David Grann documenting the collapse of the case against Cameron Todd Willingham, accused of murdering his own children. Unfortunately, unlike many of the people spared death row after being exonerated by DNA evidence, it’s now far too late for Willingham: he received a lethal injection almost six years ago.

It’s one thing to accept that having the death penalty might risk the occasional execution of an innocent person. But if Willingham is indeed found to be innocent (or at least if his conviction is found to have been premised on phony forensics and false testimony), it won’t simply be a case where a careful system tragically got one wrong and should maybe think about some additional safeguards. It will instead be a case in which outright negligence and sheer disinterest in the cause of justice at nearly every level of the Texas justice system all conspired to bring about a predictable result: the legally endorsed murder of a human being.

In a perfectly just world, the members of Texas’ parole board and the state’s governor himself (Rick Perry) would face charges of, at a bare minimum, criminally negligent homicide. Any private citizen in an analogous situation, given power over life or death and then acting with such sheer indifference to that duty, would face that sort of consequence.

But, of course, we instead live in a world where the powerful and well-connected are now openly expressing outrage at the very idea of ever being subjected to such oversight or remonstration. Oversight, investigation, justice: they claim such things would make them too scared to do what needs to be done, whenever they deem it necessary: torture, murder, and death warrants signed without even a glance at astounding circumstances that demand at least a sincere consideration of clemency.

And it’s high time that we declared such evasions unacceptable. We have a legal system whose central purpose is to make people conscious of the consequences of their actions: to make sure that even when they care nothing for their victims, they at least fear for their own skins.

Texan officials found it extremely easy not to empathize with Willingham and thus ignore their duty to carefully reconsider his guilt. All they needed to do was to imagine, in turn, that he himself had acted without any empathy towards his victims.

But the law exists to condemn both. And if Willingham was in fact innocent, it’s doubly abhorrent that he will be the only one to pay a price.