The Blog

"My Therapist Talks Too Much"

People who say “I feel”

I agree with Geoff Nunberg that the “I feel” clause is not the end of rational discourse, but what he doesn’t say is what bugs me: it’s just inaccurate.  To say “I feel like taxes are too high” is not correct; that’s not a feeling, it’s an opinion, a thought.  It’s like saying “I know in my heart…”  No, you don’t know in your heart; it’s a muscle that pumps blood and doesn’t know anything.  Yes, we all understand the meaning of an “I feel” sentence anyway, but “I think” is more accurate.  

In psychotherapy, I work with people to be more aware of the difference, not in order to be better grammarians, but because people often are not aware of how they feel.  They know what they think, but they confuse thoughts with feelings as though the two were the same.  Feelings are harder to articulate, but it’s often worth the effort of figuring out how to do so, so that we really do know how we feel.  I may “think” taxes are too high because I “feel” insecure about my finances and afraid that I may not be able to support my family in the future.  

While Geoff argues people use it as a qualifier no different from the others we use to indicate an opinion, I do think he lets people off the hook a little too quickly.  People don’t use “I feel” to shut down rational discussion, but they do use it as a way to avoid confrontation.  It sounds softer and less confident than “I think,” which helps people express themselves with less fear of offending others.  In general I don’t have a problem with this; I think being more humble and less strident is a pro-social behavior, but we should be mindful of these subconscious expressions.  Sometimes people won’t take us as seriously because we soften our opinions with “I feel.”  If you want to sound more confident in yourself, use “I think.”  It’s also probably more accurate, anyway. :)

Enough with the spanking already!

It’s time for another little chat about spanking children.  Yet another study has come out about the harmful effects of spanking, but this one is an enormous meta-analysis covering 160,000 child subjects and decades of research.  A meta-analysis is simply a study that collects the results of many studies and analyzes them together to get a sense of what the “big picture” is telling us about a given topic.  In this case, the big picture is that spanking is not good for children!

The press release summarizes the findings succinctly: “The more children are spanked, the more likely they are to defy their parents and to experience increased anti-social behavior, aggression, mental health problems and cognitive difficulties.”

Many parents who spank find this hard to accept, even though it’s not a new finding and psychologists have been teaching this for decades.  Indeed, the study found that about 80% of parents worldwide still approve of spanking!  Why?  My experience is that many parents (even those who do not spank) say things like, “it really taught me not to do that!”  I even heard from a Black mother, “Black children need whoopings more than White kids do.”  The bottom line is, most parents still spank because they think it works.  

At best, spanking results in a brief, short-term reduction in an unwanted behavior.  But research shows us that chronic spanking only results in an increase in problem behavior, in part because children become accustomed to being spanked.  And if parents who spank are really honest with themselves, the primary reason they do it probably has nothing to do with intentional parenting.  There is no well-considered theory of parenting at work here, no attentive monitoring of whether spanking works or does not work over time for their children, no experimenting with other parenting techniques to see if something else works better… no, I think, for the most part, the main reason why parents spank is simply anger.  They are angry at their children, and they hit them just as many adults would like to hit other adults who anger them, but can’t, because it’s illegal or because the other adult might hit them back (or worse).  The justification that the intent is to improve a child’s behavior is often really just a post-hoc rationalization, a way to make it okay to do something that is not okay in any other circumstance, with any other human being who is not one’s own child.  Most parents who spank really do think they are doing the right thing, but generally they are not aware of or willing to use alternative forms of shaping behavior in children, and often are not aware that their own anger is the primary reason why they choose to spank as opposed to using words, redirecting, distracting, reinforcing an alternative behavior, etc.  

Part of the reason why spanking is the default discipline choice for so many parents is that it is so easy.  Spanking is fast, unsophisticated, results in an immediate effect, and requires none of the calmness, patience, creativity, or verbal intelligence that is often needed for more effective parenting behaviors.  That’s the best defense I can give for spanking.  To those adults who think being spanked helped them to behave better, I would argue that it may have, but that improvement may have come at a cost, and the same improvement could almost certainly have been achieved using alternative, non-violent methods.  

I don’t spank my child.  I never have.  That’s not because she is perfectly behaved; in fact, she can do all the things that result in spanking for other children.  If I did spank her, I might even think it “worked,” because for the moment she would stop doing whatever naughty thing she was doing, but as a psychologist I know that spanking carries with it far more risk, and is far less effective, than alternative parenting behaviors.  I also don’t spank because of the Golden Rule: I teach my daughter that hitting is unacceptable, so it would be hypocritical of me to hit her (and all children recognize hypocrisy at some level).  I try to treat her the way I expect her to treat me and everyone else.  To do otherwise sends the message that violence in certain circumstances is acceptable, as well as the message, “do as I say, not as I do.”  Nobody earns respect that way, and I think it is the reason why some parents who themselves were spanked choose not to spank their own children.   

Spanking is like smoking…even if there’s a perceived “benefit” to doing it, there’s always a better way to achieve that benefit, and the benefits never, ever outweigh the harm.  There’s just nothing healthy about it.  Angry parents need to learn to calm themselves, lazy parents need to try harder, and unskilled parents need to learn better parenting skills.  There are many great books out there on parenting, and many mental health professionals who are happy to help develop better ways to discipline.  It’s not okay to spank, slap, hit, pinch, or otherwise physically hurt our children, as much as they may drive us crazy or make us angry.  It’s time to stop.

Poverty and cellphones

Sometimes I hear people say that “poor” people in America are really just spoiled because objectively their standard of living is so high–they have expensive luxuries like TVs, computers, refrigerators, and cellphones; objects that would never be seen in the poorest areas of the world.  “How poor could they be?” is the implied question.

Well, I have a different perspective on this issue after listening to an interview with an author who wrote a book about the largest refugee camp in the world, in Kenya.  This camp has been around for decades, and mostly supports Somalis who fled the poverty and chaos of Somalia only to find poverty and hunger in Kenya.  Funding for food in this camp of half a million people is so limited that despite UN involvement, most are chronically malnourished and some still die of starvation.  Most live in shacks or rotting tents.  There is no plumbing or sewage system.  It is a shocking level of poverty that they cannot escape because they cannot leave the camp.  

And yet, the author notes that in this environment many people still have, of all things, cellphones.  He says that some residents would “go hungry for a week” if it meant access to Facebook.  Why?  Because cellphones mean connection to the outside world, the ability to communicate with family, and the ability to explore the world virtually, since they cannot actually explore anything beyond the camp. 

What this example teaches us is that the psychology of poverty does not simply revolve around basic needs like clothing, food, and shelter.  Even a starving person has priorities beyond the need to eat–the need to connect, to share, to express, to explore are just as important, because these are qualities of being human and not just an animal fighting to survive.  The fact that many poor people in America have such apparent luxuries as cellphones and Internet access is not an indication of wealth or misplaced priorities; it is a statement about human nature: the definition of “basic needs” goes beyond those that keep us alive and extends to those that help us feel human.

Two reasons not to believe the argument that mass shooters will always find a way.

Since the recent Oregon mass shooting, one of the regular arguments against gun control has been brought out again–that even if we adopted stricter gun laws they would not stop a determined criminal.  The theory is that if a person really wants a gun they will find a way to get one.  I’m going to restrict my analysis of this argument to the phenomenon of the “mentally ill mass shooter” and not the hardened criminal or terrorist.  

Of course, without testing the effectiveness of gun controls there is no way to know for sure what the outcome would be.  However, there are at least two reasons to doubt this logic.  First, if this logic is extended, it would predict that in countries where gun laws are stricter, the rate of mass shootings by mentally ill people would still be at least as high as in the US, if not higher (because there would be fewer armed civilians around to deter the shooters–another pro-gun argument).  And yet, this is not the case.  The Wall Street Journal wrote recently that the US ranks #1 in mass shootings in the world, and that countries with the highest rates of gun ownership tended to have the most mass shootings.  Yes, some Nordic countries have higher per capita rates of deaths from mass shootings, but this is largely due to skewed statistics–these countries had only a few mass shootings, but those shootings happened to have higher death counts than those typical of the US. 
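To see how a handful of events can skew a per-capita comparison, here is a minimal sketch of the arithmetic.  The numbers are entirely made up for illustration; they are not the WSJ’s figures or real statistics:

```python
# Hypothetical numbers only: how one high-casualty event in a small
# country can produce a higher per-capita death rate than many
# lower-casualty events in a large country.
def deaths_per_million(total_deaths, population):
    return total_deaths / (population / 1_000_000)

# Small country: a single mass shooting with an unusually high death toll.
small_country = deaths_per_million(total_deaths=77, population=5_000_000)

# Large country: 90 shootings averaging 8 deaths each.
large_country = deaths_per_million(total_deaths=90 * 8, population=320_000_000)

print(round(small_country, 2))  # 15.4 deaths per million, from one event
print(round(large_country, 2))  # 2.25 deaths per million, despite 90 events
```

This is why a count of incidents and a per-capita death rate can tell opposite stories when the number of incidents is very small.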

The other reason has to do with an analogy to suicide.  Many mentally ill mass shooters are suicidal–many of the mass shootings in the US end in the shooter committing suicide.  However, the analogy may still hold true even for those cases where the shooter is not suicidal.  One of the myths of suicide is that “a suicidal person determined to kill himself/herself will always find a way.”  Yet we know from research on suicide prevention that this is NOT accurate.  Yes, a person determined to kill himself or herself can always find a way, and those highly determined people are very hard to keep safe.  However, the vast majority of suicidal people do not commit suicide when the means to accomplish the goal are not readily available. This is partly because the impulse to commit suicide is usually brief, over a matter of hours or a few days, and so if there is not an easy way to do it the impulse usually fades before the person can develop another plan.  

Now, many mass shooters are not impulsive; they have planned their rampage for days, maybe weeks.  Many have gradually stockpiled weapons and ammunition.  So an argument could be made that these people might not be so easily dissuaded as the typical suicidal person.  This is possible.  However, the importance of ready means to the act of committing suicide or homicide should not be underestimated.  The Oregon shooter apparently found it very easy to obtain many guns, as have other mass shooters.  If it were not so easy, if, for example, a mentally ill person wanted to buy some guns but the only avenue to do so was the black market, figuring out how to contact an illegal arms dealer and likely traveling to another part of town to make the deal, then this obstacle, though it may sound small on paper, may well dissuade a person from going through with the plan.  This, of course, is exactly the reason why background check laws exist: putting an obstacle in the path of a person who wants a gun to commit a crime does not necessarily divert the purchase to an illegal source (though this does happen); it may stop the plan on the spot.  Psychologically, people are surprisingly set on the plans they make, and often have a hard time improvising when a plan goes awry.  This tends to be even more true of the mentally ill person, whose intense emotional state tends to lead toward a myopic view of his/her options.  This “tunnel vision” is a well-documented phenomenon; people under intense stress (mentally ill or not) tend to have a hard time looking outside of a single plan for a solution to their problems, and this myopia or rigidity increases in proportion to the severity of the mental illness.  Block a plan, and the person becomes stuck and often can’t think of another path.

I would be remiss if I did not remind the reader that while the typical mass shooter is mentally ill, the typical mentally ill person is not violent.  The vast majority of mentally ill people are more likely to be the victims of violence or to hurt themselves than to hurt other people, so we should not categorically treat the mentally ill like potential mass shooters. 

The bottom line is, making guns less accessible is likely to reduce the number of mass shootings, and just because a mentally ill person wants to kill a lot of people, it does NOT mean they will find a way.  We can’t stop all mass murders even with stricter gun laws, but the number is more likely to be reduced than increased.  

A Patient-Centered Conundrum

Nowadays you hear a lot about the “patient-centered care model” in medicine.  The Institute of Medicine defines it as “Providing care that is respectful of and responsive to individual patient preferences, needs, and values, and ensuring that patient values guide all clinical decisions.”  This model emphasizes educated decision-making guided by the physician, and is a shift away from the patriarchal “doctor knows best” model of traditional medicine.  It is the direction medicine is going.  Interestingly, it is the model that has been taught to psychologists for decades.  We have known for a long time that this basic model helps most patients (not all) feel empowered and often strengthens the doctor-patient relationship.  It is based on mutual respect and joint decision-making.  But you’ll notice that the definition assumes this relationship is between two people, when usually it is a “ménage à trois,” where the third party is the insurance company (and it’s just as sexy as it sounds).

The insurance company is the other decision-maker and often the most powerful voice in the room.  The insurance company often makes decisions about what care you should receive and whether your provider should be paid.  This relationship is not consistent with the patient-centered model.  Insurance companies don’t base their decisions on your values or your wishes; there is often no discussion or dialogue between the insurance company and you or the professional.  They seem to make their decisions capriciously, motivated by a desire to delay payment as long as possible to increase profits, or they base judgments on opaque rulebooks of biblical length that constantly change and that health professionals can’t possibly memorize.  You and your professional, be it a physician or psychologist, may have a great plan on which you both agree, but the insurance company may choose to nix it.  Bizarrely, this sometimes happens after the service was provided, so the professional doesn’t get paid.

As if this weren’t frustrating enough, this unwanted three-way also can put the professional in the awkward position of trying to be both patriarchal and patient-centered.  The enlightened health professional may have a patient-centered approach, and may ask you to help make decisions about your care (for example, how often you would like to meet for psychotherapy), yet the professional may then need to act like a patriarch to satisfy the insurance company.  And the insurer is the alpha-wolf, the final arbiter of what treatment should or shouldn’t be paid for.  The insurer doesn’t want to hear about the patient’s decision, or that the professional deferred to the values or wishes of the patient; the insurer wants to hear that it was the professional’s decision, based on clear medical (always medical, not psychological, by the way) necessity.  In other words, the insurer expects the professional to be the sole expert and decision maker, and has no real interest in what the patient thinks he/she needs.  The reason for this is that the insurer’s job (aside from making massive profits) is ostensibly to cut costs.  Relationships with their customers based on mutual respect and dialogue make it harder for them to say “no, you don’t need that MRI.  There’s a perfectly good radiation-soaking CT machine right there.  No, not there, not where you are–across town–we won’t pay for that one.  And don’t even ask about arranging transportation from us, that’s not our problem.”

The patient-centered model is a great idea, and the Affordable Care Act certainly embraces it.  But there is still this glaring inconsistency where insurers are concerned.  If we’re really going to adopt this model, we have to go all the way.  Insurers need to be regulated such that they are required to strongly consider the wishes of their customers and respect the plans that patients have with their health professionals.  But, alas, we’re not going to see that change, because it might increase costs.  So what we’re left with is this contradictory situation where the practice of medicine and psychology follows one value system, but the payment for those services follows another.  The professional is given the added job of trying to negotiate these opposing value systems, and the patient suffers the disappointment that comes from being led to believe healthcare is patient-centered, when in fact it is not.  

Blood-Injection-Injury Phobia: More Than You Wanted To Know

Even if you’ve never heard of the diagnostic label Blood-Injection-Injury Phobia (let’s call it BII for brevity’s sake), chances are you’re still familiar with the condition.  It is relatively rare, though still considered common by mental-illness standards, occurring in approximately 3.5% of the population by most estimates, and it is characterized by an intense overreaction to seeing blood, injections, or injuries, or even to the anticipation or imagining of them.  This intense overreaction can involve anything from feelings of intense fear or disgust to increased heart rate, a drop in blood pressure, or fainting.  

This profile of symptoms is different from other phobias, and some have even argued that BII is not a true phobia in the classic sense.  Classically, a phobia is an intense, irrational fear of something like airplanes, dogs, closed spaces, or spiders; a person can develop a phobia to just about anything.  Physiologically, when a person experiences a phobia, there is an extreme activation of the sympathetic nervous system—that’s the branch of the nervous system involved in many different automatic reflexes designed to help you in times of danger, which is why it is commonly known as the “fight or flight” system.  Actually, it’s more complicated than that; the sympathetic nervous system kicks in for other reasons as well, but let’s keep it simple for now.  When the evolutionarily primitive sympathetic nervous system, regulated by the spinal cord and brain stem, is dialed on “low,” it mainly just assists in keeping you alert and awake.  When it gets kicked up to “high,” like when you encounter something terrifying or surprising, you’ll notice a variety of physical changes like increased heart rate, increased respiration, muscle tension, sweatiness, feeling hot or cold, pupil dilation, blood vessel constriction, and slowing of the digestive system, just to name a few.  That’s what most people experience when they have a phobia.  But in the case of BII, it doesn’t work the same way.

When a person with BII experiences an onset of symptoms, the reaction is not merely an activation of the sympathetic nervous system, and subjective feelings of fear may not even be present, unlike with a classic phobia.  I can speak from personal experience as I have this condition; when I was very young, maybe 2nd or 3rd grade, my teacher showed the class a photo of a finger with a blackened fingernail from bruising (don’t ask me why), and the result was that within a few seconds I fell over from my chair to the concrete floor, unconscious.  It had never happened before.  I recall a mild feeling of disgust at the picture before passing out (“ewww that’s gross!” is what my friends told me I said right as I fainted), but I definitely did not feel afraid of the picture.  Throughout my life when I have encountered similar situations that made me feel woozy or even pass out, I have never felt intense fear as with a fear of snakes, for example.  I also do not have any classic phobias, no irrational fears, though research suggests people with BII are at higher risk for other phobias.  

Another way in which BII can differ from a classic phobia is that though there may be an initial activation of the sympathetic nervous system, causing a person’s heart rate to go up along with other signs of sympathetic activation, what quickly follows is a rapid and dramatic activation of the parasympathetic nervous system.  Normally, when the parasympathetic nervous system is dialed to “low,” it supports physiological changes that are associated with calmness and relaxation.  When the parasympathetic system is dialed to “high,” as in the case of BII, the result can be that the person becomes dizzy, weak, or even loses consciousness.  The body goes into shock, just as if the person had actually experienced a serious injury.  The normal purpose of this response is likely protective: lowering one’s blood pressure when wounded should reduce bleeding and speed clotting.  And it is considered normal for people to experience a mild degree of discomfort or dizziness at the sight of blood.  But for people with BII, the primitive brain seems to make the mistake of interpreting the sight or anticipation of injury as an actual injury to the body, and then grossly overreacts.  In other words, BII seems to start as an extreme example of what happens normally to people, but then becomes a “phobia” because people become afraid of situations where they think they may lose consciousness.  

There appear to be two mechanisms at work in BII: a psychological one and a physiological one.  The physiological mechanism likely relates to a hypersensitive vagus nerve and/or other parts of the autonomic nervous system.  The vagus nerve is one of the 12 cranial nerves, emerging from the brainstem.  It is responsible, in part, for regulating the parasympathetic nervous system.  Stimulation of the nerve causes an increase in parasympathetic activity, such as a reduction in heart rate and blood pressure.  An interesting study made a case that most people with BII have an inherent difficulty regulating vasovagal activity that predisposes them to excessive drops in blood pressure in various situations, not just in relation to blood or injections, and so this predisposition just becomes very obvious around blood.  This would also help explain why BII is strongly heritable.

But there is also likely a psychological component to many circumstances where a person with BII experiences dizziness or fainting.  Certainly there is a psychological component in how a physiological predisposition to blood pressure dysregulation becomes a phobia: some people become frightened of situations where they might faint, and can then become highly avoidant of such situations or feel extreme levels of distress or fear in them.  But sometimes an episode of fainting is triggered by a psychological response to a stimulus, not by a primitive, reflexive vasovagal response.  You can listen to a very interesting example of this from the Radiolab podcast episode entitled “The Heartbeat”.  The original story was broadcast to a live audience, which is significant because the immediacy of the story likely contributed to the audience’s reaction. In the story, a woman describes how after heart surgery her heart pumped much louder than it ever had before.  So loud, in fact, that it was apparently audible from a couple of feet away!  The producers found various ways of demonstrating this in the show, including by using very loud sound effects of heartbeats in the theater, so that the audience could viscerally understand how loud, distracting, and even upsetting this poor woman’s heartbeat became after her surgery.  The podcast hosts explain that after the show, they were shocked to learn that several members of the audience experienced extreme reactions to the performance, including fainting and vomiting.  Why would people experience such reactions?  When most people think of BII, they typically expect a person to faint in response to the sight of blood, not the sound of a heartbeat.  In reality, whether the stimulus is visual or auditory is less important than how the person feels about it.  
In this case, the audience members who were most affected may already have had BII and its accompanying hypersensitive nervous system, but on top of that they were likely affected psychologically by aspects of the performance that increased their sense of empathy for the woman.  Empathy, the ability to imagine how someone else feels and even to feel what they are feeling, is an invaluable tool that promotes the ability of humans to be social, helpful creatures, but in the case of BII it can actually be a problem.  The effective storytelling, the loud, inescapable sounds of a heartbeat pumping through the theater, and the message that the woman in the story found these sounds very distressing were all intended by the producers to help the audience feel empathy for the woman, to feel what she felt.  This is what good storytelling is all about, of course, but because the subject related to the heart and blood, some people in the audience with BII likely felt such strong empathy for the woman that their brains responded as though it were really happening to them.  Consequently, this triggered in some people an activation of the sympathetic system, followed in some cases by an excessive activation of the parasympathetic system.  Some people apparently threw up and other people passed out, not because they were necessarily afraid of what they were experiencing, but because they could so effectively empathize with it.  

The role of empathy in BII is what I personally find most fascinating about this disorder.  The brain is a complex series of feedback loops between different areas responsible for different functions.  Different parts of the brain “listen” to what other parts are saying, and respond accordingly.  In the case of BII, more primitive parts of the brain seem to be listening to higher-level emotional areas associated with empathy and responding to these feelings no differently than if the person were actually injured or seeing blood.  

Some people may truly experience fear at the sight of blood or needles, no different than if they saw a vicious dog or a cobra.  But for others, like myself, the problem relates more to our capacity for empathy.  Let me give another example from my own life: one day I was driving down the highway listening to a radio show on the topic of stress, and a story was being told by a man who had fallen out of a motorboat and had his leg severed by the boat motor.  He was describing in detail how he felt and what he was thinking about as this happened to him.  In my own mind, not consciously, I was imagining the scene and projecting myself into the place of the storyteller, and I noticed to my surprise and disappointment that I was starting to feel dizzy.  Recognizing what was happening to me, I turned off the radio, but it was too late: what probably happened next was that I continued to focus on my own feelings of dizziness, which likely increased the intensity of the parasympathetic response.  I pulled off to the side of the road and stopped the car, but by that point my blood pressure was so low that my brain was not able to function well (neurons need oxygen from blood to function), and so, with my head in my lap, I actually took my foot off the brake and thought I would take the car back onto the highway!  This would certainly have resulted in a terrible car accident that I might not have survived, were it not for the fact that I could not steer with my head in my lap while only semi-conscious, and I wound up crashing my car into some trees at the bottom of an embankment.  I wasn’t hurt, fortunately, but I tell this story now because it shows how serious a problem BII can be.  I was nearly killed.  I had never in my life experienced a similar episode of fainting while listening to something, and only a handful of times in my life had it ever happened at all.  
I learned to be more mindful about what I listen to while I’m driving, but there are more lessons to be learned from my experience.  I will also try to share with you what I have learned over the years that has helped me to cope with my problem.  

First, it is important not to minimize the dangerousness of BII.  Many health professionals know that some of their patients get woozy or faint when they receive an injection or get blood drawn, but they don’t often think to ask in how many other kinds of situations this occurs.  They should ask, because the patient may not recognize they have BII.  A person with BII needs to be aware of what kinds of situations may trigger an episode of dizziness or fainting, because, as in the story I just told, if a person faints at the wrong time, the results can be deadly.

Next, once a person knows they have BII, they need help learning how to prevent losing consciousness, which is the most severe and dangerous symptom of the disorder.  Here are some tips:

  1. Stay rested, fed, and hydrated.  I have learned from experience that I am more susceptible to drops in blood pressure if I am tired or dehydrated.
  2. Get your blood pressure back up!  This is key!  If possible, lie down on your back and raise your legs just above the level of your heart (or higher), which will use gravity to increase the blood pressure in your head.  I find this extremely effective.  If, however, you cannot lie down, then try to raise your blood pressure by tensing up all the muscles in your body.  Phlebotomists (the nice people who take your blood at the physician’s office) don’t like this advice because they want you to relax—if you are too tense then your veins constrict and it becomes harder for them to insert the needle.  But relaxing is the worst thing to do when your blood pressure drops.  So when getting blood drawn, lying down is your best option.  Otherwise, do your best to get your head below the level of your heart and tense all your muscles as hard as you can.  Another option might be, if practical, to exercise quickly, for example by running as fast as you can or doing jumping jacks to boost your heart rate.  But be careful: if you feel yourself fainting, then stop and lie down; you don’t want to pass out while running and hit the concrete at high speed!  Another potential preventive treatment, though I haven’t seen any research on this, could be to take medications that raise blood pressure prior to situations that are triggering.   
  3. Distract yourself.  Remember how empathy is related to BII: if you focus on yourself being injured (like watching a needle being inserted into your arm) or feel strong empathy for someone else’s injury, you are more likely to have an episode.  So try to think about something else, strike up a conversation with another person, and get your brain to stop focusing on the topic of blood or injury.
  4. Condition yourself.  When the popular TV show “ER” was on, I used to deliberately watch the parts with graphic images of injuries as a way of training my brain not to overreact at the sight of realistically portrayed gore.  This kind of treatment is technically called “graduated exposure.”  Now, that kind of conditioning won’t likely generalize to other situations: I learned to watch “ER” without worrying I might pass out, but I still have trouble getting blood drawn.  To address that, I would likely need to spend concerted time working with a phlebotomist, gradually exposing myself to needles and injections repeatedly while also practicing keeping my blood pressure up, and hopefully over time I would train my brain to not associate needles with fainting.  I haven’t yet committed the time to this project, but in theory it should work.   
  5. Watch your thoughts.  Whether you are empathizing with a person who is injured or you are terrified at the sight of blood, BII isn’t just about reflexes you have no control over; your thoughts also play a role and contribute to the symptoms.  You may not control your cranial nerves, but you can control your thoughts.  Certain thoughts can contribute to the problem, like “this is awful!” and certain thoughts can help prevent a problem, like “I can handle this.”

Over the years I have found myself less and less susceptible to fainting, probably as a result of exposing myself to provocative situations and gradually conditioning my brain not to overreact.  But there may be limits to what I, or anyone else, can achieve. Because the likely root cause of BII relates to the autonomic nervous system, there may be a limit to how much we can “tweak” through behavioral interventions.  Maybe in the future we will have more sophisticated treatment options.

Another reason to suspect that BII cannot be “cured” is that episodes of dizziness, fainting, vomiting, or severe distress related to blood and injury seem able to happen to anybody at any time.  While people with BII may experience a lifelong pattern of predictable episodes, other people may never have an episode until an unusual or surprising circumstance occurs.  For example, even though surgeons are generally comfortable seeing blood and injuries, there is anecdotal evidence that many have, on occasion, become physically ill or even fainted, and what seems to make the difference is whether what they experienced was surprising, dramatic, and unexpected.  In other words, being able to anticipate blood and injury may prevent an excessive vasovagal response in most people.  Why would this be?  Anticipation may give us the time we need to suppress or inhibit our reflexive autonomic response to a frightening situation, to calm ourselves down before we freak out.

So while there are many things you can do to help prevent fainting episodes, or syncope, to use the medical term, there may not be any way to guarantee 100% success.  But if you are like me and you live with BII, hopefully you’ll find something in this post useful in preventing future problems.  Remember, if you want to get really serious about exploring treatment options, including more advanced techniques like graduated exposure, or if you’re having trouble getting a handle on panicky reactions to triggering situations, you may want to consult a psychologist to increase your chances for success.  Though research in this area is limited, the best treatment currently seems to be cognitive-behavioral therapy with graduated exposure.

If I Can Make It, You Can, Too: The Logical Fallacy We So Want To Believe

The “American Dream” is commonly understood to mean the ability of individuals in this country to achieve anything, regardless of where they start, if they simply put in the effort.  I’m not sure this is, in reality, every American’s dream, but be that as it may, the “American Dream” is considered by many to be inspirational.  The idea of overcoming obstacles and achieving great things is so psychologically appealing that it is embraced by all political parties. Both Barack Obama and Bill O’Reilly frame their life stories in this narrative.  Not only politicians but successful business people, sports figures, film stars, and other professionals commonly take public pride in their ability to start with little and end up with a lot.  And it is common, particularly among politicians and business people, to use this as a selling point: if I can start from humble origins, overcoming enormous odds, then you can, too, and I want to help you do it.

Perhaps it is the recent story of the sad, short life of Freddie Gray from Baltimore that has me thinking more about this idea, this logical fallacy.  What is a logical fallacy?  In short, it is an argument that appears to be true but is not, because the underlying reasoning is flawed.  Many have argued that the American Dream is not a possibility for millions of people, and I will not repeat those arguments here.  What I am very specifically trying to highlight is the logical fallacy behind the assertion, “If I can make it, you can, too.”  For while this assertion can be inspirational, it can also be judgmental, a phrase that can be used to condemn people for not being successful or not overcoming their personal struggles because of some personal failing; maybe they are lazy, unmotivated, or underachieving.  Further, it may be argued that people who do not put forth the effort to be successful should not be helped or assisted in life.  This is where an argument meant to be inspirational becomes distorted and dangerous.  It can be used to argue that there should be no social safety net, no welfare program, no housing assistance, no free addiction treatment… no handouts to those who don’t try hard enough.  And how can one know if somebody is indeed trying “hard enough”?

Here is the fallacy behind the argument: if overcoming enormous obstacles to become successful in life is a genuine achievement, then it cannot be assumed that everyone else can do the same.  If everyone could overcome all obstacles, then they wouldn’t be obstacles, would they?  Nobody considers the density of our atmosphere to be an obstacle to walking, because air is so flimsy to human beings that it presents no meaningful challenge. An obstacle is only an obstacle if it is a challenge that may possibly not be overcome.  Overcoming the obstacles of life is only an impressive feat if not everyone can do it.  Therefore, the assertion, “If I can make it, you can, too” is undermined by its own logic. A more accurate statement would be, “If I can make it, maybe you can, too.  But maybe not.”  No, it’s not as catchy.

I didn’t know Freddie Gray, but I know some of the obstacles he faced in life.  One obstacle was that he was poor.  Poverty sounds simple enough: not having enough money.  But with poverty comes such a range of obstacles that it can be hard to wrap your head around.  For example, because Freddie was poor, his family couldn’t afford to live in good housing.  He grew up in an apartment so old and neglected that the lead-based paint had never been removed.  As a result, Freddie as a child had toxic levels of lead in his body.  The consequences of lead toxicity are well known: it leads to cognitive and emotional impairments that are irreversible. Those impairments then create additional obstacles, a vicious cycle from which it is extremely difficult to escape.

As a psychologist, I work with people dealing with big obstacles.  Those obstacles may be medical, psychological, or environmental.  Although psychologists are trained to believe that the ability to overcome obstacles largely comes from within, I also know that oftentimes the obstacles we face are simply too difficult to overcome without substantial help from others.  It is important that we never lose sight of that fact.  As tempting as it is to believe we can do anything we set our minds to, the reality is that this is often not possible.  And needing the help of others to get through life is not a weakness or a failure, it is through humans relying on other humans that we are drawn closer to each other and come to care for each other, which increases our own sense of wellbeing and psychological health.  

In Praise of Manipulation

When we hear the word “manipulative,” it is usually a pejorative, used in a context like “that person is manipulative, I don’t want to help them” or “stop being so manipulative!”  What people misunderstand is that the real problem is usually not that a person is manipulative, but rather that the person in question is terrible at being manipulative.

This seems counterintuitive, so let me explain.  First, let’s define our term.  The dictionary will tell you that to manipulate people is to influence them to do what you want them to do with indifference to how it affects them.  That doesn’t sound good, but the reality is that when most people are being “manipulative,” they are often not motivated by indifference but rather are trying to get their needs met and are unskilled in doing so in a socially appropriate way.  

Now, stop reading this for a second and think about what your top five personality qualities are, the things about who you are that you are most proud of. Done?  Is “good at manipulation” one of those top qualities?  I’m guessing the answer is no.  We are taught that to be manipulative is bad, and indeed the strict definition is not positive.  What qualities did you come up with?  You are…kind? trustworthy? generous? forgiving?  I would argue that to be any of these things is to be skilled at a kind of socially appropriate manipulation.  When I meet a client in my office for the first time, we usually exchange pleasantries: we smile at each other, say “hello,” maybe shake hands, and these behaviors are all small ways of attempting to influence each other.  From my end, these are ways of trying to show my client that I am friendly and respectful, and that I want them to like me. That’s good for business!  From the client’s end, they want to appear friendly and respectful so that I will want to help them with whatever their problem is.  We are each trying to influence and “manipulate” each other to meet our respective needs, but we don’t think of it in those terms; instead we just think, “I’m being friendly.”  And the other person doesn’t feel influenced or manipulated because, as socially skilled people, we both know how to influence other people with enough subtlety and respect for the other person’s feelings that they don’t feel used or abused.

When people are not good at being manipulative, their attempts to meet their needs often come across as selfish, inconsiderate, even hurtful.  This is most often true for children, teenagers, immature adults, and adults who have chronic mental health problems.  Take for example a teenager who ignores everyone in the family but isn’t shy about asking for spending money for the weekend.  This causes the parents to feel manipulated by the teenager, and their impulse is to say “no” to the demand.  The teenager’s need is not the problem, wanting spending money is perfectly appropriate; it is the way in which the teenager tries to get that need met that is inappropriate.  We want to be shown respect and gratitude before being asked to sacrifice something, like money, because it makes us feel like our sacrifice is recognized and appreciated.  If the teenager in question would interact with the rest of the family, that shows respect and caring, which makes it okay to then ask for money.  The basic rule of healthy socialization is: “I’ll scratch your back if you scratch mine.”  When people want their own backs scratched but don’t want to scratch ours, it feels manipulative and we don’t want to help anymore.  Consequently, the other person doesn’t get a back scratch, or the teenager doesn’t get the spending money, or whatever, and the person’s needs go unmet.  This is why I say “manipulative” is really a misnomer: unskilled manipulation often leaves the person unable to get his or her needs met, so the attempt to manipulate is totally ineffective, and, thus, not very manipulative.  What do we call effective manipulation?  We call it: good social skills!

So, in the future, when you are dealing with a person who is acting in a way that feels “manipulative” and that makes you not want to help them, try to have a little compassion for that person.  Most likely, whatever the person wants is not the problem; it is the way they are going about trying to get it.  He or she is trying to influence, to manipulate, but failing miserably.  If appropriate, you may want to point this out to them, explain why you are saying “no,” and, most importantly, offer an example of what they could do so that you would say “yes.”  This is a way of teaching good social skills, or, in other words, how to be an effective manipulator.
