Jan 15

In the Company of Ideas: Four years of AHOAK


The following essay was started about three years ago and finally finished only recently. As you might notice, it is permeated with a certain feeling of anger regarding my chosen profession of applied psychology, a feeling originally sparked over the course of my university training (which is when I first sat down to write this essay). The reason for this feeling, which has mainly to do with what I perceive as a rampant lack of skepticism within the field of psychology, is acknowledged and elaborated upon in the body of the text. Given that I have had the opportunity to work, since graduating and obtaining my license to practice psychology, with colleagues who treat ideas in our field not as hard-and-fast truths but as objects open to discussion (which may or may not invalidate them), that feeling of anger has, in time, largely subsided. Although I finished writing this essay in a different emotional state than the one in which I started it, I have chosen to leave intact whatever angry feelings seeped through my words when I first typed them—for the most part, to avoid invalidating my past self, but also to avoid invalidating my future self, which I am sure is not immune to a resurgence of frustration regarding his well-loved (another, complementary feeling I hope comes through in my essay) profession.


A Heck of a Kerfuffle.com recently celebrated its fourth birthday. On January 14, 2015, to be precise. During our first four years of “operation,” more than 35 short- to full-length articles exploring a variety of subjects were published. Further, we played host to more than 2,000 views per year from online travelers. In honor of this milestone, I thought I would share a little bit about what I learned on my first literary venture into cyberspace.

I would have loved to tell you that this blog was born out of sugar cups and rainbows. Alas, its origins are a tad more somber. The idea to develop my own blog, you see, was actually the product of frustration. Two years into my doctoral program in clinical psychology, I was exasperated. I had grudgingly realized that the mandatory courses I was made to take offered, at best, a forum for only a limited range of ideas and narrow discussion. Further, fellow students appeared all too content to feed on the particular brand of knowledge professors chose to dish up. The professors themselves seemed to have done the same during their own education, since none of them could explain to me how they had come to their conclusions, as if conclusions did not flow from underlying premises but instead spontaneously materialized out of la vérité itself.

Intent on compensating by fashioning my own education, I started to read. A lot. Books on the history of psychology and psychological theories, to find out how we got to where we are and whether we happened to miss or leave behind important insights into human behavior. Books on alternate kinds of psychotherapy, because the training offered to me was either too esoteric (i.e., psychodynamic therapy) or too rigid and presumptuous (i.e., traditional cognitive-behavioral therapy). And books putting into question the very state of clinical psychology, to highlight the limits of knowledge we have come to consider absolute. Before long, my brain predictably began to writhe with ideas. But what to do with all of them? Instead of keeping my thoughts all to myself, I decided I would start writing them down, maybe even organize them into cohesive essays and share them with others. And that is how A Heck of a Kerfuffle was conceived. (Because I am not only passionate about psychology, I decided to broaden the scope of my new blog to cover other areas of interest, such as gastronomy and cinema. For the purposes of this essay, however, I will focus only on the process of writing about ideas in psychology.)

Before I go into the essays themselves and what writing them taught me about myself, I would like to recount to you how they came to find a permanent home on the Internet in the first place. Those of you who know me know that I am not particularly proficient with computers, and so learning to build my very own website represented a daunting task. Upon consulting others, I was instructed to: find a web host, purchase a domain name, download web software to create my website, and design it to my liking. While all this makes sense to me now, getting there took some work. Going through the process, I have come to understand it in the following, less technical terms: purchase yourself a plot of Internet land, name your new domain, have builders erect your house, and furnish it to your liking. And voilà! You have got yourself a brand new home in Cyber City for everyone around the globe to visit. While I’m at it, I would like to thank Web Hosting Hub for helping pop my digi-cherry, and for providing consistently stellar service!

As you may have gathered from reading my first essays on my experiences as a clinical psychologist in training, I am skeptically inclined. Because I have elected to become a “mental health specialist,” this questioning attitude primarily manifests itself around the subjects of human behavior (or how to best understand it) and the modification of human behavior (or how to best go about it). In fact, some of my favorite insights from the past five years of clinical training and practice as a psychologist concern these particular issues. While I would describe my relationship with my chosen profession as marked with caution, even downright suspicion, do not get me wrong: I love what I do. I just refuse to let that affection dull my critical faculties and intellectually blind me.

I should specify at this point that I do not advocate fanatical skepticism, or disbelief for the sake of disbelief. On this point, I am in agreement with mathematician Henri Poincaré (1901), who noted: “To doubt everything or to believe everything are two equally convenient solutions; both dispense with the necessity of reflection” (p. xxii). That being said, I believe skepticism encourages reflection more than other attitudes do: thinking that something may well not be true incites one to consider the evidence for why it might actually be true, more so, at least, than thinking that that something is very probably true in the first place. This, of course, presupposes that the skeptic is interested, to begin with, in knowing the truth, regardless of whether or not it accords with his own expectations. To reiterate, from an experiential standpoint, the sense that “This might not be true but I want to know if it is” provides, in its underlying tension, an impetus toward reflection that the “tension-less” sense that “This is probably true but I want to know if it is” does not. To be sure, some of us may adopt a skeptical stance because we do not want something to be true, in which case we might not be motivated to consider the evidence against it. This, however, is an example of skepticism misused, in that its purpose is to service our own biases, not the search for Truth. Thus, skepticism, to be epistemologically fruitful, must always be coupled with a desire to expose reality.

Writing in 1925, the father of psychoanalysis, Sigmund Freud, noted: “When instructing our own disciples in the theory of psychoanalysis, we always observe how little impression we make on them in the beginning. They accept the analytical teachings with just as much equanimity as any other abstractions which have been fed to them. Some of them may have the earnest desire to be convinced, but there is no trace that they ever really are convinced” (p. 64). It appears times have changed. In contrast to Freud’s observations, I noticed, over the course of my training as a therapist, that many of my classmates wholeheartedly accepted all that modern clinical psychology is and has to offer: clinical psychology, they implicitly insisted, is an infallible science, so why think twice about it? It is, specifically, just like medicine, and so should command the same sort of deference. Psychologist Gary Greenberg observed a similar trend, only from a professor’s point of view, and identified a “top-down” process of misguided attitude transmission: “[Most] students seem oblivious to the crucial epistemological problems that haunt their discipline. Their education continues to consist of largely technical training based on the assumption that they are ‘doing science’” (1997, p. 257).

I, for one, refuse to take part in such unrestrained naïveté. I understand we want to feel like what we do is important—and it can sometimes be—but I cannot merrily ride along the medical bandwagon. However much we may couch clinical psychology in the language of science, clinical psychology is just not medicine. Namely, what we refer to as “mental illness” has very little to do with actual, physical illness. Likewise, what we refer to as “psychological treatment” has very little to do with actual, medical intervention. Indeed, in both instances, any similarities can best be described as specious. As such, the usage of medical terms like “illness” or “treatment” to describe the problematic behaviors we seek to change in our clients, and the conversations we use to help our clients change, is simply misleading. As critics have charged for decades, our brand of science is decidedly “soft,” whereas medicine’s is anything but.

Psychologist Jeffrey Kottler (2010) tells of “writers who believe that therapy, as a profession, could quite legitimately be housed in an academy of dramatic arts instead of a school of education, health, social work, medicine, or liberal arts. In this setting, therapists would speak of their craft as professional conversation, strategic rhetoric, or even a genre of interactional theater” (p. 297). I am inclined to agree: psychological treatment is actually reflective, didactic conversation about mainly moral issues, and thus very unlike actual, medical treatment (to find out more on the difference between mental and physical cures, see Szasz, 1987). On the subject of psychotherapy, the philosopher Jean-Paul Sartre even went so far as to claim that “[there] is philosophy, but there is no psychology. Psychology does not exist; either it is idle talk or it is an effort to establish what man is, starting from philosophical notions” (cited in Rybalka, 2002, p. 245).

I understand it is hard to enter a helping relationship when so much of what we do is so, for lack of a better word, fuzzy. But I believe it is imperative that we acknowledge this in order to do good work; otherwise, we are just being pretentious, when humility—my own values tell me—should drive us as responsible and effective vocational helpers. Renowned therapists themselves identify lack of humility as not only inimical to successful therapy, but also responsible for their worst therapeutic failures (Kottler & Carlson, 2002). Indeed, thinking of ourselves as purveyors of steadfast truths about life and how to best live it can have catastrophic consequences: for instance, when we deny, ignore, or invalidate our clients’ own truths, we risk threatening the integrity of their very selves (Rowe, 1994). Thus, because so much of what we do is so (let us settle on) nebulous, I believe it is imperative that we continually, actively reflect on our profession’s professed tenets, never blindly following them, while always keeping in mind the limitations of those tenets we have settled on and chosen to abide by.

In the introduction to his no-holds-barred book on the evils of psychotherapy, aptly titled Against Therapy, ex-psychoanalyst Jeffrey Masson (1994) specifies: “The fact that some psychotherapists are decent, warm, compassionate human beings, who sometimes help the people who come to them, does not shelter the profession itself or the practice of that profession from the criticism I make in this book. It only means that they function in this manner in spite of being psychotherapists, and not because of it” (p. 41). While I do not share many of Masson’s conclusions regarding the ethics of psychotherapy, I do agree with him on one point: that those psychologists best suited to help clients are probably those who do not take their profession—in its current state, at least—all that seriously. Some authors (e.g., Engelhardt, 2004) have even gone so far as to suggest that psychologists should, for their clients’ sake and benefit, pretend as if what they know to be true about human behavior (and, more precisely, misbehavior) is not true at all!

While I believe that clinical psychology ultimately amounts not to a science (although particular helping strategies can certainly be studied scientifically) but to a philosophically flavored art, many of my colleagues would respectfully disagree. However, even if we were to agree to call what we do science, the philosophy underlying what we do could still not be denied. As philosopher Daniel Dennett (1995) reminds (actual) men and women of science, “[scientists] sometimes deceive themselves into thinking that philosophical ideas are only, at best, decorations or parasitic commentaries on the hard, objective triumphs of science, and that they themselves are immune to the confusions that philosophers devote their lives to dissolving. But there is no such thing as philosophy-free science; there is only science whose philosophical baggage is taken on board without examination” (p. 21; emphasis added). And so, regardless of whether we believe we are “mental health scientists” or simply life coaches, the philosophy of psychology cannot be denied. In fact, it is essential that it be examined.

But why bother thinking about the philosophy underlying clinical psychology (or any other human endeavor, for that matter)? Since I could not possibly say it better myself, however hard I tried, I call upon philosopher Bertrand Russell to tell us why:

Because “[the] man who has no tincture of philosophy goes through life imprisoned in the prejudices derived from common sense, from the habitual beliefs of his age or his nation, and from convictions which have grown up in his mind without the cooperation or consent of his deliberate reason. To such a man the world tends to become definite, finite, obvious; common objects rouse no questions, and unfamiliar possibilities are contemptuously rejected. As soon as we begin to philosophize, on the contrary, we find […] that even the most everyday things lead to problems to which only very incomplete answers can be given. Philosophy, though unable to tell us with certainty what is the true answer to the doubts which it raises, is able to suggest many possibilities which enlarge our thoughts and free them from the tyranny of custom. Thus, while diminishing our feeling of certainty as to what things are, it greatly increases our knowledge as to what they may be; it removes the somewhat arrogant dogmatism of those who have never travelled into the region of liberating doubt, and it keeps alive our sense of wonder by showing familiar things in an unfamiliar aspect.” (1912, p. 91)

Psychotherapy is one such “common object,” which, once prodded with the rod of philosophy, quickly reveals itself (for many, at least) to be something somewhat different from what it is normally made out to be. I will later discuss how the common object of “mental illness” meets a similar fate as psychotherapy when handled using philosophy.

The psychoanalytically minded among you will no doubt attempt to discount my cantankerous attitude toward clinical psychology by postulating potential biographical causes that may have brought it about in the first place. Perhaps, having time and again been hurt and disappointed by objects I thought I could count on, I can no longer trust those that presently inspire affection in me. Even if that interpretation were true, the “etiology” of a belief system, as psychologist William James (1901) takes great care to explain in his seminal qualitative study of the religious experience, has absolutely no bearing on its accuracy. After all, what belief claims no psycho-historical causes or reasons whatsoever as precursors (beyond a probable epistemic desire “to know,” that is)? For instance, some historians have, according to astronomer Carl Sagan, said of Isaac Newton that he “rejected the philosophical position of Descartes because it might challenge conventional religion and lead to social chaos and atheism” (1996, p. 258). But how “Newton was buffeted by intellectual currents of his time […] has little bearing on the truth of his propositions.” Sagan goes on to describe a similar attempt to discredit Charles Darwin, one that commits the added mistake of confusing cause and effect. Having said all of this, there is some validity in the hypothesis that my suspicions are at least partly grounded in my past. As a child, I was asked to assume the truth behind countless religious postulates regarding how the world works and how to best direct human behavior; when I started to think, I noticed none of them—to my satisfaction, at least—did the explanatory or regulatory job they were specifically fashioned to do. This left a sour taste in my mouth: I felt misled by authority figures I trusted.

It is then that I learned the power of independent, critical thinking in shaping my own understanding of the world. (I appreciate the value of suspending thought and just having faith, but I cannot endorse faith as a standalone life philosophy. Since any belief can be accepted as true based on faith alone, all faith-based beliefs are consequently equally true. In other words, faith alone cannot make the probably true rise above the probably not true. You need reason for that.) Historically, no knowledge (except when resting on faith) has ever deserved the qualifier of “ultimate.” After all, knowledge builds upon itself away from inadequate accounts, arguably ad infinitum (an observation made by the philosopher of science Thomas Kuhn in his seminal book on the titular structure of scientific revolutions). In fact, during my very first year of university, my History of Psychology professor likened the idea of “facts” to a modern fiction, insofar as facts are final and should not technically evolve. And so, I do not see the value in assuming that what I hold to be true now will necessarily be true 50 or 100 years from now.

Some of you may be thinking at this point: “Why believe in anything if it is just a stopover on our way to Truth?” First of all, what we currently hold to be true may not be a stopover at all, but the final destination itself. However, only time will be able to tell us this (not in a decisive sense, to be sure, but in a probabilistic sense, i.e., in the progressive accumulation of evidential weight). In fact, all knowledge, be it in Science or Philosophy, is tentative to some degree, being only one body of counter-evidence (or counter-arguments) away from modification or downright withdrawal. Second, there are presumably units of knowledge that stand closer to the final destination than others, or at least stand a better chance of leading us toward it as opposed to some epistemological dead-end. Thus, I am not advocating that we not hold on to present-day knowledge. After all, without intellectual markers to situate us, we would flounder about the world directionless. What I mean to say is: we should simply hold on to this knowledge with a loose grip (a move which can also make it easier to reach for the next epistemic monkey bar). (For simplicity’s sake, I will continue to use the terms “fact” and “truth” [and all variants thereof], but in the tentative, “as can presently best be understood” sense of the words. With one exception: when “truth” is capitalized, it is meant to evoke ultimate Truth, that which scientific revolutions rotate toward.)

Chapter I – In the Company of Ideas

In writing about ideas, I have continued to learn more about my receptivity toward them. As briefly mentioned above, however, my reluctance to accept everything I am told as automatically true is a part of me I have been aware of for some time now. What has been more of a revelation is how I as an individual engage with immaterial (but still very real) entities like ideas. I have to admit: relationships with ideas can be tricky. You may fall immediately in love with one, or it may grow on you in time. Once you have selected a suitable idea for possible “appropriation,” you give yourself to it completely: it is yours, you are its. Eventually, you may reluctantly come to think you can somehow make it better, only to find out it stubbornly resists change. Frustrated, you may begin to seek out, or simply let yourself be seduced by, a more attractive idea. However heart-breaking the thought, you may decide to ditch your previous idea, because—as you repeatedly tell yourself as if to assuage some kind of doubt—“This new one is it.” Come to think of it, idea selection bears a striking resemblance to mate selection!

For this reason, I have learned to be more cautious when considering adopting ideas. That is not to say I have not also had to learn how to be more tolerant of uncertainty when ideas are being considered (or dated, if you will). Greenberg (2010) counsels: “[When] it comes to important and complex questions, the best approach is to leave yourself in doubt for as long as possible, to live with inner conflict rather than to end it, to withstand yourself rather than to become someone different, to understand you arrived at an important juncture rather than strike out down a road simply for the sake of getting on with life” (p. 7). Adopting a hopeful view of this emotionally arduous fact-finding process, physicist Lawrence Krauss even predicts that “[lack] of comfort means we are on the threshold of new insights” (2012, p. xv). And so, I never abandon an ostensibly sound idea that has managed to slip past my skeptical defenses without giving my relationship with it every chance it deserves, however trying what lies ahead. I remain attuned to counter-arguments opposed to my idea. I evaluate their cogency. I revise my own arguments in support of my idea to address the counter-arguments. If an idea repeatedly fails to hold its own when challenged or keeps resisting improvement, I (I must admit, halfheartedly) abandon it, perhaps even for the very idea that brought about its downfall.

An example: my earliest memory of participating in a college class—Introduction to Sociology—sees me arguing against the idea of ethnocentrism by appealing to a universal moral standard. However much I enjoyed the thought of absolute moral judgments, I eventually came to question their existence. (While it may sound contradictory, I do not, in any way, advocate normative moral relativism, whereby any behavior should be tolerated simply because there exist no objective behavioral standards in Nature.)

While getting acquainted with ideas and learning to live with them, I have also learned a great many things about them. First off, just because an idea sounds counterintuitive does not mean that it is somehow going against Truth. On the subject of moral behavior, Russell once commented: “[Conscience] is a most fallacious guide, since it consists of vague reminiscences of precepts heard in early youth, so that it is never wiser than its possessor’s nurse or mother” (1901, p. 74). Likewise, intuition is no more informative a guide to belief. Because intuition rests on knowledge of what Truth should be, as perhaps outlined by prior education, it does not necessarily orient us toward that which is definitely True. Experience is, in turn, no more reliable a guide. Speaking of clinical experience, self-avowed Freud basher Frederick Crews (1995) concludes: “Standing alone, [it] is not a probative tool but an inducement to complacency and tunnel vision” (p. 7). Moreover, I have learned that while I will not assume the truth of the status quo, I will always remain open to it being true (or at the very least a right step toward Truth).

However demanding thinking about ideas may be, writing about them poses a different set of challenges. Whereas there is no end to thinking (you can keep doing it for as long as you want or possibly can), there is one to writing (provided you want to share your writings at some point). Since I rarely write about ideas I am completely done thinking about, this finality to writing can be problematic. Further, the very attempt to translate shapeless ideas into definite symbols may change one’s understanding of them. Screenwriter Charlie Kaufman, in a 2008 interview for the Writers Guild of America, explains: “Part of the thing that happens when you’re writing, especially when you’re writing one piece over an extended period of time, is that you have an evolving understanding of the world and an evolving understanding of the piece. And so, if you’re trying to be truthful, you start out with one idea, and as you become more familiar with it, or explore different aspects of the idea, different things become revealed to you, and you have to incorporate that. That becomes a bit of a hindrance when you’re writing, but I guess that’s the way I like to write.” And that is the way I like it too.

Chapter II – In the Company of Ideas that are (Probably) True

I have thus far discussed Truth with no mention of its nature, and so will say a little bit about it now. Jeff Winger, the self-assured protagonist from the television series Community, puts it this way: “The biggest truths aren’t original. The truth is ketchup. It’s Jim Belushi. Its job isn’t to blow our minds. It’s to be within reach” (2010, E14/S1). I agree with Winger’s first statement: many observations are self-evident (i.e., very unlikely ever to be discounted by any new evidence), truisms that barely need stating. That being said, I disagree with his subsequent statement: truths are not always easily discernible. As Algernon replies to Jack in The Importance of Being Earnest, “[the] truth is rarely pure and never simple” (Wilde, 1899/1990, p. 6). Presuming that Truth exists independently of human perception, it may not always be easily apprehensible by the mind. Because Truth does not exist for us, its job cannot possibly be to exist in such a way as to always be ascertainable, as if tailored for our intellect. In fact, Truth holds no purpose; it just is, and what it is cannot, unfortunately, always be within reach. Discerning Truth, instead, oftentimes requires painstaking effort. It is we who must adapt ourselves to it, not vice versa. As Krauss remarks, it is sometimes necessary that “we expand our horizons because nature is more imaginative than we are” (2012, p. 77). Miss Giddens, in The Innocents, is even warned by her employer that the “truth is seldom understood by any but imaginative persons” (Clayton, 1961).

Presuming there are things to know about our universe, what are the best ways to discern them? I have talked about how I prefer not to rely on potential indicators like authority, intuition or experience. Other classic aids are Science, Reason, and Faith. (I make a distinction between Science and Reason for reasons that will become clear shortly.)

As many of you already know, Science only concerns itself with that which can be falsified (i.e., determined to be untrue via observation or experimental testing). For example, Science can readily assess the statement “tortoises are faster than hares,” because it can easily be tested against observation. In this way, the investigative scope of Science is fairly limited. Although Reason is inherently part of the scientific process, Reason can also be used on its own to assess statements that cannot be falsified. For example, although the statement “Life is actually a dream” cannot possibly be proven to be untrue, Reason can show it to be very improbable (for a compelling argument, see Russell, 1912). (The latter statement is presumably either True or False; it is just that we cannot “know” the answer scientifically, but merely approximate it logically.) In fact, Reason is the primary tool of Philosophy, and so is used to answer questions stemming from every one of its branches, questions that Reason via Science oftentimes cannot touch (or touch as persuasively) because of its strict falsifiability requirement (see Klemke, Kline, & Hollinger, 1994, for more on the difference between questions “fit for Science” and questions “fit for Philosophy”). For example, while Science can effectively judge secondary religious beliefs (e.g., “The Earth is 6,000 years old”), only Philosophy can unrestrictedly tackle the primary belief in a deity itself (Pigliucci, 2009). In this way, the investigative scope of Reason is fairly large, if not unlimited. Like Reason, Faith can technically assess any variety of statements, falsifiable or not. For example, one could take it on Faith that tortoises are faster (or slower) than hares, or that life is actually (or not actually) a dream. In this way, the investigative scope of Faith is as large and potentially unlimited as that of Reason.
(One might even say that Faith is superior to Reason when trying to find out what is true: whereas Reason can only show an un-falsifiable statement to be probably true or false, Faith can claim it to be conclusively True or False. Unfortunately, as we shall see, Faith can also show it to be both True and False, which most of us do not consider a helpful conclusion, or a conclusion at all…)

While both Reason and Faith boast equally impressive scopes of enquiry, they are by no means equal aids when it comes to actually making out Truth. Reason (whether applied within the realm of Science or Philosophy) remains most helpful because it can show certain statements to be very probably false or truer than others. In other words, it may organize statements along a continuum of Truth. Faith, on the other hand, can accept anything as definitively True. For example, one person could take it on faith that “God created the universe,” and another could take it on faith that “A giant, impossibly pink and fluffy bunny-rabbit created the universe.” From a Faith-based perspective, both people would be right, which is unlikely given that both aforementioned beliefs cannot be true at the same time (unless, I suppose, God is an enormous, colorfully furred lagomorph). Statistically speaking, then, Faith is too liberal; in other words, it is associated with too great a risk of false positives. To be fair, some claim that Faith’s purpose is not to know Truth. Nevertheless, those who adopt beliefs based on Faith assume it has oriented them toward an accurate belief. Funnily enough, even people who insist Faith is sufficient when selecting beliefs (e.g., “God created the universe”) will admit that some faith-based beliefs (e.g., “A giant, impossibly pink and fluffy bunny-rabbit created the universe”) just do not make sense. Thus, while they ultimately do value Reason, they just do not think Reason is necessary to support their own beliefs. That is why the smarter among those who initially adopt a belief based on Faith alone ultimately succumb and resort to Reason to validate it. Take, for example, the numerous logical arguments (e.g., Rachels, 2002) for the existence of God, a being whose existence is usually taken on Faith alone.
The battle over whether these arguments are cogent is here waged in the realm of Reason, because Faith is, as we have seen, always insufficient, and Science is, in this particular case, out of its element (“God exists” is a non-falsifiable statement). (Note, however, that if one appends “and can interact with the physical world” to “God exists,” the latter statement suddenly becomes falsifiable, and, thus, amenable to scientific enquiry; see Stenger, 2009.)

This is why common religious arguments against the trustworthiness of Science—postulating, say, that “Science can be wrong”—embody not a criticism at all, but merely a restatement of its strength, of the reason why it can be so useful. Because Science can reject hypotheses as being inadequate, but never accept any of them as definitively true, Science naturally promotes progress and movement toward Truth. Reason outside of Science, as in Philosophy, can also evolve by way of argumentation. Faith, on the other hand, is inert, deprived of any inbuilt mechanism allowing it to advance away from Error toward Knowledge. To illustrate my point, compare the number of times Science has revised its understanding of nature in the last few centuries to the number of times Religion has revised its understanding of nature in the last two millennia. Close to 2,000 years after the birth of their religion, Christians have only recently begun to seriously consider the possibility that hell, a major element of their belief system, does not exist (Bell, 2011). Science, on the other hand, has not only questioned but also revised its conceptualization of light, one of its own conceptual obsessions, at least three times in the last 350 years: first came particle theory, then wave theory, followed by wave-particle duality (Hawking & Mlodinow, 2010). (Although one can certainly substitute, without resorting to Reason, one faith-based belief with another faith-based belief, it remains impossible to tell, based on Faith alone, whether the new belief is any truer or “falser” than the old one. In other words, trying to understand the world using Faith alone is akin to running a marathon on a treadmill. Whatever you do, you are never behind, never ahead; or at the very least, there is no possible way to know.)

As you may have guessed, I favor Reason (in the form of Science or Philosophy) when attempting to understand the world around me. I will now discuss some of the products of my reasoning over the last few years, in regards, specifically, to psychology.

Chapter III – In the Company of Ideas in Psychology (Part I)

We saw earlier that “etiology” can never determine the accuracy of a belief, because every belief has causes. I argue that in some cases, neither do its consequences. (Note that James, an ardent pragmatist, would have disagreed with this.) When determining whether a belief is true or not, I believe the effects of maintaining that belief have absolutely no bearing on its truth-value, since some truths presumably exist independently of the effects of believing in them. For example, it is not unreasonable to assume that dogs probably exist regardless of whether believing in dogs is helpful or harmful to humans. Now, that may sound silly, but many people express beliefs that, when translated using canines, sound a little bit like this: dogs must exist because dogs make humans less lonely. For example, some defend the existence of God by claiming that without belief in Him, society as we know it would crumble into chaos. But the effect of not believing in God has absolutely no bearing on whether He actually exists. Such people are confounding two debates: the existence of God, and the effects of believing in someone like Him. In short, there is a difference between the veracity and the utility of an idea, two characteristics that are often confused when attempting to demonstrate the former.

Many psychologists commit such a logical mistake when defending their own ideological beliefs (namely, their preferred therapeutic approach). To be precise, they commonly interpret the proven efficacy of a given therapeutic technique as indicating the truth of its underlying premises and postulated entities. If their brand of therapy happens to have more positive outcomes than other brands of therapy, then that must mean their approach is based on fact, and that they are justified in using it when helping clients. In doing so, many therapists “draw hasty conclusions between symptom abatement and interpretation” (Crews, 1995, p. 117). As we have seen, that habit is misguided.

Other psychologists commit a similar mistake when interpreting the finding that all therapeutic approaches are actually equally effective in relieving life difficulties (dubbed the Dodo Bird Verdict), and that therapeutic approach actually plays only a small role in achieving this outcome, as indicating that all approaches are equally (or unequally) valid. In other words, psychotherapy is really a free-for-all: simply pick the one you happen to like best or borrow from here and there. To be sure, the Dodo Bird Verdict has been the subject of much debate, but let us assume for a moment that it is in fact true, that all forms of therapy are equally effective in relieving life difficulties. Does that mean that we, as professional helpers, are warranted in using or sampling from any one of them to help our clients? Provided one values Truth over Deception, not at all. The Dodo Bird Verdict only extends to therapeutic outcome (or value), not therapeutic veracity. (I regret to inform those of you who advocate therapeutic eclecticism so as to avoid thinking about the philosophy underlying each and all therapeutic approaches, that the best eclectic therapists will only borrow from approaches that share the same core philosophical assumptions but that suggest different strategies based on these [Neimeyer, 1995].)

As I alluded to earlier, an idea should ideally be evaluated via itself, and not via ourselves; otherwise, we are simply evaluating the effect of believing in this idea, as opposed to the idea itself. That is what the Dodo Bird Verdict amounts to: a conclusion as to the effect of an idea, as embodied by a particular approach, not a conclusion as to the legitimacy of this idea, or the approach itself. Even Wampold and his colleagues (1997), who assessed and confirmed the Dodo Bird Verdict, take care to mention in the title to their article that “all must have prizes” only “empirically.” (By empirically, I interpret the authors as meaning, “as far as observable effects are concerned.”) That is, from a wider truth-seeking perspective, all do not necessarily get to take those prizes home. Thus, the actual soundness of a given therapeutic technique cannot be determined via its efficacy, but via the accuracy of its premises and the entities that those premises engage.

Now, whether Efficacy should trump Truth in therapeutic settings is for you to decide. To be sure, therapy is meant to be helpful, but it is also expected to be truthful (i.e., based on the best knowledge available). To help me demonstrate, consider the following: if a given lie-spewing cult helps make people happy, does that make it a reasonable way of helping people? If you have answered in the negative, then you have no business conducting whichever brand of therapy you favor based solely on the fact that it has been proven to or may possibly be effective. You are only justified in conducting it if you have critically assessed the theory behind your approach to see if it stands up to Reason. For example, if you are a psychodynamic therapist, you must believe in and be able to defend the statement: “nothing in reality is ever what it seems.” (You would think that, as a skeptic, I would admire psychodynamic theory; unfortunately, such theory advocates a fanatical sort of skepticism, where no clear paths toward reliable insights are laid out except those paved by “experts.”) Further, one should be able to explain why, despite the fact that no “distinctly psychoanalytic notion has received independent experimental or epidemiological support—not repression, not the Oedipal or castration complex, not the theory of compromise formation, nor any other concept or hypothesis” (Crews, 1995, p. 298), one is still justified in speaking of these as if they were real. Cognitive psychotherapy may enjoy a better reputation nowadays and put forth and into play self-evident truths like the existence of thoughts, but that by no means exempts it from critical philosophical consideration. 
Likewise, if you are a second-wave cognitive-behavioral (CB) therapist, you must believe in and be able to defend the statement: “reality exists and has been decisively and irrevocably quantified.” After all, without a discernible (and already discerned) reality, there can be no such things as cognitive distortions of said reality, and there remains nothing with which to realign a client’s mistaken subjectivity. (Third-wave CB therapists overcome this philosophical hurdle by embracing human subjectivity, even its unpleasant manifestations, without seeking to modify it.) If you cannot defend either of these arguments via non-fallacious means, yet are still conducting either brand of therapy, then fortunately for you: you have some thinking to do!

Arguing against a pragmatic view of religion, Russell (1901) confessed: “I can respect the men who argue that religion is true and therefore ought to be believed, but I can only feel profound moral reprobation for those who say that religion ought to be believed because it is useful, and that to ask whether it is true is a waste of time” (p. 197). Likewise, I can respect the men (and women) who argue that their brand of psychotherapy rests on sound philosophical premises and therefore ought to be practiced, but I can only feel profound moral reprobation for those who say that their brand of psychotherapy ought to be administered simply because it is useful.

Chapter IV – In the Company of Ideas in Psychology (Part II)

Another idea I have become quite infatuated with is the idea that mental illness does not exist. I am not going to concern myself here with the reasons why I believe this idea to be cogent, but with a common mistake people make when trying to prove me wrong. People often tell me mental illness must exist since diagnoses give people comfort. That is yet another example of mistaking value for veracity. I am not concerned with the effects of believing in mental illness, but whether it actually exists or not. Because I have concluded, by weighing the arguments I have come across until now, that it does not, I believe it would be irresponsible—not to mention disingenuous—for me to pretend as if it does, simply to assuage my clients’ distress. (Should you be curious, the effects of labeling are both positive [e.g., Angermeyer & Matschinger, 2005; Deacon & Baird, 2009; Hayne, 2003; Laegsgaard, 2010; Murrie, 2005; Murrie et al., 2007; Wright et al., 2007] and negative [e.g., Angermeyer & Matschinger, 2003, 2005; Deacon & Baird, 2009; Hayne, 2003; Kleim et al., 2008; Lloyd et al., 2010; Schomerus et al., 2010].)

Let us say I were to conclude that mental illness does, in fact, exist. Should I, then, resort to psychiatric diagnoses when explaining my clients’ experiences to them? I should think so. Unlike me, however, many of my clinical colleagues have actually come to the conclusion (or simply accept) that mental illness is real, yet somehow still debate whether to communicate their diagnoses to their clients. That is, to put it bluntly, pure hypocrisy. (A laughable type of hypocrisy, since many of those same people will tell me I am being irresponsible for not telling my clients that they have a given mental illness, an illness that I, unlike them, do not even believe in!) Currently, the debate surrounding diagnostic labels concerns whether we should use them, not whether they embody something real. On the grounds that diagnoses of mental illness may cause stigma, some clinicians decide not to report them to patients, opting for less reductionist, more humanistic terms instead. This anxiety surrounding labels has led many psychologists to become two-faced, speaking the language of humanism with clients, while thinking about and discussing clients with their colleagues using the language of psychiatric reductionism.

This is all plain silly. If mental illness exists, it follows that patients should always be diagnosed with whichever illness they appear to suffer from! If mental illness is just like any other illness, then it does not matter in the least whether learning that one is mentally ill will impose emotional hardships. After all, have you ever heard of a doctor debating whether he should label his patient as having HIV, because he may be discriminated against on account of his infection? Of course not! Diagnoses of physical illness may be hard to take, but we still give them, because they accurately represent what is happening to a patient. And so, if psychiatric diagnoses are the same as medical diagnoses, it follows that we should always give them to patients, even if learning that one is mentally ill will hurt.

Chapter V – In the Company of Ideas in Psychology (Part III)

I have argued that psychotherapy should be based on fact. Yet, at the same time, I implied earlier that psychotherapy is a predominantly value-based endeavor (compared, that is, to medical intervention, which, to be sure, also involves value-based decisions, but not to the central extent found in psychotherapy) that rallies rhetorical, relational, and experiential processes (as opposed to medicine’s use of basic speech and physical instruments*) in service of its moral aims, which center around existential-humanistic matters like “what people do” and “what people ought to do” (as opposed to physicalist matters like “what the body has,” in the case of medicine). Values, of course, are not scientific entities, nature being morally indifferent. This raises the question: if we are willing to tolerate the application of values in psychotherapy, why not also enlist other fictional constructions, such as psychodynamic or psychiatric entities like the unconscious or mental illness? More succinctly, can psychotherapy ever truly be based only on fact?

The goal of psychotherapy is typically to increase psychological wellbeing. (I use the term “wellbeing” here instead of “health,” so as to avoid any unnecessary confusion between the two, broad concepts of “optimal behavior” and “optimal body.”) This goal involves the following value-statement: wellbeing is more desirable than its opposite. This statement, however, is not grounded in science and thus should not be considered formal fact. As evinced by the existence of natural disasters and the mere potential for violent behavior, nature does not always have our best interest at heart: it could not care less whether we survive and thrive within its confines, or simply suffer our way through life, only to die a meaningless death in the end. Values come into play not only in regards to the goal of psychotherapy, but also in regards to the pursuit of that goal. A correctional psychologist attempting to rehabilitate a violent offender who is quick to anger, for example, might encourage him to learn to cope with his anger without resorting to violence, because doing so will help him lead a more satisfying life, by, say, not scaring away potential resources. But nature, again, does not care whether we better ourselves or not, whether we behave in such a way as to foster or undermine our wellbeing.

The decision, in therapy, that we ought to behave in certain ways, and that we ought to replace certain behaviors with others, will forever be determined by fictional entities (i.e., values). That is, and will forever remain, the nature of therapy. We cannot do anything about that. That being said, it remains possible to favor certain behaviors over others, and to replace those behaviors we do not favor with behaviors we do consider favorable, in ways that are based in objective reality, i.e., in observable phenomena. (Indeed, although “normative judgments cannot properly be regarded as either true or false [,] accepting or rejecting an evaluative judgment [can] depend on judgments that are themselves straightforwardly nonnormative”; Frankfurt, 2006, p. 28-29, italics in original.)

For example, if we wish to help a client increase his wellbeing, we can use observation to tell us which behaviors generally increase wellbeing, and do so consistently, lastingly, and with the least number of harmful consequences. We can also rely on observation to determine the best strategies to use when replacing behaviors that decrease wellbeing with others that increase it. Note here that nowhere in nature is it prescribed that life should be pleasant, but science can still be helpful in telling us how to best accomplish this goal. (On a related note: although the selection of adaptive behaviors, and the elimination of maladaptive in favor of adaptive behaviors, can be based on scientific inquiry, the recommendation that clients live and that therapists practice according to empirically derived knowledge will always be value-based. In the words of philosopher David Hume, 1739, just because a behavior is related to increased wellbeing, or just because a therapeutic strategy is helpful in replacing behaviors that decrease wellbeing with behaviors that increase wellbeing, does not mean that it ought to be enacted.)

Thus, we would be warranted in urging a habitual substance abuser to consider substituting his behavior with another, more happiness-friendly behavior, not because this is what nature intends for that individual to do, but because objective observation tells us that using certain quantities of drugs and alcohol, while creating legitimately pleasant states of mind in and of themselves, increases wellbeing with only relative efficacy (pleasant emotions may be interrupted by unpleasant physical symptoms, or other sources of pleasure may be compromised, like one’s profession), reliability (unpleasant emotional experiences may sometimes, inadvertently and unexpectedly, become more salient), durability (pleasant emotions are fleeting), and sustainability (pleasant emotions may become hollow or require more intense consumption to come about at the same level of potency). Using certain quantities of drugs and alcohol also entails potentially detrimental consequences to others (like family and friends). Moreover, we would be warranted in suggesting to the habitual substance abuser certain change strategies over others, because observation tells us that some strategies neutralize cravings better than others.

Further, misbehaviors targeted for substitution in psychotherapy can always be described and explained in factual, or at the very least parsimonious and transparent, ways. Thus, we would be mistaken to describe habitual substance misuse as an id-motivated regression to the oral stage of psychosexual development, or as a chronic mental illness, because these concepts, while accurately reflecting the existence of particular behaviors, rely heavily on allegory (which often entails the creation of extraneous entities) to describe and explain these behaviors, and also fail to wear their moral loading on their sleeve (unlike, say, the expression “life difficulty,” which openly acknowledges its value-based underpinnings).


I have discussed en long et en large what I have learned about myself over the past four years contributing to A Heck of a Kerfuffle.com, while focusing specifically on the topic of ideas in the field of psychology. I have, however, purposefully omitted one particular detail surrounding ideas that I now wish to consider with you: ideas are meaningless if they cannot be (and have not been) shared. We have now set foot upon the final, most agonizing step of the writing process: bidding one’s work farewell. Allowing a work to venture into the public sphere is, for two reasons, quite unnerving. First, writers no longer exert power over their creation, for it does not belong to them anymore; it is out there, in the reader’s mind, becoming something “more,” a mix of the writer’s associations and the associations they, in turn, trigger in the reader. Second, allowing others to read our work demands we open ourselves up to criticism, make ourselves vulnerable in a way. However nerve-racking it is at times to conceive of and raise an idea, help it grow into a full-fledged and freestanding essay, then free it into the world to fend for itself, it remains an exciting, stimulating process. I would go so far as to say it is addictive. In this spirit, many thanks to those of you who have taken the time to visit and rummage your way through my blog in the past four years. I am also grateful for the constructive feedback some of you have sent me via electronic mail. And on this note, to one more Kerfuffle of a year!


* To be sure, clinical psychologists boast an arsenal of instruments at their disposal, mainly for diagnostic purposes. However, diagnostic tests in psychology do not allow psychologists to “diagnose” in the typical, medical sense of the word.

For the most part, diagnostic tests in psychology (e.g., the Beck Depression Inventory; BDI) identify a series of behaviors that are statistically correlated with one another; dub these behaviors “symptoms” individually and “disease” collectively; determine whether any of the testee’s behaviors match any of the behaviors defined as symptoms; and, given a pre-determined number of positive responses, allow the tester to conclude that the testee suffers from a disease. “To logicians,” Greenberg quips while discussing the flaws inherent in such a system of diagnosis, “this is known as assuming your conclusion as your premise, or begging the question” (2010, p. 129). Conversely, diagnostic tests in medicine, which do not involve circular logic, find evidence of disease independently of symptoms: “A good doctor would never conclude that a person with a sore throat and fever necessarily has a streptococcal infection, and a good scientist would not say that the disease of strep throat is constituted solely by a sore throat and fever. Both would insist that a bacteria must be present to complete the diagnosis” (p. 63). At present, the bacteria equivalent of depression (i.e., a definition of depression that does not include a description of what it is like to experience depression) does not exist, to say nothing of a diagnostic test that can accurately tell if a person has fallen ill with this entity.
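The circularity Greenberg describes can be made concrete with a few lines of code. What follows is a deliberately simplified, hypothetical model of symptom-count diagnosis; the symptom list and cutoff are invented for illustration and do not reproduce the actual BDI items or scoring:

```python
# A hypothetical model of symptom-count diagnosis. The "disease" is
# defined as nothing more than a list of behaviors ("symptoms") plus a
# cutoff score, so concluding "disease present" from the symptom count
# merely restates the definition -- no independent evidence (no
# "bacteria equivalent") is ever consulted.

DEPRESSION_SYMPTOMS = {"sadness", "loss_of_interest", "fatigue", "insomnia"}
CUTOFF = 2  # pre-determined number of positive responses

def diagnose(reported_behaviors: set) -> bool:
    # Count how many reported behaviors match the defining symptom list.
    matches = reported_behaviors & DEPRESSION_SYMPTOMS
    return len(matches) >= CUTOFF

print(diagnose({"sadness", "fatigue", "appetite_change"}))  # prints True
```

Note that the function is true by definition whenever enough listed behaviors are reported: it labels, but, unlike a bacterial culture, it cannot confirm or disconfirm anything beyond its own premise.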

As hinted by Greenberg, underlying, for instance, the BDI’s circular logic is confusion regarding the difference between disease and symptoms. In medicine, “the symptoms of the disease are only the signs of the disease, not the disease itself. In psychiatry, the symptoms constitute the disease and the disease comprises the symptoms” (Greenberg, 2010, p. 63-64). This confusion was earlier echoed by psychiatrist Thomas Szasz, who pointed out: “The term pneumococcal pneumonia identifies the organ affected, the lungs, and the cause of the illness, infection with the pneumococcus. Pneumococcal pneumonia is an example of pathology-driven diagnosis. Diagnoses driven by other motives [Szasz here refers to the diagnoses and motives of psychiatrists] generate different diagnostic constructions, and lead to different conceptions of disease” (1974/2010, p. 277). Tellingly, the problem surrounding the proper usage of the terms “disease” and “symptoms” in mental health is even acknowledged by the authors of the Research Agenda for the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition (DSM-V; Kupfer, First, & Regier, 2002): having conceded that “the field of psychiatry has thus far failed to identify a single neurobiological phenotypic marker or gene that is useful in making a diagnosis of a major psychiatric disorder” (p. 33), the authors go on to predict that “[once] it is possible to define a mental disorder based on the identification of its underlying pathology [a prediction that is based on the convenient reasoning that “proof must be coming,” which, incidentally, also begs the question: why are we speaking as if the proof already existed?], then it would surely make sense to follow the course of other medical conditions and have the presence of disorder be based solely on pathology and not on the effect this pathology exerts on the individual’s functioning” (p. 208).

On top of rendering diagnostic tests in psychology (not to mention the DSM itself) utterly useless by reducing them to complicated labeling machines (as opposed to the explanatory instruments these tests are modeled after), the confounding of disease and symptom in psychology often leads to improper use of language (which Szasz would likely qualify, depending on the speaker’s intentions, as base rhetoric). For instance, psychologists will often utter nonsensical diagnostic statements like “Martha is delusional because she has schizophrenia.” Given that schizophrenia includes delusional beliefs in its definition, this statement amounts to a tautology, redundantly stating the same idea twice, only in different words. Conversely, a similarly structured statement in medicine makes perfect sense: “Martha has a throbbing headache because she has a tumor lodged in her brain.” A tumor is not a headache, and so it is acceptable to say that one is responsible for Martha’s headache.

