

The following essay was started about three years ago and finally finished only recently. As you might notice, it is permeated with a certain feeling of anger regarding my chosen profession of applied psychology, a feeling that was originally sparked over the course of my university training (which is when I first sat down to write this essay). The reason for this feeling, which mainly has to do with what I perceive as a rampant lack of skepticism within the field of psychology, is acknowledged and elaborated upon in the body of the text. Given that, since graduating and obtaining my license to practice psychology, I have had the opportunity to work with colleagues who treat ideas in our field not as hard-and-fast truths but as objects open to discussion (which may or may not invalidate them), that feeling of anger has largely subsided over time. Although I finished writing this essay in a different emotional state than the one in which I started it, I have chosen to leave intact whatever angry feelings seeped through my words when I first typed them—for the most part, to avoid invalidating my past self, but also to avoid invalidating my future self, who, I am sure, is not immune to a resurgence of frustration regarding his well-loved (another, complementary feeling I hope comes through in my essay) profession.


A Heck of a Kerfuffle recently celebrated its fourth birthday. On January 14, 2015, to be precise. During our first four years of “operation,” more than 35 short- to full-length articles exploring a variety of subjects were published. Further, we played host to more than 2,000 views per year from online travelers. In honor of this milestone, I thought I would share a little bit about what I learned on my first literary venture into cyberspace.

I would have loved to tell you that this blog was born out of sugar cups and rainbows. Alas, its origins are a tad more somber. The idea to develop my own blog, you see, was actually the product of frustration. Two years into my doctoral program in clinical psychology, I was exasperated. I had begrudgingly realized that the mandatory courses I was made to take embodied, at best, a forum for only a limited number of ideas and narrow discussion. Further, it appeared that fellow students were all too content feeding on the particular brand of knowledge professors were choosing to dish up. The professors, for their part, seemed to have done the same during their own education, since none of them could explain to me how they had come to their conclusions, as if conclusions did not flow from underlying premises but instead spontaneously materialized out of la vérité itself.

Intent on compensating by fashioning my own education, I started to read. A lot. Books on the history of psychology and psychological theories, to find out how we got to where we are and if we happened to miss or leave behind important insights into human behavior. Books on alternate kinds of psychotherapy, because the training offered to me was either too esoteric (i.e., psychodynamic therapy) or too rigid and presumptuous (i.e., traditional cognitive-behavioral therapy). And books putting into question the very state of clinical psychology, to highlight the limits of knowledge we have come to consider absolute. Before long, my brain predictably began to writhe with ideas. But what to do with all of them? Instead of keeping my thoughts all to myself, I decided I would start writing them down, maybe even organize them into cohesive essays and share them with others. And such is how A Heck of a Kerfuffle was conceived. (Because I am not only passionate about psychology, I decided to broaden the scope of my new blog to cover other areas of interest, such as gastronomy and cinema. For the purposes of this essay, however, I choose to focus only on the process of writing about ideas in psychology.)

Before I go into the essays themselves and what writing them taught me about myself, I would like to recount how they came to find a permanent home on the Internet in the first place. Those of you who know me know that I am not particularly proficient with computers, and so learning to build my very own website represented a daunting task. Upon consulting others, I was instructed to: find a web host, purchase a domain name, download web software to create my website, and design it to my liking. While all this makes sense to me now, getting there took some work. Going through the process, I have come to understand it in the following, less technical terms: purchase yourself a plot of Internet land, name your new domain, have builders erect your house, and furnish it to your liking. And voilà! You have got yourself a brand new home in Cyber City for everyone around the globe to visit. While I’m at it, I would like to thank Web Hosting Hub for helping pop my digi-cherry, and for providing consistently stellar service!

As you may have gathered from reading my first essays on my experiences as a clinical psychologist in training, I am skeptically inclined. Because I have elected to become a “mental health specialist,” this questioning attitude primarily manifests itself around the subjects of human behavior (or how to best understand it) and the modification of human behavior (or how to best go about it). In fact, some of my favorite insights from the past five years of clinical training and practice as a psychologist concern these particular issues. While I would describe my relationship with my chosen profession as marked with caution, even downright suspicion, do not get me wrong: I love what I do. I just refuse to let that affection dull my critical faculties and intellectually blind me.

I should specify at this point that I do not advocate fanatical skepticism, or disbelief for the sake of disbelief. On this point, I am in agreement with mathematician Henri Poincaré (1901), who noted: “To doubt everything or to believe everything are two equally convenient solutions; both dispense with the necessity of reflection” (p. xxii). That being said, I believe skepticism encourages reflection more than other attitudes do: thinking that something may well not be true incites one to consider the evidence for why it might actually be true, more so, at least, than thinking that that something is very probably true in the first place. This, of course, presupposes that the skeptic is interested, to begin with, in knowing the truth, regardless of whether or not it accords with his own expectations. To reiterate, from an experiential standpoint, the sense that “This might not be true, but I want to know if it is” provides, in its underlying tension, an impetus toward reflection that the “tension-less” sense that “This is probably true, but I want to know if it is” does not. To be sure, some of us may adopt a skeptical stance because we do not want something to be true, in which case we might not be motivated to consider the evidence against it. This, however, is an example of skepticism misused, in that its purpose is to serve our own biases, not the search for Truth. Thus, skepticism, to be epistemologically fruitful, must always be coupled with a desire to expose reality.

Writing in 1925, the father of psychoanalysis, Sigmund Freud, noted: “When instructing our own disciples in the theory of psychoanalysis, we always observe how little impression we make on them in the beginning. They accept the analytical teachings with just as much equanimity as any other abstractions which have been fed to them. Some of them may have the earnest desire to be convinced, but there is no trace that they ever really are convinced” (p. 64). It appears times have changed. In contrast to Freud’s observations, I noticed, over the course of my training as a therapist, that many of my classmates wholeheartedly accepted all that modern clinical psychology is and has to offer: clinical psychology, they implicitly insisted, is an infallible science, so why think twice about it? It is, specifically, just like medicine, and so should command the same sort of deference. Psychologist Gary Greenberg observed a similar trend, only from a professor’s point of view, and identified a “top-down” process of misguided attitude transmission: “[Most] students seem oblivious to the crucial epistemological problems that haunt their discipline. Their education continues to consist of largely technical training based on the assumption that they are ‘doing science’” (1997, p. 257).

I, for one, refuse to take part in such unrestrained naïveté. I understand we want to feel like what we do is important—and it can sometimes be—but I cannot merrily ride along on the medical bandwagon. However much we may couch clinical psychology in the language of science, clinical psychology is just not medicine. Specifically, what we refer to as “mental illness” has very little to do with actual, physical illness. Likewise, what we refer to as “psychological treatment” has very little to do with actual, medical intervention. Indeed, in both instances, any similarities can best be described as specious. As such, using medical terms like “illness” or “treatment” to describe the problematic behaviors we seek to change in our clients, and the conversations we use to help our clients change, is simply misleading. As critics have charged for decades, our brand of science is decidedly “soft,” whereas medicine’s is anything but.

Psychologist Jeffrey Kottler (2010) tells of “writers who believe that therapy, as a profession, could quite legitimately be housed in an academy of dramatic arts instead of a school of education, health, social work, medicine, or liberal arts. In this setting, therapists would speak of their craft as professional conversation, strategic rhetoric, or even a genre of interactional theater” (p. 297). I am inclined to agree: psychological treatment is actually reflective, didactic conversation about mainly moral issues, and thus very unlike actual, medical treatment (to find out more on the difference between mental and physical cures, see Szasz, 1987). On the subject of psychotherapy, the philosopher Jean-Paul Sartre even went so far as to claim that “[there] is philosophy, but there is no psychology. Psychology does not exist; either it is idle talk or it is an effort to establish what man is, starting from philosophical notions” (cited in Rybalka, 2002, p. 245).

I understand it is hard to enter a helping relationship when so much of what we do is so, for lack of a better word, fuzzy. But I believe it is imperative that we acknowledge this in order to do good work; otherwise, we are just being pretentious, when humility—my own values tell me—should drive us as responsible and effective vocational helpers. Renowned therapists themselves identify lack of humility as not only inimical to successful therapy, but also responsible for their worst therapeutic failures (Kottler & Carlson, 2002). Indeed, thinking of ourselves as purveyors of steadfast truths about life and how best to live it can have catastrophic consequences: for instance, when we deny, ignore, or invalidate our clients’ own truths, we risk threatening the integrity of their very selves (Rowe, 1994). Thus, because so much of what we do is so (let us settle on) nebulous, I believe it is imperative that we continually and actively reflect on our profession’s professed tenets, never blindly following them, while always keeping in mind the limitations of those tenets we have settled on and chosen to abide by.

In the introduction to his no-holds-barred book on the evils of psychotherapy, aptly titled Against Therapy, ex-psychoanalyst Jeffrey Masson (1994) specifies: “The fact that some psychotherapists are decent, warm, compassionate human beings, who sometimes help the people who come to them, does not shelter the profession itself or the practice of that profession from the criticism I make in this book. It only means that they function in this manner in spite of being psychotherapists, and not because of it” (p. 41). While I do not share many of Masson’s conclusions regarding the ethics of psychotherapy, I do agree with him on one point: the psychologists best suited to help clients are probably those who do not take their profession—in its current state, at least—all that seriously. Some authors (e.g., Engelhardt, 2004) have even gone so far as to suggest that psychologists should, for their clients’ sake and benefit, pretend as if what they know to be true about human behavior (and, more precisely, misbehavior) is not true at all!

I do not believe that clinical psychology ultimately amounts to a science (although particular helping strategies can certainly be studied scientifically); I consider it more of a philosophically flavored art. Many of my colleagues would respectfully disagree. However, even if we were to agree to call what we do science, the philosophy underlying what we do still could not be denied. As philosopher Daniel Dennett (1995) reminds (actual) men and women of science, “[scientists] sometimes deceive themselves into thinking that philosophical ideas are only, at best, decorations or parasitic commentaries on the hard, objective triumphs of science, and that they themselves are immune to the confusions that philosophers devote their lives to dissolving. But there is no such thing as philosophy-free science; there is only science whose philosophical baggage is taken on board without examination” (p. 21; emphasis added). And so, regardless of whether we believe we are “mental health scientists” or simply life coaches, the philosophy of psychology cannot be denied. In fact, it is essential that it be ascertained.

But why bother thinking about the philosophy underlying clinical psychology (or any other human endeavor, for that matter)? Since I could not possibly say it any better, however hard I tried, I call upon philosopher Bertrand Russell to tell us why:

Because “[the] man who has no tincture of philosophy goes through life imprisoned in the prejudices derived from common sense, from the habitual beliefs of his age or his nation, and from convictions which have grown up in his mind without the cooperation or consent of his deliberate reason. To such a man the world tends to become definite, finite, obvious; common objects rouse no questions, and unfamiliar possibilities are contemptuously rejected. As soon as we begin to philosophize, on the contrary, we find […] that even the most everyday things lead to problems to which only very incomplete answers can be given. Philosophy, though unable to tell us with certainty what is the true answer to the doubts which it raises, is able to suggest many possibilities which enlarge our thoughts and free them from the tyranny of custom. Thus, while diminishing our feeling of certainty as to what things are, it greatly increases our knowledge as to what they may be; it removes the somewhat arrogant dogmatism of those who have never travelled into the region of liberating doubt, and it keeps alive our sense of wonder by showing familiar things in an unfamiliar aspect.” (1912, p. 91)

Psychotherapy is one such “common object,” which, once prodded with the rod of philosophy, quickly reveals itself (for many, at least) to be somewhat different from what it is normally made out to be. I will later discuss how the common object of “mental illness” meets a fate similar to psychotherapy’s when handled philosophically.

The psychoanalytically minded among you will no doubt attempt to discount my cantankerous attitude toward clinical psychology by postulating potential biographical causes that may have brought it about in the first place. Perhaps, having time and again been hurt and disappointed by objects I thought I could count on, I can no longer trust those that presently inspire affection in me. Even if that interpretation were true, the “etiology” of a belief system, as psychologist William James (1901) takes great care to explain in his seminal qualitative study of the religious experience, has absolutely no bearing on its accuracy. After all, what belief claims no psycho-historical causes or reasons whatsoever as precursors (beyond a probable epistemic desire “to know,” that is)? For instance, some historians have, according to astronomer Carl Sagan, said of Isaac Newton that he “rejected the philosophical position of Descartes because it might challenge conventional religion and lead to social chaos and atheism” (1996, p. 258). But how “Newton was buffeted by intellectual currents of his time […] has little bearing on the truth of his propositions.” Sagan goes on to describe a similar attempt to discredit Charles Darwin, one that commits the added mistake of confusing cause and effect. Having said all of this, there is some validity to the hypothesis that my suspicions are at least partly grounded in my past. As a child, I was asked to assume the truth of countless religious postulates regarding how the world works and how to best direct human behavior; when I started to think, I noticed none of them—to my satisfaction, at least—did the explanatory or regulatory job they were specifically fashioned to do. This left a sour taste in my mouth: I felt misled by authority figures I trusted.

It is then that I learned the power of independent, critical thinking in shaping my own understanding of the world. (I appreciate the value of suspending thought and just having faith, but I cannot endorse faith as a standalone life philosophy. Since any belief can be accepted as true based on faith alone, all faith-based beliefs are consequently equally true. In other words, faith alone cannot make the probably true rise above the probably not true. You need reason for that.) Historically, no knowledge (except when resting on faith) has ever deserved the qualifier of “ultimate.” After all, knowledge builds upon itself, moving away from inadequate accounts, arguably ad infinitum (an observation made by the philosopher of science Thomas Kuhn in his seminal book on the titular structure of scientific revolutions). In fact, during my very first year of university, my History of Psychology professor likened the idea of “facts” to a modern fiction, insofar as facts are final and should not technically evolve. And so, I do not see the value in assuming that what I hold to be true now will necessarily be true 50 or 100 years from now.

Some of you may be thinking at this point: “Why believe in anything if it is just a stopover on our way to Truth?” First of all, what we currently hold to be true may not be a stopover at all, but actually the final destination. However, only time will be able to tell us this (not in a decisive sense, to be sure, but in a probabilistic sense, i.e., through the progressive accumulation of evidential weight). In fact, all knowledge, be it in Science or Philosophy, is tentative to some degree, being only one body of counter-evidence (or counter-arguments) away from modification or outright withdrawal. Second, there are presumably units of knowledge that stand closer to the final destination than others, or at least stand a better chance of leading us toward it as opposed to some epistemological dead end. Thus, I am not advocating that we not hold on to present-day knowledge. After all, without intellectual markers to situate us, we would flounder about the world directionless. What I mean to say is: we should simply hold on to this knowledge with a loose grip (a move which can also make it easier to reach for the next epistemic monkey bar). (For simplicity’s sake, I will continue to use the terms “fact” and “truth” [and all variants thereof], but in the tentative, “as can presently best be understood” sense of the words. With one exception: when “truth” is capitalized, it is meant to evoke ultimate Truth, that which scientific revolutions rotate toward.)

Chapter I – In the Company of Ideas

In writing about ideas, I have continued to learn more about my receptivity toward them. As briefly mentioned above, my reluctance to accept everything I am told as automatically true is a part of me I have been aware of for some time now. What has been more of a revelation is how I as an individual engage with immaterial (but still very real) entities like ideas. I have to admit: relationships with ideas can be tricky. You may fall immediately in love with one, or it may grow on you in time. Once you have selected a suitable idea for possible “appropriation,” you give yourself to it completely: it is yours, you are its. Eventually, you may reluctantly come to think you can somehow make it better, only to find out it stubbornly resists change. Frustrated, you may begin to seek out, or simply let yourself be seduced by, a more attractive idea. However heart-breaking the thought, you may decide to ditch your previous idea, because—as you repeatedly tell yourself, as if to assuage some kind of doubt—“This new one is it.” Come to think of it, idea selection bears a striking resemblance to mate selection!

For this reason, I have learned to be more cautious when considering adopting ideas. That is not to say I have not also had to learn to be more tolerant of uncertainty while ideas are being considered (or dated, if you will). Greenberg (2010) counsels: “[When] it comes to important and complex questions, the best approach is to leave yourself in doubt for as long as possible, to live with inner conflict rather than to end it, to withstand yourself rather than to become someone different, to understand you arrived at an important juncture rather than strike out down a road simply for the sake of getting on with life” (p. 7). Adopting a hopeful view of this emotionally arduous fact-finding process, physicist Lawrence Krauss even predicts that “[lack] of comfort means we are on the threshold of new insights” (2012, p. xv). And so, I never abandon an ostensibly sound idea that has made it past my skeptical defenses without giving my relationship with it every chance it deserves, however trying what lies ahead. I remain attuned to counter-arguments opposed to my idea. I evaluate their cogency. I revise my own arguments in support of my idea to address the counter-arguments. If an idea repeatedly fails to hold its own when challenged or keeps resisting improvement, I (I must admit, halfheartedly) abandon it, perhaps even for the very idea that brought about its downfall.

An example: my earliest memory of me participating in a college class—Introduction to Sociology—sees me arguing against the idea of ethnocentrism by appealing to a universal moral standard. However much I enjoyed the thought of absolute moral judgments, I eventually came to question their existence. (While it may sound contradictory, I do not, in any way, advocate normative moral relativism, whereby any behavior should be tolerated simply because there exist no objective behavioral standards in Nature.)

While learning to live with ideas and getting acquainted with them, I have also learned a great many things about them. First off, just because an idea sounds counterintuitive does not mean that it somehow goes against Truth. On the subject of moral behavior, Russell once commented: “[Conscience] is a most fallacious guide, since it consists of vague reminiscences of precepts heard in early youth, so that it is never wiser than its possessor’s nurse or mother” (1901, p. 74). Likewise, intuition is no more informative a guide to belief. Because intuition rests on assumptions about what Truth should be, as perhaps outlined by prior education, it does not necessarily orient us toward that which is actually True. Experience is, in turn, no more reliable a guide. Speaking of clinical experience, self-avowed Freud basher Frederick Crews (1995) concludes: “Standing alone, [it] is not a probative tool but an inducement to complacency and tunnel vision” (p. 7). Moreover, I have learned that while I will not assume the truth of the status quo, I will always remain open to it being true (or at the very least a step toward Truth).

However demanding thinking about ideas may be, writing about them poses a different set of challenges. Whereas there is no end to thinking (you can keep doing it for as long as you want or possibly can), there is one to writing (provided you want to share your writings at some point). Since I rarely write about ideas I am completely done thinking about, this final quality of writing can be problematic. Further, the very attempt to translate shapeless ideas into definite symbols may change one’s understanding of them. Screenwriter Charlie Kaufman, in a 2008 interview for the Writers Guild of America, explains: “Part of the thing that happens when you’re writing, especially when you’re writing one piece over an extended period of time, is that you have an evolving understanding of the world and an evolving understanding of the piece. And so, if you’re trying to be truthful, you start out with one idea, and as you become more familiar with it, or explore different aspects of the idea, different things become revealed to you, and you have to incorporate that. That becomes a bit of a hindrance when you’re writing, but I guess that’s the way I like to write.” And that is the way I like it too.

Chapter II – In the Company of Ideas that are (Probably) True

I have thus far discussed Truth with no mention of its nature, and so will say a little bit about it now. Jeff Winger, the self-assured protagonist of the television series Community, puts it this way: “The biggest truths aren’t original. The truth is ketchup. It’s Jim Belushi. Its job isn’t to blow our minds. It’s to be within reach” (2010, S1/E14). I agree with Winger’s first statement: many observations are self-evident (i.e., very unlikely ever to be discounted by new evidence), truisms that barely need stating. That being said, I disagree with his subsequent statement: truths are not always easily discernible. As Algernon replies to Jack in The Importance of Being Earnest, “[the] truth is rarely pure and never simple” (Wilde, 1899/1990, p. 6). Presuming that Truth exists independently of human perception, it may not always be easily apprehensible by the mind. Because Truth does not exist for us, its job cannot possibly be to exist in such a way as to always be ascertainable, as if tailored for our intellect. In fact, Truth holds no purpose; it just is, and what it is cannot, unfortunately, always be within reach. Discerning Truth often requires painstaking effort instead. It is we who must adapt ourselves to it, not vice versa. As Krauss remarks, it is sometimes necessary that “we expand our horizons because nature is more imaginative than we are” (2012, p. 77). Miss Giddens, in The Innocents, is even warned by her employer that the “truth is seldom understood by any but imaginative persons” (Clayton, 1961).

Presuming there are things to know about our universe, what are the best ways to discern them? I have talked about how I prefer not to rely on potential indicators like authority, intuition or experience. Other classic aids are Science, Reason, and Faith. (I make a distinction between Science and Reason for reasons that will become clear shortly.)

As many of you already know, Science concerns itself only with that which can be falsified (i.e., determined to be untrue via observation or experimental testing). For example, Science can readily assess the statement “tortoises are faster than hares,” because its opposite can easily be measured. In this way, the investigative scope of Science is fairly limited. Although Reason is inherently part of the scientific process, Reason can also be used on its own to assess statements that cannot be falsified. For example, although the statement “Life is actually a dream” cannot possibly be proven untrue, Reason can show it to be very improbable (for a compelling argument, see Russell, 1912). (The latter statement is presumably either True or False; it is just that we cannot “know” the answer scientifically, but merely approximate it logically.) In fact, Reason is the primary tool of Philosophy, and so is used to answer questions stemming from every one of its branches, questions that Reason via Science often cannot touch (or touch as persuasively) because of its strict falsifiability requirement (see Klemke, Kline, & Hollinger, 1994, for more on the difference between questions “fit for Science” and questions “fit for Philosophy”). For example, while Science can effectively judge secondary religious beliefs (e.g., “The Earth is 6,000 years old”), only Philosophy can unrestrictedly tackle the primary belief in a deity itself (Pigliucci, 2009). In this way, the investigative scope of Reason is fairly large, if not unlimited. Like Reason, Faith can technically assess any variety of statements, falsifiable or not. For example, one could take it on Faith that tortoises are faster (or slower) than hares, or that life is actually (or not actually) a dream. In this way, the investigative scope of Faith is as large and potentially unlimited as that of Reason.
(One might even say that Faith is superior to Reason when trying to find out what is true: whereas Reason can only show an un-falsifiable statement to be probably true or false, Faith can claim it to be conclusively True or False. Unfortunately, as we shall see, Faith can also show it to be both True and False, which most of us do not consider a helpful conclusion, or a conclusion at all…)

While both Reason and Faith boast equally impressive scopes of enquiry, they are by no means equal aids when it comes to actually making out Truth. Reason (whether applied within the realm of Science or Philosophy) remains most helpful because it can show certain statements to be very probably false, or truer than others. In other words, it can organize statements along a continuum of Truth. Faith, on the other hand, can accept anything as definitively True. For example, one person could take it on faith that “God created the universe,” and another could take it on faith that “A giant, impossibly pink and fluffy bunny-rabbit created the universe.” From a Faith-based perspective, both people would be right, which cannot be, given that the two beliefs cannot both be true at the same time (unless, I suppose, God is an enormous, colorfully furred rabbit). Statistically speaking, then, Faith is too liberal; in other words, it carries too great a risk of false positives. To be fair, some claim that Faith’s purpose is not to know Truth. Nevertheless, those who adopt beliefs based on Faith assume it has oriented them toward an accurate belief. Funnily enough, even people who insist Faith is sufficient when selecting beliefs (e.g., “God created the universe”) will admit that some faith-based beliefs (e.g., “A giant, impossibly pink and fluffy bunny-rabbit created the universe”) just do not make sense. Thus, while they ultimately do value Reason, they just do not think Reason is necessary to support their own beliefs. That is why the smarter among those who initially adopt a belief based on Faith alone ultimately succumb and resort to Reason to validate it. Take, for example, the numerous logical arguments (e.g., Rachels, 2002) for the existence of God, usually taken on Faith alone.
The battle over whether these arguments are cogent is here waged in the realm of Reason, because Faith is, as we have seen, always insufficient, and Science is, in this particular case, out of its element (“God exists” is a non-falsifiable statement). (Note, however, that if one appends “and can interact with the physical world” to “God exists,” the latter statement suddenly becomes falsifiable, and, thus, amenable to scientific enquiry; see Stenger, 2009.)

For this very reason, common religious arguments against the trustworthiness of Science—postulating, say, that “Science can be wrong”—embody not a criticism at all, but merely a restatement of its strength, of the reason why it can be so useful. Because Science can reject hypotheses as inadequate, but never accept any of them as definitively true, Science naturally promotes progress and movement toward Truth. Reason outside of Science, as in Philosophy, can also evolve by way of argumentation. Faith, on the other hand, is inert, deprived of any inbuilt mechanism allowing it to advance away from Error toward Knowledge. To illustrate my point, compare the number of times Science has revised its understanding of nature in the last few centuries to the number of times Religion has revised its understanding of nature in the last two millennia. Close to 2,000 years after the birth of their religion, Christians have only recently begun to seriously consider the possibility that hell, a major element of their belief system, does not exist (Bell, 2011). Science, on the other hand, has not only questioned but also revised its conceptualization of light, one of its own conceptual obsessions, at least three times in the last 350 years: first came particle theory, then wave theory, followed by wave-particle duality (Hawking & Mlodinow, 2010). (Although one can certainly substitute, without resorting to Reason, one faith-based belief for another, it remains impossible to tell, based on Faith alone, whether the new belief is any truer or “falser” than the old one. In other words, trying to understand the world using Faith alone is akin to running a marathon on a treadmill. Whatever you do, you are never behind, never ahead; or at the very least, there is no possible way to know.)

As you may have guessed, I favor Reason (in the form of Science or Philosophy) when attempting to understand the world around me. I will now discuss some of the products of my reasoning over the last few years, in regards, specifically, to psychology.

Chapter III – In the Company of Ideas in Psychology (Part I)

We saw earlier that “etiology” can never determine the accuracy of a belief, because every belief has causes. I argue that in some cases, neither can its consequences. (Note that James, an ardent pragmatist, would have disagreed with this.) When determining whether a belief is true or not, I believe the effects of maintaining that belief have absolutely no bearing on its truth-value, since some truths presumably exist independently of the effects of believing in them. For example, it is not unreasonable to assume that dogs exist regardless of whether believing in dogs is helpful or harmful to humans. Now, that may sound silly, but many people express beliefs that, when translated using canines, sound a little bit like this: dogs must exist because dogs make humans less lonely. Likewise, some defend the existence of God by claiming that without belief in Him, society as we know it would crumble into chaos. But the effect of not believing in God has absolutely no bearing on whether He actually exists. Such people are confounding two debates: the existence of God, and the effects of believing in someone like Him. In short, there is a difference between the veracity and the utility of an idea, two characteristics that are often confused when attempting to demonstrate the former.

Many psychologists commit such a logical mistake when defending their own ideological beliefs (namely, their preferred therapeutic approach). To be precise, they commonly interpret the proven efficacy of a given therapeutic technique as indicating the truth of its underlying premises and postulated entities. If their brand of therapy happens to have more positive outcomes than other brands of therapy, then that must mean their approach is based on fact, and that they are justified in using it when helping clients. In doing so, many therapists “draw hasty conclusions between symptom abatement and interpretation” (Crews, 1995, p. 117). As we have seen, that habit is misguided.

Other psychologists commit a similar mistake when interpreting the finding that all therapeutic approaches are actually equally effective in relieving life difficulties (dubbed the Dodo Bird Verdict), and that therapeutic approach actually plays only a small role in achieving this outcome, as indicating that all approaches are equally (or unequally) valid. In other words, psychotherapy is really a free-for-all: simply pick the one you happen to like best or borrow from here and there. To be sure, the Dodo Bird Verdict has been the subject of much debate, but let us assume for a moment that this is in fact true, that all forms of therapy are equally effective in relieving life difficulties. Does that mean that we, as professional helpers, are warranted in using or sampling from any one of them to help our clients? Provided one values Truth over Deception, not at all. The Dodo Bird Verdict only extends to therapeutic outcome (or value), not therapeutic veracity. (I regret to inform those of you who advocate therapeutic eclecticism so as to avoid thinking about the philosophy underlying each and all therapeutic approaches, that the best eclectic therapists will only borrow from approaches that share the same core philosophical assumptions, but that suggest different strategies based on them [Neimeyer, 1995].)

As I alluded to earlier, an idea should ideally be evaluated via itself, and not via ourselves; otherwise, we are simply evaluating the effect of believing in this idea, as opposed to the idea itself. That is what the Dodo Bird Verdict amounts to: a conclusion as to the effect of an idea, as embodied by a particular approach, not a conclusion as to the legitimacy of this idea, or the approach itself. Even Wampold and his colleagues (1997), who assessed and confirmed the Dodo Bird Verdict, take care to mention in the title of their article that “all must have prizes” only “empirically.” (By empirically, I interpret the authors as meaning, “as far as observable effects are concerned.”) That is, from a wider truth-seeking perspective, all do not necessarily get to take those prizes home. Thus, the actual soundness of a given therapeutic technique cannot be determined via its efficacy, but only via the accuracy of its premises and the entities that those premises engage.

Now, whether Efficacy should trump Truth in therapeutic settings is for you to decide. To be sure, therapy is meant to be helpful, but it is also expected to be truthful (i.e., based on the best knowledge available). To help me demonstrate, consider the following: if a given lie-spewing cult helps make people happy, does that make it a reasonable way of helping people? If you have answered in the negative, then you have no business conducting whichever brand of therapy you favor based solely on the fact that it has been proven to or may possibly be effective. You are only justified in conducting it if you have critically assessed the theory behind your approach to see if it stands up to Reason. For example, if you are a psychodynamic therapist, you must believe in and be able to defend the statement: “nothing in reality is ever what it seems.” (You would think that, as a skeptic, I would admire psychodynamic theory; unfortunately, such theory advocates a fanatical sort of skepticism, where no clear paths toward reliable insights are laid out except those paved by “experts.”) Further, one should be able to explain why, despite the fact that no “distinctly psychoanalytic notion has received independent experimental or epidemiological support—not repression, not the Oedipal or castration complex, not the theory of compromise formation, nor any other concept or hypothesis” (Crews, 1995, p. 298), one is still justified in speaking of these as if they were real. Cognitive psychotherapy may enjoy a better reputation nowadays and put forth and into play self-evident truths like the existence of thoughts, but that by no means exempts it from critical philosophical consideration. 
Likewise, if you are a second-wave cognitive-behavioral (CB) therapist, you must believe in and be able to defend the statement: “reality exists and has been decisively and irrevocably quantified.” After all, without a discernable (and already discerned) reality, there can be no such things as cognitive distortions of said reality, and there remains nothing with which to realign a client’s mistaken subjectivity. (Third-wave CB therapists overcome this philosophical hurdle by embracing human subjectivity, even its unpleasant manifestations, without seeking to modify it.) If you cannot defend either of these arguments via non-fallacious means, yet are still conducting either brand of therapy, then fortunately for you: you have some thinking to do!

Arguing against a pragmatic view of religion, Russell (1901) confessed: “I can respect the men who argue that religion is true and therefore ought to be believed, but I can only feel profound moral reprobation for those who say that religion ought to be believed because it is useful, and that to ask whether it is true is a waste of time” (p. 197). Likewise, I can respect the men (and women) who argue that their brand of psychotherapy rests on sound philosophical premises and therefore ought to be practiced, but I can only feel profound moral reprobation for those who say that their brand of psychotherapy ought to be administered simply because it is useful.

Chapter IV – In the Company of Ideas in Psychology (Part II)

Another idea I have become quite infatuated with is the idea that mental illness does not exist. I am not going to concern myself here with the reasons why I believe this idea to be cogent, but with a common mistake people make when trying to prove me wrong. People often tell me mental illness must exist since diagnoses give people comfort. That is yet another example of mistaking value for veracity. I am not concerned with the effects of believing in mental illness, but whether it actually exists or not. Because I have concluded, by weighing the arguments I have come across until now, that it does not, I believe it would be irresponsible—not to mention disingenuous—for me to pretend as if it does, simply to assuage my clients’ distress. (Should you be curious, the effects of labeling are both positive [e.g., Angermeyer & Matschinger, 2005; Deacon & Baird, 2009; Hayne, 2003; Laegsgaard, 2010; Murrie, 2005; Murrie et al., 2007; Wright et al., 2007] and negative [e.g., Angermeyer & Matschinger, 2003, 2005; Deacon & Baird, 2009; Hayne, 2003; Kleim et al., 2008; Lloyd et al., 2010; Schomerus et al., 2010].)

Let us say I were to conclude that mental illness does, in fact, exist. Should I, then, resort to psychiatric diagnoses when explaining my clients’ experiences to them? I should think so. Unlike me, however, many of my clinical colleagues have actually come to the conclusion (or simply accept) that mental illness is real, yet somehow still debate whether to communicate their diagnoses to their clients. That is, to put it bluntly, pure hypocrisy. (A laughable type of hypocrisy, since many of those same people will tell me I am being irresponsible for not telling my clients that they have a given mental illness, an illness that I, unlike them, do not even believe in!) Currently, the debate surrounding diagnostic labels concerns whether we should use them, not whether they embody something real. Under the pretext that diagnoses of mental illness may cause stigma, some clinicians decide not to report them to patients, opting for less reductionist, more humanistic terms instead. This anxiety surrounding labels has led to many psychologists becoming two-faced, speaking the language of humanism with clients, while thinking about and discussing clients with their colleagues using the language of psychiatric reductionism.

This is all plain silly. If mental illness exists, it follows that patients should always be diagnosed with whichever illness they appear to suffer from! If mental illness is just like any other illness, then it does not matter in the least bit whether learning that one is mentally ill will impose emotional hardships. After all, have you ever heard of a doctor debating whether he should label his patient as having HIV, because the patient may be discriminated against on account of his infection? Of course not! Diagnoses of physical illness may be hard to take, but we still give them, because they accurately represent what is happening to a patient. And so, if psychiatric diagnoses are the same as medical diagnoses, it follows we should always give them to patients, even if learning that one is mentally ill will hurt.

Chapter V – In the Company of Ideas in Psychology (Part III)

I have argued that psychotherapy should be based on fact. Yet, at the same time, I implied earlier that psychotherapy is a predominantly value-based endeavor (compared, that is, to medical intervention, which, to be sure, also involves value-based decisions, but not to the central extent found in psychotherapy) that rallies rhetorical, relational, and experiential processes (as opposed to medicine’s use of basic speech and physical instruments*) in service of its moral aims, which center around existential-humanistic matters like “what people do” and “what people ought to do” (as opposed to physicalist matters like “what the body has,” in the case of medicine). Values, of course, are not scientific entities, nature being morally indifferent. This raises the question: if we are willing to tolerate the application of values in psychotherapy, why not also enlist other fictional constructions, such as psychodynamic or psychiatric entities like the unconscious or mental illness? More succinctly, can psychotherapy ever truly be based only on fact?

The goal of psychotherapy is typically to increase psychological wellbeing. (I use the term “wellbeing” here instead of “health,” so as to avoid any unnecessary confusion between the two, broad concepts of “optimal behavior” and “optimal body.”) This goal involves the following value-statement: wellbeing is more desirable than its opposite. This statement, however, is not grounded in science and thus should not be considered formal fact. As evinced by the existence of natural disasters and the mere potential for violent behavior, nature does not always have our best interest at heart: it could not care less whether we survive and thrive within its confines, or simply suffer our way through life, only to die a meaningless death in the end. Values come into play not only in regards to the goal of psychotherapy, but also in regards to the pursuit of that goal. A correctional psychologist attempting to rehabilitate a violent offender who is quick to anger, for example, might encourage him to learn to cope with his anger without resorting to violence, because doing so will help him lead a more satisfying life, by, say, not scaring away potential resources. But nature, again, does not care whether we better ourselves or not, whether we behave in such a way as to foster or undermine our wellbeing.

The decision, in therapy, that we ought to behave in certain ways, and that we ought to replace certain behaviors with others, will forever be determined by fictional entities (i.e., values). That is, and will forever remain, the nature of therapy. We cannot do anything about that. That being said, it remains possible to favor certain behaviors over others, and to replace those behaviors we do not favor with behaviors we do consider favorable, in ways that are based in objective reality, i.e., in observable phenomena. (Indeed, although “normative judgments cannot properly be regarded as either true or false [,] accepting or rejecting an evaluative judgment [can] depend on judgments that are themselves straightforwardly nonnormative”; Frankfurt, 2006, p. 28-29, italics in original.)

For example, if we wish to help a client increase his wellbeing, we can use observation to tell us which behaviors generally increase wellbeing, and do so consistently, lastingly, and with the least number of harmful consequences. We can also rely on observation to determine the best strategies to use when replacing behaviors that decrease wellbeing with others that increase it. Note here that nowhere in nature is it prescribed that life should be pleasant, but science can still be helpful in telling us how to best accomplish this goal. (On a related note: although the selection of adaptive behaviors, and the elimination of maladaptive in favor of adaptive behaviors, can be based on scientific inquiry, the recommendation that clients live and that therapists practice according to empirically derived knowledge will always be value-based. In the spirit of philosopher David Hume (1739): just because a behavior is related to increased wellbeing, or just because a therapeutic strategy is helpful in replacing behaviors that decrease wellbeing with behaviors that increase wellbeing, does not mean that it ought to be enacted.)

Thus, we would be warranted in urging a habitual substance abuser to consider substituting his behavior with another, more happiness-friendly behavior, not because this is what nature intends for that individual to do, but because objective observation tells us that using certain quantities of drugs and alcohol, while creating legitimately pleasant states of mind in and of themselves, increases wellbeing with only relative efficacy (pleasant emotions may be interrupted by unpleasant physical symptoms, or other sources of pleasure may be compromised, like one’s profession), reliability (unpleasant emotional experiences may sometimes, inadvertently and unexpectedly, become more salient), durability (pleasant emotions are fleeting), and sustainability (pleasant emotions may become hollow or require more intense consumption to come about at the same level of potency). Using certain quantities of drugs and alcohol also entails potentially detrimental consequences to others (like family and friends). Moreover, we would be warranted in suggesting to the habitual substance abuser certain change strategies over others, because observation tells us that some strategies neutralize cravings better than others.

Further, misbehaviors targeted for substitution in psychotherapy can always be described and explained in factual, or at the very least parsimonious and transparent, ways. Thus we would be mistaken to describe habitual substance misuse as an id-motivated regression to the oral stage of psychosexual development, or as a chronic mental illness, because these concepts, while accurately reflecting the existence of particular behaviors, rely heavily on allegory (which often entails the creation of extraneous entities) to describe and explain these behaviors, and also fail to wear their moral loading on their sleeve (unlike, say, the expression “life difficulty,” which openly acknowledges its value-based underpinnings).


I have discussed at length what I have learned about myself over the past four years contributing to A Heck of a, while focusing specifically on the topic of ideas in the field of psychology. I have, however, purposefully omitted one particular detail surrounding ideas that I now wish to consider with you: ideas are meaningless if they cannot be (and have not been) shared. We have now set foot upon the final, most agonizing step of the writing process: bidding one’s work farewell. Allowing a work to venture into a public sphere is, for two reasons, quite unnerving. First, writers no longer exert power over their creation, for it does not belong to them anymore; it is out there, in the reader’s mind, becoming something “more,” a mix of the writer’s associations and the associations they, in turn, trigger in the reader. Second, allowing others to read our work demands we open ourselves up to criticism, make ourselves vulnerable in a way. However nerve-racking it is at times to conceive of and raise an idea, helping it grow into a full-fledged and freestanding essay, then free it into the world to fend for itself, it remains an exciting, stimulating process. I would go so far as to say it is addictive. In this spirit, many thanks to those of you who have taken the time to visit and rummage your way through my blog in the past four years. I am also grateful for the constructive feedback some of you have sent me via electronic mail. And on this note, to one more Kerfuffle of a year!


* To be sure, clinical psychologists boast an arsenal of instruments at their disposal, mainly for diagnostic purposes. However, diagnostic tests in psychology do not allow psychologists to “diagnose” in the typical, medical sense of the word.

For the most part, diagnostic tests in psychology (e.g., the Beck Depression Inventory; BDI) identify a series of behaviors that are statistically correlated with one another; dub these behaviors “symptoms” individually and “disease” collectively; determine whether any of the testee’s behaviors match any of the behaviors defined as symptoms; and, given a pre-determined number of positive responses, allow the tester to conclude that the testee suffers from a disease. “To logicians,” Greenberg quips while discussing the flaws inherent in such a system of diagnosis, “this is known as assuming your conclusion as your premise, or begging the question” (2010, p. 129). Conversely, diagnostic tests in medicine, which do not involve circular logic, find evidence of disease independently of symptoms: “A good doctor would never conclude that a person with a sore throat and fever necessarily has a streptococcal infection, and a good scientist would not say that the disease of strep throat is constituted solely by a sore throat and fever. Both would insist that a bacteria must be present to complete the diagnosis” (p. 63). At present, the bacteria equivalent of depression (i.e., a definition of depression that does not include a description of what it is like to experience depression) does not exist, to say nothing of a diagnostic test that can accurately tell if a person has fallen ill with this entity.

As hinted by Greenberg, underlying, for instance, the BDI’s circular logic is confusion regarding the difference between disease and symptoms. In medicine, “the symptoms of the disease are only the signs of the disease, not the disease itself. In psychiatry, the symptoms constitute the disease and the disease comprises the symptoms” (Greenberg, 2010, p. 63-64). This confusion was earlier echoed by psychiatrist Thomas Szasz, who pointed out: “The term pneumococcal pneumonia identifies the organ affected, the lungs, and the cause of the illness, infection with the pneumococcus. Pneumococcal pneumonia is an example of pathology-driven diagnosis. Diagnoses driven by other motives [Szasz here refers to the diagnoses and motives of psychiatrists] generate different diagnostic constructions, and lead to different conceptions of disease” (1974/2010, p. 277). Tellingly, the problem surrounding the proper usage of the terms “disease” and “symptoms” in mental health is even acknowledged by the authors of the Research Agenda for the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition (DSM-V; Kupfer, First, & Regier, 2002): having conceded that “the field of psychiatry has thus far failed to identify a single neurobiological phenotypic marker or gene that is useful in making a diagnosis of a major psychiatric disorder” (p. 33), the authors go on to predict that “[once] it is possible to define a mental disorder based on the identification of its underlying pathology [a prediction that is based on the convenient reasoning that “proof must be coming,” which, incidentally, also begs the question: why are we speaking as if the proof already existed?], then it would surely make sense to follow the course of other medical conditions and have the presence of disorder be based solely on pathology and not on the effect this pathology exerts on the individual’s functioning” (p. 208).

On top of rendering diagnostic tests in psychology (not to mention the DSM itself) utterly useless by reducing them to complicated labeling machines (as opposed to the explanatory instruments these tests are modeled after), the confounding of disease and symptom in psychology often leads to improper use of language (which Szasz would likely qualify, depending on the speaker’s intentions, as base rhetoric). For instance, psychologists will often utter nonsensical diagnostic statements like “Martha is delusional because she has schizophrenia.” Given schizophrenia includes delusional beliefs in its definition, this statement amounts to a tautology, redundantly stating the same idea twice, only in different words. Conversely, a similarly structured statement in medicine makes perfect sense: “Martha has a throbbing headache because she has a tumor lodged in her brain.” A tumor is not a headache, and so it is acceptable to say that one is responsible for Martha’s headache.



I remember my very first Astronomy course, which I selected as one of my required “real science” electives during my early education as a “social scientist.” Only a few lessons into the semester, the class tackled Albert Einstein’s famed general theory of relativity. The professor explained the theory using two chairs, a bed sheet and a paperweight. Despite this innovative approach, I left the course embarrassed to have (barely) understood relativity only in terms of furniture, linens and office supplies.

For that very reason, I have since kept my distance from Einstein—until, that is, I grabbed The World as I See It from a local bookshop shelf. The book is a compilation of various works (articles, essays, letters and such) written by the celebrated scientist between the two World Wars. While the original 1949 edition also contained science-themed works, the abridged edition I brought home (released in 2006 by Kensington Publishing Corp.) conveniently leaves only content related to human affairs, from the Meaning of Life to International Politics, Pacifism to Judaism.

The World as I See It is a challenging book to review because no central idea emerges. Taken as a whole, however, the compilation reveals the thoughts and concerns of a man who cared deeply about human wellbeing and international harmony. It is fascinating to explore Einstein’s thoughts regarding subjects of relevance to us all, for once allowing an internal discussion with Einstein that does not require familiarity with his specialized set of knowledge. In this spirit, I focus this review both on the themes I felt Einstein developed with particular thoughtfulness, and on my personal reactions.

On Nature and Religion

Einstein describes himself as religious to the degree that he stands in constant awe of the scientifically impenetrable beauty of the cosmos. However, “[to] tack this on to the idea of God,” he grumbles, “seems mere childish absurdity” (p. 104). As Carl Sagan remarks in his seminal Cosmos documentary series (1980; Episode 2: One Voice in the Cosmic Fugue), to explain nature through magical means dampens its majesty. Regarding metaphysical postulates in general (e.g., deities, life after death), Einstein dismisses these as the desperate creations of “feeble souls” (p. 7) blinded by fear and egotism.

The universe humbles Einstein, and his willingness to stand in awe before nature without appealing to supernatural forces embodies his own idiosyncratic religion. From this particular point-of-view, the irreligious life, he believes, much like the unexamined variety, is simply not worth living. I am not certain that the romanticization of nature and our relation to it is completely warranted, yet it is nevertheless reassuring to find that genius can be compatible with spiritual sensibility. Regardless, Einstein’s conceptualization of religion as involving a sense of mystery (as opposed to one of mysticism) certainly sheds new light onto his famous assertion that “science without religion is lame, religion without science is blind” (1940, p. 606).

While I agree with Einstein that religious beliefs are misguided, I certainly do not consider those who entertain such beliefs to be deficient: believers’ “souls,” their inner core as human beings, are not “feeble.” Life inspires innumerable questions, and we all do our best to answer these to the best of our ability; there is no need to demean or vilify those who settle on different answers. Life can be lived fully even when grounded within inaccurate conclusions, religious or otherwise. For example, a popular piece of secular advice urges us to be optimistic at all times, when realism is probably more sensible (Greenberger & Padesky, 1995).

And so, while I value a factually lived life, that does not mean a fantastically lived life has no value, or is feebly subpar. Those who fail to live factually do not ipso facto fail at life. Some believers, on the other hand, accuse non-believers of neglecting the God-shaped hole in their hearts, based on the degrading assumption that a person can only become “whole” once he or she accepts God into his or her heart. Jesus Christ Himself judged those somehow different from himself (whether in body or belief) to be inadequate. When blind Bartimaeus begged Him to restore his sight, Jesus offered the following words of comfort: “[Thy] faith hath made thee whole” (Mark 10:52, KJV). The man at once regained his sight. This story, whether taken literally or as metaphor, is simply offensive: Bartimaeus, whether physically or spiritually blind, was never not whole. In some translations, “whole” appears as “well,” but that does not make Jesus’ claim any more accurate: physically and spiritually blind people can lead perfectly fulfilling lives. In any case, Bartimaeus’ leap of faith was not curative, as Jesus sought to imply, because there was no “disease” to speak of in the first place.

On Society and the Self 

Einstein reminds us that we are defined by our relationships with others: “[The] individual is what he is and has the significance that he has not so much in virtue of his individuality, but rather as a member of a great human society, which directs his material and spiritual existence from the cradle to the grave” (p. 10). Thus, the life lived entirely for the other is deemed especially worthy. In fact, Einstein holds in the highest regard those of generous spirit, who contribute to society via the arts or the sciences with the intent to enhance or ameliorate the lives of its members: “The true value of a human being is determined primarily by the measure and the sense in which he has attained to liberation from the self” (p. 10).

Einstein no doubt uses the expression “liberation from the self” to denote “concern for others,” as opposed to “discounting of the self.” Taking the sentence at face value, however, I wonder: is complete liberation from the self truly necessary for us to fulfill our true worth? Many religions postulate that the self is inherently inclined toward evil, and that this inborn tendency tempts us to dabble in sin; likewise, contemporary psychology postulates that inner flaws cause unhealthy behaviors. To ensure liberation from this broken self, religion encourages relinquishing oneself to a higher power, whereas psychology prescribes, ironically enough, a hefty dose of therapist-assisted self-absorption. As we shall see, it turns out that both religion and psychology are wrong, in that self-related shortcomings do not necessarily have anything to do with behavior. In short, there is nothing in the self to actually liberate ourselves from!

The idea of the self as innately inadequate, and therefore of self-fulfillment as release from the self, can be traced back in history to the Old and New Testaments. According to the Good Book as interpreted by former evangelical preacher Dan Barker (2008), Man is inherently Evil (Psalm 14:3, Psalm 51:5, Romans 3:10, Romans 3:23). He can, however, shed his deep-seated inadequacy by submitting his entire person to that of Jesus Christ (Acts 5:31). Christ, always the diplomat, said of Man that, while he is still capable of Good, he remains nonetheless intrinsically Bad (Matthew 7:11). Ironically, Christ’s own Father had Himself an affinity for the Dark Side (Isaiah 45:7, Jeremiah 18:11, Lamentations 3:38, Ezekiel 20:25, 26), sometimes preferring to live in Darkness (II Samuel 22:12, I Kings 8:12, Psalm 18:11, Psalm 97:1-2).

According to psychiatrist Thomas Szasz (1988), the concepts of sin and inborn evil represent initial attempts at making sense of undesirable behavior. The postulate that we behave badly because we are inherently bad, however, appears to be misguided. As psychological researcher Robyn Dawes (1996) explains: “The assumption that behavior we dislike or condemn is due to internal problems is religious” and “not established by empirical science” (p. 282). Nevertheless, this religious assumption has found new life amongst many of today’s psychological theories about the self. Specifically, we believe that personal shortcomings cause unhealthy behavior. To rectify this, the “vile” self must be “purified” in therapy, substituting psychological weaknesses with psychological strengths, thereby bringing about healthy behavior.

While Christianity locates absolution outside of the self in the person of Jesus Christ, psychology locates absolution within the person him/herself, conceptualizing the self as not only the source of negative behavior, but of positive behavior as well. Dawes (1996) appropriately dubs such deification of the self “egoistic individualism.” As he warns, however, “[professional] psychology’s harping on the self—and in particular on how the self feels about the self—as the focus of all desirable or undesirable behavior” (p. 282) is empirically unwarranted. In his revelatory book’s empowering conclusion, Dawes reaffirms what contemporary psychology insists on hiding from us:

“It is simply not true that optimism and a belief in one’s own competence and prospects for success are necessary conditions for behaving competently. Good feelings may help, but they are not necessary. Moreover, we do not need to believe that in general we are superior, we are invulnerable, and the world is just. […] It is not true that we are slaves to our feelings or to our childhood experiences [.] More importantly, we do not have to feel wonderful about ourselves and the world in order to engage in behavior that is personally or socially beneficial” (p. 293; italics in original).

In short, both religion and psychology have failed to recognize the basic truth that inner perfections (Goodness or mental health) no more determine outer successes (saintly or healthy behavior) than inner imperfections (Evil or mental un-health) determine outer failures (sinful or unhealthy behavior). Dawes also happens to resent the idea that attaining happiness is life’s ultimate goal. He quotes poet Yevgeny Yevtushenko, who counseled his readership to reject “the vulgar, insultingly patronizing fairy tale that has been hammered into your heads since childhood that the main meaning of life is to be happy” (1996, p. 277). Einstein himself echoes this sentiment when he confesses: “I have never looked upon ease and happiness as ends in themselves [.] The ideals which have lighted me on my way and time after time given me courage to face life cheerfully, have been Truth, Goodness, and Beauty” (p. 4).

On Goodness and Humanity

Einstein claims that the ethical life is “based effectually on sympathy, education, and social ties” (p. 30). “No religious basis is necessary” (p. 30), he continues, because “there is nothing divine about morality,” it being a “purely human affair” (p. 31). In the animated musical The Prince of Egypt (1998), Moses’ father-in-law Jethro, in apparent disagreement with this, urges him to look at his life “through Heaven’s eyes” (Schwartz, track 7). Jethro’s advice, however well-intentioned, may in reality be unsound, for philosophers of the non-theistic persuasion have provided ample support for the assumption that morality is in fact, as Einstein put it, a “purely human affair.”

According to Barker (2008), there is no evidence that the Higher Power purported to exist by many religions is the source of all Good, and therefore the benchmark against which to measure all values. In nature, God’s own creation, the higher the life form, the more capable of destruction it is. In fact, there is nothing to say that God, the highest power, could not commit the most heinous crime. The Old Testament, after all, is laden with offences committed by God: mass-murder (e.g., the flooding of Earth, the razing of Sodom, the eradication of Egypt’s firstborns, the slaying of the 42 youths near Bethel), endorsement (Leviticus 27:28-29) and acceptance of human sacrifice (Judges 11:30-39, II Samuel 21:8-14), endorsement (Exodus 21) and practice of slavery (Judges 3:8, 3:14, 4:2-3, 6:1, 13:1), sexual molestation (Isaiah 3:17), emotional blackmail (Leviticus 26:14-38), hateful speech (Leviticus 21:18-23, any verse on homosexuality), the list goes on…

Yes, as my boyfriend reminds me, there is much debate concerning the latter verses, and there are countless examples of God behaving exemplarily. But if God cannot Himself act ethically in any clear way, He cannot possibly expect our own behaviors to be unmistakably black or white either. Despite this, He is known for making snap judgments regarding those He feels have offended Him. Without ever awarding a fair trial, He imposes sanctions (e.g., exile, death, misery) sometimes spanning generations upon individuals every bit as real (supposedly) and complex as you and me. Many believers regrettably tend to dehumanize these ancient victims as undeserving sinners with no hope of rehabilitation. Yet, it is crucial to our present Humanity that we do not forget theirs. We can therefore add a staunch dislike for due process to God’s list of offences.

While God is not necessarily all Evil, it appears He is not necessarily all Good either. And if He is, there is no way to tell for sure. As such, it would be unwise to rely wholeheartedly on Him, especially when trying to figure out how to live the Good Life. Besides, as Barker (2008) asks: why should we trust God to know how to overcome life’s challenges here within Time and Space? Indeed, has He ever truly experienced genuine distress, be it lack of anything or punishment of any kind? The experience of being downtrodden is especially inaccessible, since all-powerful deities do not—by definition, cannot—answer to anyone. Christians will mention that Jesus came to Earth to give His Father a taste of what it is truly like to be human, but one can hardly deny that the suffering experienced by Jesus pales in comparison to that experienced both voluntarily and involuntarily by the whole of Humanity, including, most notably, the kind of pains only those not of Jesus’ own gender can experience. Moreover, knowing that one is God’s progeny and that one will be reunited with His parent once the torment ends effectively takes away from the genuine experience of suffering.

It is for these very reasons, Barker (2008) concludes, that morality should be conceived in natural terms; that is, according to humans and their experience of the natural world. Besides, values are subjective products of the human mind; and so, only that which the mind processes can be used to decipher these values. Thus, not only is morality a human affair, as Einstein stated, but it also should be.

What does the mind process? The external and internal natural world. The external world consists of nature itself, with its own set of laws (e.g., of motion and gravity) that can potentially affect core human needs and functioning, whereas the internal world consists of human needs and functioning themselves. Moral behavior entails avoidance (minimization of harm) and approach (maximization of quality of life) needs. Thus, scouring the external world, we can induce the following values: a) because human bodies cannot withstand large objects crashing into them, it is unethical to remove stop signs from an intersection; or b) because human bodies cannot fall back up, it is ethical to grab someone accidentally teetering over the edge of a cliff. Likewise, scouring the internal world, we can induce the following values: a) because humans need food to survive, it is unethical to intentionally starve another; or b) because humans function together as members of a worldwide family, it is ethical to volunteer our time in a local homeless shelter.

Reality being complex, inducing values from it can easily become a challenge. Still, we can always rely on sound empirical study (both individual and professional), driven by reason, compassion and determination, to help us a) locate the different varieties of harm that can afflict humanity, b) understand how to best avoid them, and c) implement ways of remedying them when they do impose themselves upon us.

(Note: It was mentioned previously that the meaning of life is not grounded in a quest to be perfectly happy. And so, it is important to mention that while morality seeks to maximize happiness while minimizing harm, and while life should generally be lived morally, there remains nevertheless more to life than maximizing happiness: for example, experiencing Love, creating Art, generating Knowledge, etc. To enact these for their own sake, as ends in themselves, often requires painful amounts of tireless dedication. As in the case of necessary evils, even doing Good sometimes means doing Bad. And so, happiness and misery are equal parts of life’s fabric and purpose.)

On Science and Specialization

Exploring the culture of Science, Einstein recognizes a growing tendency among its members that I too have come to find particularly disconcerting: more and more, scientists voluntarily relegate themselves to “an ever-narrowing sphere of knowledge, which is threatening to deprive the investigator of his broad horizon and degrade him to the level of a mechanic” (p. 17).

I cannot comment on other fields, but I can testify to the truth of this admonishment in relation to experimental and (despite it not being a science) clinical psychology as well. In the case of the former, our researchers have regrettably become increasingly specialized. Gone are the days when authors meticulously explored particular subjects and phenomena by gathering not only knowledge from their home field, but by integrating knowledge from various sibling fields as well. In the case of the latter, we are trained to administer particular so-called “treatments.” Yet, query any intern or licensed practitioner regarding either the philosophical underpinnings or historical antecedents of their favored approach, and not only will they likely not know, but they will also question the very relevance of knowing.

Exploring the notion of Zeitgeist, Einstein recommends that each of us (scientists and laymen alike) “do his little bit towards transforming the spirit of the times” (p. 8). In the case of professional helpers and their clients, I believe it is our ethical duty to oppose psychiatry’s current stronghold over psychology in defense of secular humanistic conceptualizations of human behavior and relationships (specifically, the sorts of relationships that cause or alleviate distress).

On the Jewish People, Judaism and Israel

Einstein idealizes the Jewish People and Judaism, investing many hopes in the prospect of a Jewish State. According to him, “the pursuit of knowledge for its own sake, an almost fanatical love of justice, and the desire for personal independence” (p. 103) are the mark of the Jew. Einstein even goes so far as to liken rejecting one’s Jewish Heritage to a disorder with its very own set of causes (see pp. 119–121). Seemingly unimpressed with religion, Einstein also rather incredibly seeks to redefine the meaning of “serving God” in the Jewish tradition. In other religions, he explains, serving God entails fear-based submission; in contrast, Judaism defines serving God as “serving the living” (p. 104), a truly humanistic endeavor that is free of self-loathing, and, most importantly, that is more in line with Einstein’s own values and beliefs regarding life and how one should spend it.

Einstein expresses the following hopes for the Jewish People: “We Jews should once more become conscious of our existence as a nationality and regain the self-respect that is necessary to a healthy existence. We must learn to glory in our ancestors […] and take upon ourselves, as a nation, cultural tasks of a sort calculated to strengthen our sense of community” (p. 114). To Einstein, a common goal, a Home in Palestine, was necessary for this to come to fruition. In fact, within its settlers resides “the most valuable sort of human life” (p. 118).

While Einstein’s impassioned views regarding everything Jewish are not noteworthy in and of themselves, they do nonetheless stand at odds with his views regarding the subject of nationalism. Einstein generally condemns nationalism: “The greatest obstacle to the international order is that monstrously exaggerated spirit of nationalism which also goes by the fair-sounding but misused name of patriotism. During the last century and a half this idol has acquired an uncanny and exceedingly pernicious power everywhere” (p. 65). Further, only when Man overcomes “national and class egotism” will “he contribute towards improving the lot of humanity” (p. 87). Yet, despite all this, Einstein chooses to praise Jewish nationalism.

How did Einstein come to forgo his own anti-nationalistic principles? Perhaps, his (rather selective) change of heart was a reaction to the persecution of his own ethnic group. I am reminded of philosopher Karl Popper’s observations regarding the post-World War II Jewish people: “Admittedly, it is understandable that people who were despised for their racial origin should react by saying that they were proud of it. But racial pride is not only stupid but wrong, even if provoked by racial hatred. All nationalism or racialism is evil, and Jewish nationalism is no exception” (1976/2002, p. 120).

Einstein retorts that while “there is something in the accusation” (p. 122) that he is committing the offence of nationalism, “it is a nationalism whose aim is not power but dignity and health” (p. 123). Lending credence to Popper’s hypothesis, Einstein subsequently succumbs, revealing the reason behind his change of heart: “If we did not have to live among intolerant, narrow-minded, and violent people, I should be the first to throw over all nationalism in favor of universal humanity” (p. 123). Thus, in the face of Germany’s worsening treatment of the Jews, it appears Einstein struggled to reconcile his disdain of national pride with his hopes for his People. Redefining the very term “nationalism” somewhat resolved the cognitive dissonance that naturally ensued.

I have previously admitted that, even as a Jew, I am more than slightly perplexed by Jewish nationalism, not to mention any other variety of racial pride. When asked whether the late-life revelation that he was Jewish caused him to reconsider his antireligious stance, Christopher Hitchens revealed: “My attitude toward Zionism has always been […] that I very much doubt it to be the liberation of the Jewish people” (2011, p. 62). Thus, while nationalism may arise in response to prejudice, it will not necessarily free us from it. I have also expressed that, in an age of ever-expanding globalization and transcended borders, and in light of the escalating necessity for humans to become better global citizens, we should feel proud not of our national heritage, but, more importantly, of our cross-cultural one. Paradoxically, Einstein himself echoed this when he observed: “The world is to-day more than ever in need of international thinking and feeling by its leading nations and personalities, if it is to progress towards a more worthy future” (p. 36).

Final Thoughts

Suffice it to say, The World as I See It made me think. A lot. And this review only focused on a brief sample of the many topics addressed throughout. However rich in content The World as I See It is, it does not lack entertainment value. Take, for example, Einstein’s scathing indictment of soldiers: “That a man can take pleasure in marching in formation to the strains of a band is enough to make me despise him. He has only been given his big brain by mistake; a backbone was all he needed” (p. 6). Or, his humorously trenchant description of government: “Bureaucracy is the death of all sound work” (p. 85). In the end, it is incredibly humbling to ponder important subjects along with Einstein. That the book also provides a humanizing peek into the mind of one of the 20th century’s most productive personalities makes it all the more unique.


Barker, D. (2008). Godless: How an evangelical preacher became one of America’s leading atheists. Berkeley, CA: Ulysses Press.

Dawes, R. M. (1996). House of cards: Psychology and psychotherapy built on myth. New York, NY: The Free Press.

Einstein, A. (1940). Science and Religion. Nature, 146, 605–607.

Greenberger, D., & Padesky, C. A. (1995). Mind over mood: Change how you feel by changing the way you think. New York, NY: The Guilford Press.

Hitchens, C. (2011). Christopher Hitchens in conversation with Noah Richler. In R. Griffiths (Ed.), Hitchens vs. Blair: Be it resolved religion is a force for Good in the world (The Munk Debates) (pp. 55–66). Toronto, ON: House of Anansi Press.

Popper, K. (2002). Unended quest: An intellectual autobiography. New York, NY: Routledge. (Original work published 1976)

Szasz, T. S. (1988). The myth of psychotherapy: Mental healing as religion, rhetoric, and repression. Syracuse, NY: Syracuse University Press.
