In terms of psychology, the holistic view suggests that it is important to view the mind as a unit, rather than trying to break it down into its individual parts. Each individual part plays its own important role, but it also works within an integrated system. Essentially, holism suggests that people are more than simply the sum of their parts.
In order to understand how people think, the holistic perspective stresses that you need to do more than simply focus on how each individual component functions in isolation. Instead, psychologists who take this approach believe that it is more important to look at how all the parts work together. As an approach to understanding systems, holism is used in psychology as well as in other areas including medicine, philosophy, ecology, and economics. The field of holistic medicine, for example, focuses on treating all aspects of a person's health including physical symptoms, psychological factors, and societal influences.
In order to understand why people do the things they do and think the way they think, holism proposes that it is necessary to look at the entire person. Rather than focus on just one aspect of the problem, it is necessary to recognize that various factors interact and influence each other.
One reason why it is so important to consider the entire being is that the whole may possess emergent properties. These are qualities or characteristics that are present in the whole but cannot be observed by looking at the individual pieces.
Consider the human brain, for example. The brain contains billions of neurons, but just looking at each individual neuron will not tell you what the brain can do. It is only by looking at the brain holistically, at how all the pieces work together, that you can see how messages are transmitted, how memories are stored, and how decisions are made.
Even looking at other aspects of the brain, such as its individual structures, does not really tell the whole story. It is only when taking a more holistic approach that we are truly able to appreciate how all the pieces work together. In fact, one of the earliest debates in the field of neurology centered on whether the brain was homogeneous and could not be broken down further (holism) or whether certain functions were localized in specific cortical areas (reductionism).
Today, researchers recognize that certain parts of the brain act in specific ways, but these individual parts interact and work together to create and influence different functions. When looking at questions in psychology, researchers might take a holistic approach by considering how different factors work together and interact to influence the entire person.
At the broadest level, holism would look at every single influence that might impact behavior or functioning. A humanistic psychologist, for example, might consider an individual's environment (including where they live and work), their social connections (friends, family, and co-workers), their background (childhood experiences and educational level), and their physical health (current wellness and stress levels).
The goal of this level of analysis is to be able to not only consider how each of these variables might impact overall well-being but to also see how these factors interact and influence one another. In other cases, holism might be a bit more focused. Social psychologists, for example, strive to understand how and why groups behave as they do. Sometimes groups react differently than individuals do, so looking at group behavior more holistically allows research to assess emergent properties that might be present.
Just like the reductionist approach to psychology, holism has both advantages and disadvantages. For example, holism can be helpful when looking at the big picture allows the psychologist to see things they might otherwise have missed. In other cases, however, focusing on the whole might cause them to overlook some of the finer details. One of the big advantages of the holistic approach is that it allows researchers to assess multiple factors that might contribute to a psychological problem.
Rather than simply focusing on one small part of an issue, researchers can instead look at all of the elements that may play a role. This approach can ultimately help them find solutions that address all of the contributing internal and external factors that might be influencing the health of an individual. This is sometimes more effective than addressing smaller components individually.
What this means is that the standard metaphysical position is that there are no true emergent phenomena: there are only phenomena that cannot currently (or perhaps ever) be described or understood in terms of fundamental physics, and yet are, in fact, only complex manifestations of the microscopic world as understood by fundamental physics.
A simple way to make sense of this idea is to deploy the concept of supervenience: in philosophy, a property A is supervenient on another one, B, just in case A cannot change unless there is a change in B. Analogously, higher-order phenomena in physics or biology supervene on micro-physical phenomena just in case the only way to change the former is to change the latter. I will not comment much further on the issue of ontological emergence versus reductionism because it is of hardly any concern to the practising biologist.
It simply cannot be done. But if our epistemology tells us that the universe behaves as if it contained genuine emergent properties (say, the properties of economic systems, which do not seem to have much to do with the properties of quarks), then is it not the case that rejection of ontological emergence is a flagrant violation of the principle that epistemology should inform metaphysics?
All in all, I think the most reasonable course of action is to take a neutral, agnostic stance on the matter and to proceed to where we are going next: epistemological emergence. O'Connor helpfully describes two types of the latter, which he labels predictive and irreducible-pattern. Predictive emergence is the idea that, in practice, it is not possible to predict the features of a complex system in terms of its constituent parts, even if one were to know all the laws governing the behavior of said parts.
Irreducible-pattern emergentists maintain that the problem is conceptual in nature: the lower-level description simply does not provide the conceptual resources needed to capture the higher-level pattern. As O'Connor himself acknowledges, the distinction between predictive and irreducible-pattern views of epistemic emergence is not sharp, but it does draw attention to the fact that emergent phenomena present both pragmatic and conceptual issues for the practising scientist and aspiring reductionist.
It is not just, for instance, that it would be too computationally cumbersome to develop a quantum mechanical theory of economics (the predictive issue); it is that one would not know where to start with the task of deploying the tools of quantum mechanics (the indeterminacy principle, non-locality, etc.) in the study of economic behaviour (the irreducible-pattern issue). So, again, one does not need to be an ontological emergentist to firmly reject a greedy reductionist programme in biology or the social sciences.
It will be instructive to anchor the somewhat esoteric discussion we have engaged in so far with a couple of examples from the actual biological literature, to focus our ideas about what emergence may sensibly mean in the context of biological research. Stuart Kauffman proposed these models (which are a type of cellular automaton) as an early attempt at exploring the properties of genetic networks characterized, specifically, by N elements, each with K input connections and one output. Robustness measures the tendency of genetic networks to withstand internal disruptions (e.g. mutations).
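To make robustness concrete, here is a minimal Python sketch of an NK-style Boolean network. The network size, the K = 2 wiring, the synchronous update scheme, and the robustness measure used here (the fraction of single-gene flips after which the network settles back into the same attractor) are illustrative assumptions of mine, not details taken from Kauffman's own protocol.

```python
import random

def make_nk_network(n, k, rng):
    """Random NK network: each of n genes reads k inputs through a random Boolean rule."""
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    rules = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, rules

def step(state, inputs, rules):
    """Synchronously update every gene from the current states of its k inputs."""
    new = []
    for i in range(len(state)):
        idx = 0
        for bit, j in enumerate(inputs[i]):
            idx |= state[j] << bit
        new.append(rules[i][idx])
    return tuple(new)

def attractor(state, inputs, rules, max_steps=5000):
    """Iterate until a state repeats; return the attractor cycle as a frozenset."""
    seen, trajectory = {}, []
    for t in range(max_steps):
        if state in seen:
            return frozenset(trajectory[seen[state]:])
        seen[state] = t
        trajectory.append(state)
        state = step(state, inputs, rules)
    return None  # no cycle detected within the step budget

def robustness(n=12, k=2, trials=100, seed=0):
    """Fraction of random single-gene perturbations that return to the same attractor."""
    rng = random.Random(seed)
    inputs, rules = make_nk_network(n, k, rng)
    hits = 0
    for _ in range(trials):
        s = tuple(rng.randint(0, 1) for _ in range(n))
        flip = rng.randrange(n)
        perturbed = tuple(b ^ 1 if i == flip else b for i, b in enumerate(s))
        a = attractor(s, inputs, rules)
        if a is not None and attractor(perturbed, inputs, rules) == a:
            hits += 1
    return hits / trials
```

With n = 12 there are only 2^12 possible states, so every trajectory is guaranteed to reach a cycle well within the step budget; the returned fraction is one simple way of operationalizing the robustness discussed above.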
Interestingly, these restrictions are within the empirical NK ranges that are derived from studies of organisms as disparate as yeast and our own species. Here, then, emergence is the appearance of a biological property (robustness) as a result of a particular type of non-linear interaction among lower-level entities (the genes in the network). But, again, this is an open question. The debate about nature and nurture has been going on at least since Plato's idea of learning as recollection in the Phaedo, and later John Locke's opposite contention that the human mind is a tabula rasa on which experience writes out our character.
In modern times, similar discussions have pitted social scientists who are inclined toward a Lockean position (think of B. F. Skinner's behaviorism) against more nativist approaches. A better tool for thinking about gene-environment interactions has been available since the beginning of the 20th century in the form of the idea of a norm of reaction: a genotype- and environment-specific function that displays the range of phenotypes produced by a given genotype within a given set of environments. As Lewontin elegantly showed in reference to the specific case of the heritability of human IQ, grasping the concept of a reaction norm allows one to understand seemingly paradoxical ideas such as, for example, that a change in environmental variance may affect estimates of heritability, as has been empirically demonstrated several times since (Pigliucci).
The phrase 'genotype-environment interaction' can be given at least two distinct interpretations, one statistical and pretty straightforward, the other a bit more vague but particularly relevant to evolutionary developmental biology. Consider the statistical meaning first. In a typical reaction norm diagram one can disentangle the average effect of the environment on a given trait (measured by the mean slope of the measured reaction norms) from the average effect of genotype (measured by the mean height of the reaction norms sampled).
These are both additive effects, respectively quantifiable by the so-called Environmental and Genetic (E, G) variances in a standard analysis of variance. In many cases, however, the individual (i.e. genotype-specific) reaction norms are not parallel: different genotypes respond differently to the same change in environment. The so-called G-by-E interaction variance that results captures statistically non-additive effects that cannot simply be reduced to a sum of genetic and environmental effects.
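The additive/non-additive distinction is easy to see with a toy calculation. In the sketch below every number is hypothetical: three made-up genotypes are measured in two environments, two with parallel reaction norms and one with a crossing norm, and the G, E, and G-by-E components are computed from the table of trait means.

```python
# Hypothetical phenotype means for 3 genotypes (rows) in 2 environments (cols).
data = {
    ("G1", "low"): 10.0, ("G1", "high"): 14.0,  # slope +4
    ("G2", "low"): 12.0, ("G2", "high"): 16.0,  # slope +4 (parallel to G1)
    ("G3", "low"): 15.0, ("G3", "high"): 11.0,  # slope -4 (crossing reaction norm)
}
genotypes = ["G1", "G2", "G3"]
envs = ["low", "high"]

grand = sum(data.values()) / len(data)
g_mean = {g: sum(data[(g, e)] for e in envs) / len(envs) for g in genotypes}
e_mean = {e: sum(data[(g, e)] for g in genotypes) / len(genotypes) for e in envs}

# Interaction = observed cell value minus the additive (G + E) expectation.
interaction = {
    (g, e): data[(g, e)] - (g_mean[g] + e_mean[e] - grand)
    for g in genotypes for e in envs
}

# Variance components: additive G, additive E, and the non-additive residual.
v_g = sum((g_mean[g] - grand) ** 2 for g in genotypes) / len(genotypes)
v_e = sum((e_mean[e] - grand) ** 2 for e in envs) / len(envs)
v_gxe = sum(d ** 2 for d in interaction.values()) / len(interaction)
```

Because G3's reaction norm crosses the others, the interaction component is non-zero; if all three norms were parallel, every interaction residual (and hence v_gxe) would be exactly zero, and the phenotype would decompose cleanly into genetic plus environmental effects.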
There is a less straightforward, but more interesting, sense in which G-by-E represents a case of emergence in biology. As again Lewontin pointed out, if we think in terms of genetic and environmental effects as distinct causes shaping phenotypes in a more or less additive-linear fashion, we put ourselves in the naive position of trying to understand how a house is built by simply weighing the total amount of bricks and lime that goes into it.
Clearly, the key to building the house lies in the specific alternating pattern in which bricks and lime interact to yield the final construction. Similarly, genes and environments continuously interact to build phenotypes throughout the process we call development. And this is a major reason why one simply cannot understand evolution without development (and vice versa), an idea that had lurked around for many decades before finally flourishing into the distinct field of evo-devo studies (Love). Of course, it is one thing to appreciate Lewontin's house-building metaphor, and quite another to cash out the promise of evo-devo in order to understand the emergence of phenotypes in biological organisms.
Regardless, the point remains that this, as well as the previous case of robustness, seems to represent a genuine case of emergence, at least at the epistemic level (as I mentioned above, ontological emergence is a metaphysical notion that is unlikely to be settled empirically, and about which the best course of action is to maintain philosophical neutrality).
A good number of scientists are understandably wary of the notion of emergence, for the simple reason that it sounds a bit too mystical and woolly. Of course, if emergence turns out to be an ontological reality, then these scientists would simply be mistaken and would have to accept a new metaphysics.
However, even if emergence is only an epistemic phenomenon, there are good reasons to take it seriously, for instance because it points toward current methodological or theoretical deficiencies that make straightforward reductionist accounts unfeasible in practice, if not in principle. Still, in order for more scientists to take emergence seriously we need a coherent account of why we see emergent phenomena to begin with.
One such account has been provided recently by Brian Johnson, and it is worth considering briefly. I am not suggesting that Johnson is necessarily correct, or that his explanation is the only one on the table.
But it represents a good example of the contribution that philosophy of science in this case, actually done by a scientist can give to the way in which scientists themselves think of a given issue.
Besides, Johnson may very well turn out to be exactly right. Johnson's basic idea is simple: at least some kinds of emergent properties are the result of a large number of interactions among parts of a complex system, all going on simultaneously in time and space. As the human brain is not capable of parallel conscious processing of information, we are faced with the impossibility of reasoning our way through the mechanics of emergence.
How do we know that the human brain cannot do parallel processing consciously? There are several reasons to think so, but Johnson provides a simple little exercise in figure 1 of his paper which is worth trying out, to see how difficult that sort of thinking actually is and how ill-suited we are to carrying it out.
The exercise involves summing up numbers, first on a single row, which is easy to do, then on multiple rows, which becomes immediately overwhelming. Interestingly, Johnson's example of an emergent property that is not mysterious, and yet that we cannot cognitively deal with, is the cellular automaton (for a similar take, also using cellular automata, see various works by Mark Bedau). Johnson's figure 2 presents a standard cellular automaton, and he argues that we cannot predict the behaviour of the cells in the game because our brains cannot process in parallel the various simple rules that generate such behaviour. There is no magic here: we designed the rules, and we can check, time instant by time instant, that the behaviour of the automaton is in fact the result of the application of those rules.
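Johnson's point is easy to reproduce. The sketch below uses Conway's Game of Life as the cellular automaton (an assumption on my part; Johnson's figure may use different rules): two trivially simple local rules, yet global patterns such as the diagonally travelling "glider" can only be verified by running the updates step by step, exactly as described above.

```python
from collections import Counter

def life_step(alive):
    """One synchronous update of Conway's Game of Life on a set of live (x, y) cells."""
    # Each live cell adds 1 to the neighbour count of its eight neighbours.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in alive
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in alive)}

# A glider: five cells whose global behaviour (diagonal travel) is nowhere
# stated in the local rules above.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = life_step(state)
```

After four updates the glider's five cells reappear shifted one step diagonally: a fact about the whole configuration that is nowhere visible in the rules themselves, and that our brains cannot anticipate without tracing each generation in turn.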
Analogously, there may be no mystery in, say, the emergence of robustness from the interactions going on in genetic networks, or the emergence of phenotypes during development (save, of course, for the possibility that some of these behaviours may be ontologically, not just epistemically, emergent). If Johnson is correct, then emergence is a necessary concept to deploy across scientific disciplines for eminently practical reasons: it is needed whenever there is a mismatch between the complexity and interactivity of the world we are trying to comprehend and the capacities of the brains with which we try to comprehend it.
Confirmation holism is a thesis put forth by the American philosopher W. V. O. Quine. It states that no individual statement can be subjected to an empirical test in isolation; only a whole set of statements (an entire theory) can. Functionalism in philosophy is based on holism. Semantic holism states that words can be understood semantically only by reference to the language to which they belong.
Similarly, humanistic and cognitive psychologists also follow a holistic approach: otherwise, they argue, it makes no sense to try to understand the meaning of anything that anybody might do. Cognitive psychologists believe that the network of neurons in our brain, which is formed and reshaped by environmental experiences, acts differently as a whole than as individual components. Reductionism separates complex behaviors into simpler components; the holistic approach, by contrast, looks at behavior as a whole.