Monday, August 1, 2011

Biomedical Engineer Rated As Best Career in 2011 by U.S. News & World Report

The recession is officially over, but where, exactly, are the jobs? Which occupations offer decent salaries and quality of life, and are likely to stick around for the next decade? If you have been wondering, here is the short answer: biomedical engineers top the list of best careers. This article looks at the outlook for biomedical engineers in the coming years.
According to U.S. News & World Report's 50 Best Careers, biomedical engineers top the technology and science category this year.

Why such massive growth in the biomedical engineering sector?

There is enormous demand for better medical devices and equipment designed by biomedical engineers, driven by an aging population and burgeoning health issues. The continued demand for biomedical engineers stems from the need for highly advanced medical equipment and procedures, compatibility, reduced costs, and easy servicing and maintenance in hospitals and clinics. And the list doesn't stop there: 21 percent growth is projected for biomedical engineers, with an estimated 3,000 new jobs created in the industry through 2016.

The Rundown:


  • Credit the advancement of medical processes, devices, and equipment. Can you imagine medical care without asthma inhalers, artificial hearts, MRIs, or prosthetic limbs? Biomedical engineers have helped to develop the equipment and devices that improve health or enable its preservation.


  • They apply their knowledge of engineering—particularly mechanical or electronic—to areas such as imaging, drug delivery, or biomaterials. Some biomedical engineers might spend their time working on devices and procedures related to rehabilitation or to orthopedics.

  • The Outlook:


  • No single occupation is expected to have more job growth over the next decade or so.

  • Employment of biomedical engineers is expected to grow by a whopping 72% —adding nearly 12,000 jobs—between 2008 and 2018, according to the Labor Department.

  • Money:


  • Median annual wages for biomedical engineers were $78,860 in 2009, the Labor Department reports.

  • The highest-paid 10 percent make more than $123,000, while the lowest-paid 10 percent make less than $50,000.

  • Upward Mobility:


  • Biomedical engineers may advance to more complex research and development projects.

  • They may later move up to supervisory positions.

  • Activity Level:


  • Average. You may not be constantly moving, but it’s not a standard desk job.

  • Stress Level:


  • Average. You’ll face deadlines and pressure, but much of your work is self-directed, and schedules tend to be pretty routine.

  • Education & Preparation:


  • Some biomedical engineers have undergraduate degrees in mechanical or electronics engineering, while newer students may pursue biomedical degrees even at the undergraduate level.

  • For research and development work, you’ll generally need a graduate degree.

  • Real advice from real people about landing a job as a biomedical engineer:

    “Since this is such a new field, it’s constantly being redefined. Some graduates will head into traditional fields like pharmaceuticals, but we’re seeing an increasing number move into fields such as strategic consulting, law, and even finance. Don’t limit yourself when pursuing jobs in this field. Find out your real passion first; there’s so much biomedical engineers can do,” says Philip Leduc, associate professor at Carnegie Mellon University and a member of the Biomedical Engineering Society board of directors.
    We hope you have enjoyed this article. If you are a biomedical engineering student or a professional, please spread this good news among your peers. Also, share your views and suggestions in the comments section below.

    Biomedical Research Needs a Paradigm Shift

    Without a major shift in how the medical establishment defines the nature of reality and its relation to human physiology, we are nearing the limit of what we can accomplish in biomedical research and its treatment of disease. The best example of this is cancer. In 1984, the National Cancer Institute (NCI) launched a program it claimed would halve the 1980 cancer mortality rate by the year 2000. Despite the fact that the NCI budget increased twentyfold between 1971 and 2003 (to about $4.6 billion), progress with cancer mortality rates during that period was modest. A review of the rates between 1980 and 2000 reveals that incidence rates in women for all cancers combined actually increased.1 For men there was more fluctuation: first an increase, then a decrease, then a stabilization.2 The overall net result, however, is that between 1990 and 2000 cancer mortality rates decreased in numbers ranging only from .08 percent a year to 1.8 percent a year.3 Given the major public health impact of cancer and the huge expenditures of research dollars, this disappointing result should raise some serious questions.
    I believe the problem lies in two fundamental principles currently dominating biomedical research, both of which are based on outdated assumptions. The first is reductionism, the belief that complex diseases can be understood by dissecting them into their individual subcomponents (for example, genes, receptors, transcription factors, and signaling pathways). The second is materialism, the belief that matter is the primary cause of all physiological functioning, diseases, cognitions, and thought processes.
    Reductionism – The Parts Are Greater Than the Whole
    The greatest problem with reductionism is that it ignores the concept of emergent properties, the characteristics that emerge from the synergistic interactions of multiple units (such as genes and cells) that are not present in the individual components. An illustration of an emergent property is temperature.4 The temperature of a liquid, solid, or gas is defined as the average velocity (rapidity of motion) of all the atoms or molecules of the substance being measured. Knowing the velocity of a single atom, for example, gives no information about the temperature of the total substance, because increasing the velocity of one or even several atoms may not cause any temperature change since other atoms may simultaneously decrease in velocity. In fact, this is the case when temperature remains constant. The emergent characteristics of the collective particles (temperature resulting from movement) cannot be predicted or understood even by knowing everything there is to know about, for example, water’s individual atoms of hydrogen and oxygen.
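    As a toy illustration of this point (the numbers below are made up, not from the article), the following sketch treats "temperature" as the mean kinetic energy of a handful of particles, a standard version of the averaging idea described above, and shows that the aggregate value can stay constant even while individual velocities change:

```python
def mean_kinetic_energy(velocities, mass=1.0):
    """A temperature-like aggregate: the average kinetic energy of all particles."""
    return sum(0.5 * mass * v ** 2 for v in velocities) / len(velocities)

# A toy "gas" of four particles with arbitrary velocities (illustrative units).
velocities = [1.0, 2.0, 3.0, 4.0]
print(mean_kinetic_energy(velocities))  # 3.75

# Speed one particle up and slow another down so the total kinetic energy is
# unchanged: the aggregate "temperature" stays the same even though the
# individual parts have changed.
velocities_shuffled = [1.0, 2.0, (3.0 ** 2 - 1.0) ** 0.5, (4.0 ** 2 + 1.0) ** 0.5]
print(mean_kinetic_energy(velocities_shuffled))  # 3.75 again
```

    Knowing any one entry in the list tells you nothing about the printed value; only the collective does.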
    In human physiology, organ systems are examples of emergent properties. Knowing everything there is to know about a cell in the heart (for example, a muscle cell) does not provide enough information to predict the function of the heart, nor does knowing the sequence of DNA base pairs that make up a single amino acid provide enough information to predict the characteristics of a transcription protein composed of many amino acids. In other words, the whole is more than the sum of its parts. The same principle applies to complex diseases, which evolve from the interaction of multiple genes and environmental factors occurring at single points in time and in other ways during the progression of the disease. What is more, we know that gene function varies and that one gene can have multiple functions based on environmental triggers that turn it “off” or “on” and regulate the extent of its activity. One of the consequences of these complex interactions is that gene expression varies greatly based on the cellular environment.5 If the gene isn’t consistently functioning the same way, it will not always be associated with the same phenotype or disease outcome, even if its function is important for that particular disease. This is one of the reasons that genome-wide association studies that investigate single genes are so difficult to replicate.
    Thus, while investigating individual genes or signaling pathways is important, it doesn’t provide a complete picture of disease causality unless used in conjunction with research on multiple interacting systems – with cancer, this would mean those that fight against cancer by killing or repairing mutated cells. When experiments on single pathways are interpreted alone, they can lead to inaccurate conclusions. The analogy of blind men experiencing an elephant for the first time is appropriate here. The man feeling the tail says that elephants are like snakes, skinny and flexible. The man feeling the leg says that elephants are big and round like trees. The man feeling the tusk says that both of these descriptions are wrong because elephants are hard, smooth, and curved. Needless to say, all are partially correct, but none is even close to describing the essence of the elephant. Similarly, a detailed description of the well-known breast cancer genes BRCA1 and BRCA2 does not provide nearly enough information to understand the etiology (cause) and progression of breast cancer. Many women who have these genes don’t get cancer, and many women who get breast cancer don’t have these genes.
    Our beliefs about etiology are important because they influence how we design experiments and which variables we choose to investigate. If we believe that cancer can be reduced to a single gene mutation or signaling pathway, for example, the study design will reflect that, and we may ignore relevant factors that don’t fit this concept. A receptor in a chemical preparation outside the body will react differently to a stimulus than the same receptor in its natural environment in a living system, because the receptor outside the body is in an unchanging chemical environment. Genes and receptors in living systems are constrained or stimulated by a surrounding environment (for example, transmitter substances, immune parameters, hormones, etc.) that changes dynamically to meet the body’s needs. Genes can be functionally turned off and on by ambient factors that are influenced by our environment, such as nutrients, stress hormones, toxicants, and so on. In addition, many genes have multiple functions that adapt to meet the demands of the surrounding environment. Most animal models used in cancer research are genetically modified because it is difficult to get a tumor to grow in an “intact” animal. The reason is that the body’s natural defenses against tumor growth (such as DNA repair, programmed cell death, and immune function) respond by destroying the tumor cells. So the way that a study is designed plays a key role in determining the results as well as their generalizability to real-life situations.
    Fortunately, a number of medical scientists have begun to grasp this fact, and they are beginning to use approaches that have been collectively termed “systems biology” to try to investigate the simultaneous interaction of multiple complex systems. Though still in its infancy, systems biology accompanied by new mathematical approaches has great potential for improving our understanding of disease causality.
    Materialism – Matter Is Primary
    Unfortunately, the same progress has not been made with the second faulty assumption – the belief that matter is the primary cause of all physiological functioning, diseases, cognitions, and thought processes – which currently dominates and limits biomedical research. Because this assumption has not yet been recognized, no attempts have been made to resolve it. The consequence is a fundamental divergence between the ways modern physics and modern medicine define the nature of reality, which hinges upon the fact that biomedical research has not incorporated the implications of quantum theory into its hypotheses. Biomedical research still focuses solely on particulate matter, refusing to investigate or acknowledge the functional importance of the intrinsic energetic aspects of living systems. In essence, medicine is only examining one side of Einstein’s equation. Without the inclusion of both sides, we will continue to be like the blind men describing the elephant, describing bits and pieces without understanding the fundamental essence of humanness or complex diseases such as cancer.
    Einstein long ago showed us that matter and energy are interchangeable. This means that energy can be created from matter and that matter can be created from energy. We can all think of examples where matter is converted to energy, such as a fire that converts wood to heat energy. But Einstein’s equation, E = mc², means that energy can also be converted to matter. This was verified as early as the 1950s, when it was demonstrated in a cyclotron that matter could be created from fast-spinning energy.
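    For a rough sense of scale (an illustrative calculation, not from the article), converting just one gram of matter entirely into energy would yield

```latex
E = mc^2 = (10^{-3}\,\mathrm{kg}) \times (3 \times 10^{8}\,\mathrm{m/s})^{2}
         = 9 \times 10^{13}\,\mathrm{J},
```

    on the order of the energy released by a twenty-kiloton explosion.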
    Quantum theory went a step further by showing that at a subatomic level, distinctions between matter and energy blur. One of the most well-known quantum physical experiments showed that whether light consisted of particles or waves (matter or energy) depended solely on how the experiment was set up. This completely contradicted the world of Newtonian physics, which defined reality as an objective state totally independent of the observer. The problem that quantum theoretical experiments present is that at a subatomic level, the little building blocks of matter disappear so that energy and matter become two aspects of one and the same reality. This is not some abstract theory but has been experimentally verified over and over again and is the basis of many of the electronic components in use today.
    Quantum theoretical experiments have also shown that whether light appears as particles or waves depends on how the measurements are made. Despite the fervent wishes of some of the discoverers of quantum mechanics, this is not a function of measurement error but a statement about the nature of reality – it cannot be reduced to little building blocks of matter. We influence the outcome of our experiments by the way we measure, which has enormous implications for biomedical research.
    The fact that matter and energy are interchangeable at a subatomic level means it is difficult to say which of them is primary. What quantum theory leaves open is the distinct possibility that energy may actually be primary and matter secondary – that energy “congeals” (for lack of a better word) into matter at a lower vibrational level. Another interpretation is that matter and energy are always coexistent. Either way, both of these concepts contradict the fundamental assumption of biomedical research that all causality can be found in particulate matter. Materialism has been disproved but not discarded, and it is time to examine the consequences that this bias has for scientific inquiry.
    Consider the implications of common diagnostic methods. Hospitals regularly use ECGs (electrocardiograms) to assess heart function and to diagnose heart disease, as well as EEGs (electroencephalograms) to assess brain function and to diagnose diseases such as epilepsy. These instruments measure energy fields on the surface of the body, which are emanating from inside the body. But based on the belief that disease causality can only be found in matter, we assume these energies have no function – they can accurately diagnose function but are not causally related to it. This relegates them to the category of epiphenomena (secondary outcomes), although in some cases they are the most reliable measure of disease status that we have.
    This type of assumption is important because it influences what we choose to investigate and can inadvertently introduce bias. If we only investigate the elephant’s tail, we may conclude that elephants are skinny and move like a snake.  If we only investigate particulate matter, we will definitely find significant correlations, but will it help us understand causality and provide enough information for effective treatments? Physics is the essence of biochemical processes in the body because chemical bonds consist of electromagnetic or electrostatic attraction between atoms. Examples are a covalent bond, which occurs when two atoms share an electron, or the creation of an ion, when one atom takes an electron from another and creates an unbalanced charge in the electron-depleted atom. These bonds are energetic forces that are needed to create molecules, so energy is fundamental to chemistry. Since chemistry is the basis of all processes in the body, whether it is the creation of hormones, neurotransmitters, cell metabolism, or any other process, energy plays a very prominent role in function. What is not well known is how relevant energy is for genetics.
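    The electrostatic attraction referred to here is, in its simplest form, just Coulomb's law (stated for reference; the article itself does not give the formula):

```latex
F = \frac{1}{4\pi\varepsilon_0} \frac{q_1 q_2}{r^2},
\qquad \frac{1}{4\pi\varepsilon_0} \approx 8.99 \times 10^{9}\ \mathrm{N\,m^2/C^2}.
```

    The force between two charges q1 and q2 falls off with the square of their separation r; it is this force, acting between electrons and nuclei, that holds ions and covalently bonded atoms together.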
    DNA is packaged in the nucleus by being wrapped around a positively charged group of histone proteins, sort of like thread wrapped around a spool. What keeps the DNA in place is an attraction between the negatively charged DNA and the positively charged histone tails. This packaging not only helps the DNA fit inside the nucleus, it keeps the gene from being expressed until it is needed. In order to become functional, the DNA must be unwound from the spool so that factors that help to activate it can reach it. For this to happen, the electromagnetic charge between DNA and protein spool must be dissolved. In short, energetic forces are fundamental to every process in the body, starting with genes and moving to the level of organ systems. (See note #6 below to access a more detailed description of electromagnetic phenomena in biological systems.)
    Bringing Physics and Biology Together
    Despite the importance of energetic aspects to human physiology, we limit research on functional mechanisms and treatment modalities to particulate matter. This is not consistent with quantum theory and makes the primary theoretical framework of biomedicine more than ninety years out of date. As cancer statistics show, the true “essence of the elephant” is still eluding us. Cancer is currently the second leading cause of death in the United States, and it is expected to become the leading cause of death within the next decade.7 Estimated costs for cancer in 2010 totaled $263.8 billion, including $102.8 billion for direct medical costs, $20.9 billion for indirect costs of morbidity (for example, lost productivity from illness), and $140.1 billion for lost productivity due to premature death.8 Clearly, we have a long way to go in our understanding of this disease. After many years of increasing investment and diminishing returns, the question we must begin to ask ourselves is, Are we using the right paradigm? Given the experimentally verified validity of quantum mechanics, I believe it’s time to seriously consider the implications for biomedical research.
    So, what kind of research do we need? Consider the following example. Acupuncture as a treatment is more than two thousand years old and is regularly used in China to treat a multiplicity of diseases. It is based on the theory that there are energy meridians in the body and that when they are out of balance, susceptibility to disease increases. Because this paradigm does not fit Western medical concepts of disease causality, it has essentially been dismissed. The medical community has limited acupuncture’s potential usefulness to pain relief and therapy for nausea from chemotherapy. Such limited use contradicts research reported at a 1997 Consensus Development Conference on Acupuncture held at the National Institutes of Health, which covered basic studies on mechanisms of action from acupuncture treatments, including the release of opioids and other peptides in the central nervous system as well as changes in neuroendocrine function.9 Studies found acupuncture also influenced other physiological systems including substances that constrict and dilate blood vessels, those that stimulate or calm the nervous system, and those that affect reproductive and immune function.
    Close examination reveals that acupuncture needles are not inserted in places that would logically elicit these effects based on traditional anatomy. So what can explain them? It is interesting that many of the molecules in the body (water molecules, protein molecules, etc.) are dipoles. A dipole has both a positive and a negative charge, sort of like a magnet except the strength of the attraction isn’t constant. Theoretically, the structure of dipoles would allow them to align with other dipoles in “strings.” Is it possible that the acupuncture meridians are strings of dipoles held together by their electric charges? If that is the case, the structure of these meridians might be partially determined by the geometry of the body. If such meridians exist, the insertion of acupuncture needles might function to modulate their orientation by changing the vibration of atoms in nearby molecules, resulting in the propagation of an electromagnetic wave along the meridian. Since the chemical properties of atoms and molecules are determined by their electron configurations, which in turn determine the types of bonds they form with other molecules, a perturbation at a particular point in a meridian might be associated with a specific range of chemical changes (for example, neuroendocrine or immune changes).10
    The concept of longitudinal electric modes based on the dipolar properties of cell membranes was introduced by Frohlich in 1968.11 From a technical perspective, and according to his theory, components with electric dipole oscillations interact through nonlinear long-range Coulomb forces and thus establish a branch or branches of longitudinal electric modes in a frequency range of 10^11–10^12 sec^-1. If the rate of energy supply to the relevant components is sufficiently large, it gets channeled into a single mode which then presents a strongly excited coherent longitudinal electric vibration whose wavelength depends on details of the geometrical arrangement of the components.
    This is only one example, and it illustrates that relevant and potentially fruitful avenues are not being pursued in biomedical research because of a dominant paradigm that is no longer valid. It is time for an open discussion in the biomedical community about the fundamental assumptions that influence its work, and quantum theorists should be part of that conversation.

    Predictive genetic testing for the identification of high-risk groups

    Background

    Genetic risk models could potentially be useful in identifying high-risk groups for the prevention of complex diseases. We investigated the performance of this risk stratification strategy by examining epidemiological parameters that impact the predictive ability of risk models.

    Methods

    We assessed sensitivity, specificity, positive and negative predictive value for all possible risk thresholds that can define high-risk groups and investigated how these measures depend on the frequency of disease in the population, the frequency of the high-risk group, and the discriminative accuracy of the risk model, as assessed by the area under the receiver-operating characteristic curve (AUC). In a simulation study, we modeled genetic risk scores of 50 genes with equal odds ratios and genotype frequencies, and varied the odds ratios and the disease frequency across scenarios. We also performed a simulation of age-related macular degeneration risk prediction based on published odds ratios and frequencies for six genetic risk variants.
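    A minimal sketch of this kind of simulation is given below. The parameter values (50 genes with an assumed per-gene odds ratio, carrier frequency, and target disease frequency) are placeholders chosen for illustration, not the values used in the study, and the high-risk group is defined simply as the top fraction of the risk-score distribution:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import expit

rng = np.random.default_rng(0)

# Hypothetical parameters for illustration (not the study's actual values).
n_people = 100_000
n_genes = 50
carrier_freq = 0.30   # frequency of the risk genotype, equal for all genes
odds_ratio = 1.3      # per-gene odds ratio, equal for all genes
disease_freq = 0.10   # target disease frequency in the population

# 1 = carries the risk variant of a gene, 0 = does not.
genotypes = rng.binomial(1, carrier_freq, size=(n_people, n_genes))

# Genetic risk score on the log-odds scale.
score = genotypes.sum(axis=1) * np.log(odds_ratio)

# Choose the logistic intercept so the simulated disease frequency matches the
# target, then draw each person's disease status.
intercept = brentq(lambda b0: expit(b0 + score).mean() - disease_freq, -30.0, 10.0)
disease = rng.binomial(1, expit(intercept + score))

def stratify(top_fraction):
    """Sensitivity and PPV when the top `top_fraction` of scores is called high risk."""
    threshold = np.quantile(score, 1.0 - top_fraction)
    high_risk = score >= threshold
    sensitivity = disease[high_risk].sum() / disease.sum()
    ppv = disease[high_risk].mean()
    return sensitivity, ppv

# High-risk group smaller than, equal to, and larger than the disease frequency.
for fraction in (0.01, 0.10, 0.30):
    sens, ppv = stratify(fraction)
    print(f"high-risk fraction {fraction:.0%}: sensitivity={sens:.2f}, PPV={ppv:.2f}")
```

    Because the scores are discrete, the realized size of the high-risk group only approximates the requested fraction, but the qualitative pattern discussed below (low sensitivity when the high-risk group is smaller than the disease frequency, low PPV when it is larger) should already be visible in output like this.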

    Results

    We show that when the frequency of the high-risk group was lower than the disease frequency, positive predictive value increased with the AUC but sensitivity remained low. When the frequency of the high-risk group was higher than the disease frequency, sensitivity was high but positive predictive value remained low. When both frequencies were equal, both positive predictive value and sensitivity increased with increasing AUC, but higher AUC was needed to maximize both measures.

    Conclusions

    The performance of risk stratification is strongly determined by the frequency of the high-risk group relative to the frequency of disease in the population. The identification of high-risk groups with appreciable combinations of sensitivity and positive predictive value requires higher AUC.

    Detection of lineage-specific evolutionary changes among primate species

    Background

    Comparison of the human genome with other primates offers the opportunity to detect evolutionary events that created the diverse phenotypes among the primate species. Because the primate genomes are highly similar to one another, methods developed for analysis of more divergent species do not always detect signs of evolutionary selection.

    Results

    We have developed a new method, called DivE, specifically designed to find regions that have evolved either more or less rapidly than expected, for any clade within a set of very closely related species. Unlike some previous methods, DivE does not rely on rates of synonymous and nonsynonymous substitution, which enables it to detect evolutionary events in noncoding regions. We demonstrate using simulated data that DivE compares favorably to alternative methods, and we then apply DivE to the ENCODE regions in 14 primate species. We identify thousands of regions in these primates, ranging from 50 to >10000 bp in length, that appear to have experienced either constrained or accelerated rates of evolution. In particular, we detected 4942 regions that have potentially undergone positive selection in one or more primate species. Most of these regions occur outside of protein-coding genes, although we identified 20 proteins that have experienced positive selection.

    Conclusions

    DivE provides an easy-to-use method to predict both positive and negative selection in noncoding DNA, and it is particularly well suited to detecting lineage-specific selection in large genomes.

    Background

    The genome of a living species is the product of a long series of changes, including neutral, beneficial, and detrimental alterations to the sequence. Sequence changes that affect the organism's fitness are subject to evolutionary pressures, such as the pressure to survive, to out-compete other species, and to defend the organism against external attack. In order to uncover these changes, we need to know what the ancestral genome looked like, which we can infer by comparing multiple genomes to one another. As we accumulate genomes from species related to human, and especially from within the primate lineages, we should be able to learn more about what makes humans special. At the same time, we can learn what makes each primate different from the others. Until recently, methods for detecting the effects of evolution had been designed for relatively distant species such as humans and mice. With the publication of the chimpanzee genome [1], we had our first look at a very close relative of human. The genomes of chimpanzees and humans are so close, in fact, that sequence similarity cannot be used to infer functional significance: in most cases, similarity simply reflects the recent divergence between the species. With more species, sequence comparison even among close relatives can be used to tease apart regions that are constrained by evolutionary forces and that, consequently, are likely to have functional importance to the biology of humans.
    Recently, the ENCODE project selected 13 primates (in addition to human) and sequenced 1% of each genome to produce "comparative grade" [2] assemblies. These high-quality sequences from close human relatives give us a greater ability than before to detect the signs of evolutionary selection on the human genome and other primates. The traces of evolution's effects can be found more easily when they are shared among multiple species. Signs of selection also may indicate functionally important sequences, and in particular they can be used to identify regulatory regions that fall outside protein-coding regions and are otherwise difficult to find.
    Broadly speaking, there are two main types of selective processes driving the evolution of genomes. Negative or purifying selection is the evolutionary pressure that eliminates deleterious mutations from a population. Most mutations in the genome are probably neutral, because most of the genome is itself non-functional, but within coding regions, the majority of mutations are deleterious [3]. Deleterious mutations are likely to be transient; i.e., they do not become fixed in the human population. Negative selection has been identified principally by pairwise sequence alignment methods, through which DNA or amino acid sequences can be shown to be more highly conserved than expected based on the overall evolutionary distance between a pair of species. By one well-known estimate, approximately 5% of the human genome is under negative selection [4], of which only 1.5% is contained in protein-coding exons.
    Positive selection is more difficult to detect. In positive selection, a region of the genome, protein coding or otherwise, accumulates beneficial mutations that provide a survival advantage to the organism. One way to detect positive selection is by the presence of genes that have acquired many more mutations than other genes when compared to close relatives. A well-documented example of positive selection is the rapid change in the hemagglutinin protein on the surface of the human influenza virus, which is in constant competition with the human immune system [5]. Positive selection must be carefully distinguished from the relaxation of selective constraints, however. If a sequence (a gene or a regulatory sequence) ceases to perform its function, and if that function is no longer needed by the organism, then it might accumulate mutations faster precisely because it is no longer functional.
    In this study, we describe a new method, called DivE, for detecting lineage-specific regions evolving at a slower or faster rate than the background evolutionary rate in the primate genomes. Other methods have been previously developed for detecting selection, but most look only at conservation of sequence (negative selection) in all aligned species, and are not lineage specific [6-12]. Methods to detect accelerated regions (i.e. regions evolving at faster-than-neutral rates) have also appeared recently [13-18]. Some of these methods allow for lineage-specific selection [14-16,18], but in contrast with conservation-detection methods, they cannot be easily used for genome-wide scans to detect selection, and look only at particular regions of interest. Although accelerated regions may indicate positive selection, this is not necessarily the case [19]. There are many examples where positive selection manifests itself at only a small number of sites [20-23]. Our method is not suited to the identification of positive selection in these cases.
    Recently a new program, phyloP, was developed to examine the more general problem of detecting either conserved or accelerated regions in a set of aligned orthologous sequences from multiple species [24]. PhyloP implements four different statistical phylogenetic tests to find significant departures from non-neutral substitution rates on a whole phylogeny as well as on selected subtrees (clades) of interest in the phylogeny. It was shown to have fairly good accuracy in detecting strong selection even at individual nucleotides. In one respect, DivE is similar to phyloP in that both methods try to solve the general problem of detecting an increase or a decrease in the rate of substitution in a given genomic region, either on a whole phylogeny or within a clade of the phylogeny.
    However, in phyloP the phylogenetic subtree of interest needs to be provided to the program, while in contrast DivE addresses the more complicated problem in which the lineage of interest is not pre-specified. Therefore the lineage under selection must be detected automatically by DivE from among all possible subtrees within a phylogeny. Another significant difference is that applying phyloP to an entire genome to detect selection involves using a sliding window approach. Although a sliding-window analysis is a popular method to test for negative or positive selection, there are results that show that this approach is not generally valid if selective trends are not known a priori in a given region [25]. In addition, the sensitivity of phyloP is dependent on the size of the window used to scan the genome, which in turn depends on the number of species available. DivE doesn't use a sliding window approach, but instead tries to determine the optimal size for the selected genomic element that is predicted to be under selection. In regard to these differences, DivE is more similar to DLESS [26], a method that detects sequences that have either come under selection, or begun to drift, in any lineage. While DLESS only allows for detection of a "gain" event (conservation in a phylogenetic subtree) or a "loss" event (where a subtree is evolving neutrally while the rest of the tree is conserved), DivE also detects acceleration events in any clade of the tree. DLESS is the only other computational method, prior to DivE, that can detect lineage-specific selection when the lineage of interest is not pre-specified.
    Below we present our method for detecting both conserved and accelerated regions and apply it to 14 primate genomes. We describe results on simulated and real data, including the identification of positively selected genes that intersect regions evolving faster than the neutral mutation rate. The method described in this paper is implemented in the DivE package which is available as free, open-source software [27].

    Results

    Simulation results

    For our simulation tests, we created sequence elements that were both positively and negatively selected within the same 14 primate species used for our later experiments on real data. Because we knew the precise location, size, and type of selection involved in each element, we could use these data to evaluate the accuracy of DivE and compare it to other methods.
    We created simulated data sets that contain selected elements of lengths between 50 bp and 1000 bp in all subtrees of the phylogeny of the 14 primates (see Figure 1 and Methods for a description of the primate phylogeny). Conserved elements are either "gained" or "lost" on a particular lineage, where a "gain" event implies that the region defined by that particular lineage will experience selective pressure that will tend to eliminate individuals with mutations in that region (i.e., negative or purifying selection). A "loss" event implies that the region in question does not have evolutionary constraints, and will evolve at the neutral substitution rate, while the rest of the tree is constrained. The average substitution rate observed for conserved elements is a fraction of that observed in non-conserved regions, so we can simulate negative selection by reducing the branch lengths of the selected subtree (for gain) or supertree (for loss), as depicted in Figure 2. For accelerated elements, the observed substitution rate is greater than the neutral rate.
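    As a rough sketch of the branch-length idea (the tree, taxon names, and rate values below are made up for illustration; this is not the 14-species ENCODE phylogeny or DivE's actual code), a "gained" conserved element can be mimicked by shrinking the branch lengths of the selected clade before simulating sequences along the tree:

```python
from typing import List, Tuple

# A clade is represented as (name, branch_length_to_parent, children).
Tree = Tuple[str, float, List["Tree"]]

def scale_clade(tree: Tree, target: str, factor: float) -> Tree:
    """Return a copy of `tree` with all branch lengths inside the clade rooted at
    `target` (including its stem branch) multiplied by `factor`.
    factor < 1 mimics conservation (a "gain"); factor > 1 mimics acceleration."""
    name, length, children = tree
    if name == target:
        return _scale_all(tree, factor)
    return (name, length, [scale_clade(child, target, factor) for child in children])

def _scale_all(tree: Tree, factor: float) -> Tree:
    name, length, children = tree
    return (name, length * factor, [_scale_all(child, factor) for child in children])

# A toy four-taxon phylogeny with made-up branch lengths (substitutions per site).
toy_tree: Tree = ("root", 0.0, [
    ("human-chimp", 0.002, [("human", 0.006, []), ("chimp", 0.007, [])]),
    ("macaque", 0.030, []),
    ("marmoset", 0.060, []),
])

# Simulate a conserved ("gained") element on the human/chimp clade by reducing
# its branch lengths to one third of the neutral rate; sequences would then be
# simulated along the rescaled tree.
conserved_element_tree = scale_clade(toy_tree, "human-chimp", 1.0 / 3.0)
print(conserved_element_tree)
```

    A "loss" event would instead scale the branches outside the chosen clade, and an accelerated element uses a factor greater than one.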

    Stem Cell Research & Therapy tracked

    Stem Cell Research & Therapy is expected to receive its first Impact Factor in June 2012, having been accepted for tracking by Thomson Reuters (ISI). The journal launched in March 2010 and recently marked its first anniversary. Congratulations to Rocky Tuan and Timothy O'Brien, the Editors-in-Chief, and the Editorial Board for ensuring such a strong start for the journal, as indicated by the fact it will be indexed from Volume 1.

    In addition to outstanding open access research articles with a focus on stem cell therapeutics, such as a recent article from Deborah Sullivan's group investigating how human multipotent stromal cells attenuate inflammation in acute lung injury, Stem Cell Research & Therapy also publishes in-depth reviews and commentaries on the latest developments in stem cell research, which are available by subscription. Why not visit the website and check out our most popular articles, including Maya Sieber Blum's commentary on epidermal stem cell dynamics, and a review from Johnny Huard and colleagues discussing the paracrine effect of transplanted stem cells in regeneration?

    BioMed Central and ISCB dancing to the same tune

    Genome Biology’s recent waltz around the 19th Annual Conference on Intelligent Systems in Molecular Biology (ISMB) in Vienna gave us a quickstep tour around the current developments in the computational biology community, including advances in the fields of personalized medicine and the genomic “data deluge”.

    It also highlighted the acute awareness that computational biologists have of the importance of sharing data.

    The International Society for Computational Biology (ISCB), which organized the conference, is stepping up and taking the lead when it comes to endorsing the free availability of data and open access to research—a theme that was heavily present among this year's speakers. In a compelling announcement published last year, the society released the following public policy statement in support of open access:

    “The International Society for computational Biology strongly advocates free, open, public, online (i) access by person or machine to the publicly funded archival scientific and technical research literature; and (ii) computational reuse, integration, and distillation of that literature into higher-order knowledge elements”

    This ethos is fully endorsed by BioMed Central. All articles published with us have always been freely available to anyone wishing to access them, and all of our journals require that readily reproducible materials be freely available to any scientist wishing to use them for non-commercial purposes. For computational journals such as BMC Bioinformatics, we ensure that all software and data is made freely available, and we strongly encourage the deposition of all source code—the nuts and bolts of how an application works—as an additional file with every article.

    Other journals within the BioMed Central portfolio are already striking up new initiatives to create more transparent access to open data, such as the recent launch of the “Availability of Supporting Data” section in BMC Research Notes. Similarly, some are tackling the problem of deposition of massive datasets for the post-genomic era, such as GigaScience (in collaboration with the BGI in China), one of our newly launched titles.

    We also endorse all efforts to bring science to the wider community, especially where subscription costs may be prohibitive to knowledge access. BioMed Central’s initiatives to aid researchers in developing economies, and efforts to promote computational biology through the Open Access Africa program, should go some way to realising the ISCB’s aims to "empower citizens and scientists".

    In his keynote speech to close the conference, the winner of this year’s “ISCB Accomplishment by a Senior Scientist Award”, Michael Ashburner, was characteristically clear. Addressing a crowded auditorium, the pioneering computational biologist stated that he was “absolutely committed to open source publishing”, urging young researchers to “try to do your research in an open way” because “if science is not public, then it is not science”.

    Guided Meditation - Path to Physical And Mental Well-Being

    If stress and anxiety are taking over your life, consider guided meditation as a stress-management resource. Learn how to meditate and free your mind of worries in order to lead a healthy and peaceful life.
    The high-speed lifestyle that all of us lead today doesn't leave us much of a choice other than juggling a number of tasks. Some of your time is spent listing the jobs that need to be done, some carrying out those jobs, and the rest worrying about the ones you weren't able to finish. All of this leaves you stressed out. Meditation is a great way to shift your focus from anxiety to peace. It is a complementary practice that addresses all three aspects of a person: mind, body, and soul. Meditation can help you release stress and lead a happy and peaceful life.
    By inducing a state of relaxation, it helps you clear away the troubled thoughts that cloud your mind and cause stress. Moreover, it helps you focus your attention, resulting in improved physical and mental well-being. Studies have found that meditation is beneficial in treating various health conditions that are worsened by stress, including high blood pressure, chronic pain, fatigue, poor sleep, and depression. It can also help you control anger and anxiety.
    When you meditate, your tensions seem to melt away. The release of stress isn't limited to the meditation session itself; practiced regularly, meditation gradually removes stress from your life. The true profoundness of meditation lies in the stability you develop over time. It helps you lead a life that remains unperturbed by anxiety and stress. Difficulties will still come along at times, but you will be able to handle them with much greater ease and won't be troubled by negative thoughts.
    The technique of meditation requires a good deal of practice and can be mastered over time. To learn to meditate, however, you need the guidance of an expert. Searching online is the easiest way to find an experienced meditation practitioner, and some experts provide podcasts about meditation. From teaching you the basics and the latest findings on meditation, to helping you achieve a relaxed state of mind, to guided meditations to listen to while you meditate, they share expert tips and help you find what works best for you to release tension.
    Listening to a podcast is a much better option for learning to meditate than reading about it. During a meditation podcast, you are guided by the soothing voice of a meditation expert to help you relax. Since your mind needs a reason to be calm and peaceful, it helps to have someone guide you, taking your mind off your worries and helping you find the inspiration to feel fresh and relaxed. You want someone with years of experience in meditation to help free you of your stress.
    Besides online meditation podcasts, several clinics also hold meditation sessions where you can join meditation gurus as well as others coping with stress. Find such a clinic in your city and take action against the stress that continues to build. Don't wait until you have a breakdown! Contact a clinical psychologist who specializes in meditation to help you control your stress before your body goes into a tailspin.