Scientists have known for some time that a protein called presenilin plays a role in Alzheimer's disease, and a new study reveals one intriguing way this happens.
It has to do with how materials travel up and down brain cells, which are also called neurons.
In an Oct. 8 paper in Human Molecular Genetics, University at Buffalo researchers report that presenilin works with an enzyme called GSK-3ß to control how fast materials -- like proteins needed for cell survival -- move through the cells.
"If you have too much presenilin or too little, it disrupts the activity of GSK-3ß, and the transport of cargo along neurons becomes uncoordinated," says lead researcher Shermali Gunawardena, PhD, an assistant professor of biological sciences at UB. "This can lead to dangerous blockages."
More than 150 mutations of presenilin have been found in Alzheimer's patients, and scientists have previously shown that the protein, when defective, can cause neuronal blockages by snipping another protein into pieces that accumulate in brain cells.
But this well-known mechanism isn't the only way presenilin fuels disease, as Gunawardena's new study shows.
"Our work elucidates how problems with presenilin could contribute to early problems observed in Alzheimer's disease," she says. "It highlights a potential pathway for early intervention through drugs -- prior to neuronal loss and clinical manifestations of disease."
The study suggests that presenilin activates GSK-3ß. This is an important finding because the enzyme helps control the speed at which tiny, organic bubbles called vesicles ferry cargo along neuronal highways. (You can think of vesicles as trucks, each powered by little molecular motors called dyneins and kinesins.)
When researchers lowered the amount of presenilin in the neurons of fruit fly larvae, less GSK-3ß became activated and vesicles began speeding along cells in an uncontrolled manner.
Decreasing levels of both presenilin and GSK-3ß at once made things worse, resulting in "traffic jams" as the bubbles got stuck in neurons.
"Both GSK-3ß and presenilin have been shown to be involved in Alzheimer's disease, but how they are involved has not always been clear," Gunawardena says. "Our research provides new insight into this question."
Gunawardena proposes that GSK-3ß -- short for glycogen synthase kinase-3beta -- acts as an "on switch" for dynein and kinesin motors, telling them when to latch onto vesicles.
Dyneins carry vesicles toward the cell nucleus, while kinesins move in the other direction, toward the periphery of the cell. When all is well and GSK-3ß levels are normal, both types of motors bind to vesicles in carefully calibrated numbers, resulting in smooth traffic flow along neurons.
That's why it's so dangerous when GSK-3ß levels are off-kilter, she says.
When GSK-3ß levels are high, too many motors attach to the vesicles, leading to slow movement as motor activity loses coordination. Low GSK-3ß levels appear to have the opposite effect, causing fast, uncontrolled movement as too few motors latch onto vesicles.
Nurturing May Protect Kids from Brain Changes Linked to Poverty
Growing up in poverty can have long-lasting, negative consequences for a child. But for poor children raised by parents who lack nurturing skills, the effects may be particularly worrisome, according to a new study at Washington University School of Medicine in St. Louis.
Among children living in poverty, the researchers identified changes in the brain that can lead to lifelong problems like depression, learning difficulties and limitations in the ability to cope with stress. The study showed that the extent of those changes was influenced strongly by whether parents were nurturing.
The good news, according to the researchers, is that a nurturing home life may offset some of the negative changes in brain anatomy among poor children. And the findings suggest that teaching nurturing skills to parents -- particularly those living in poverty -- may provide a lifetime benefit for their children.
The study is published online Oct. 28 and will appear in the November issue of JAMA Pediatrics.
Using magnetic resonance imaging (MRI), the researchers found that poor children with parents who were not very nurturing were likely to have less gray and white matter in the brain. Gray matter is closely linked to intelligence, while white matter often is linked to the brain's ability to transmit signals between various cells and structures.
The MRI scans also revealed that two key brain structures were smaller in children who were living in poverty: the amygdala, a key structure in emotional health, and the hippocampus, an area of the brain that is critical to learning and memory.
"We've known for many years from behavioral studies that exposure to poverty is one of the most powerful predictors of poor developmental outcomes for children," said principal investigator Joan L. Luby, MD, a Washington University child psychiatrist at St. Louis Children's Hospital. "A growing number of neuroscience and brain-imaging studies recently have shown that poverty also has a negative effect on brain development.
"What's new is that our research shows the effects of poverty on the developing brain, particularly in the hippocampus, are strongly influenced by parenting and life stresses that the children experience."
Luby, a professor of psychiatry and director of the university's Early Emotional Development Program, is in the midst of a long-term study of childhood depression. As part of the Preschool Depression Study, she has been following 305 healthy and depressed kids since they were in preschool. As the children have grown, they also have received MRI scans that track brain development.
"We actually stumbled upon this finding," she said. "Initially, we thought we would have to control for the effects of poverty, but as we attempted to control for it, we realized that poverty was really driving some of the outcomes of interest, and that caused us to change our focus to poverty, which was not the initial aim of this study."
In the new study, Luby's team looked at scans from 145 children enrolled in the depression study. Some were depressed, others healthy, and others had been diagnosed with different psychiatric disorders such as ADHD (attention-deficit hyperactivity disorder). As she studied these children, Luby said it became clear that poverty and stressful life events, which often go hand in hand, were affecting brain development.
The researchers measured poverty using what's called an income-to-needs ratio, which takes a family's size and annual income into account. The current federal poverty level is $23,550 for a family of four.
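As a rough illustration of that ratio (a sketch, not the study's actual scoring: only the $23,550 family-of-four figure comes from the article, while the $11,490 base and $4,020 per-person step follow the 2013 federal poverty guideline formula):

```python
# Sketch of an income-to-needs calculation. Only the $23,550 threshold
# for a family of four is taken from the article; the base amount and
# per-person step follow the 2013 federal poverty guideline formula,
# and the study's actual scoring may differ.

BASE = 11_490        # 2013 guideline, single person
PER_PERSON = 4_020   # added for each additional family member

def poverty_threshold(family_size: int) -> int:
    return BASE + PER_PERSON * (family_size - 1)

def income_to_needs(annual_income: float, family_size: int) -> float:
    """Values below 1.0 indicate income under the poverty line."""
    return annual_income / poverty_threshold(family_size)

print(poverty_threshold(4))          # 23550, matching the quoted figure
print(income_to_needs(18_000, 4))    # ~0.76 -> below the poverty line
print(income_to_needs(47_100, 4))    # 2.0  -> twice the threshold
```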
Although the investigators found that poverty had a powerful impact on gray matter, white matter, hippocampal and amygdala volumes, they found that the main driver of changes in hippocampal volume among poor children was not lack of money but the extent to which poor parents nurture their children. The hippocampus is a key brain region of interest in studying the risk for impairments.
Luby's team rated nurturing using observations made by the researchers -- who were unaware of characteristics such as income level or whether a child had a psychiatric diagnosis -- when the children came to the clinic for an appointment. And on one of the clinic visits, the researchers rated parental nurturing using a test of the child's impatience and of a parent's patience with that child.
While waiting to see a health professional, a child was given a gift-wrapped package, and that child's parent or caregiver was given paperwork to fill out. The child, meanwhile, was told that s/he could not open the package until the caregiver completed the paperwork, a task that researchers estimated would take about 10 minutes.
Luby's team found that parents living in poverty appeared more stressed and less able to nurture their children during that exercise. In cases where poor parents were rated as good nurturers, the children were less likely to exhibit the same anatomical changes in the brain as poor children with less nurturing parents.
"Parents can be less emotionally responsive for a whole host of reasons," Luby said. "They may work two jobs or regularly find themselves trying to scrounge together money for food. Perhaps they live in an unsafe environment. They may be facing many stresses, and some don't have the capacity to invest in supportive parenting as much as parents who don't have to live in the midst of those adverse circumstances."
The researchers also found that poorer children were more likely to experience stressful life events, which can influence brain development. Anything from moving to a new house to changing schools to having parents who fight regularly to the death of a loved one is considered a stressful life event.
Luby believes this study could provide policymakers with at least a partial answer to the question of what it is about poverty that can be so detrimental to a child's long-term developmental outcome. Because it appears that a nurturing parent or caregiver may prevent some of the changes in brain anatomy that this study identified, Luby said it is vital that society invest in public health prevention programs that target parental nurturing skills. She suggested that a key next step would be to determine if there are sensitive developmental periods when interventions with parents might have the most powerful impact.
"Children who experience positive caregiver support don't necessarily experience the developmental, cognitive and emotional problems that can affect children who don't receive as much nurturing, and that is tremendously important," Luby said. "This study gives us a feasible, tangible target with the suggestion that early interventions that focus on parenting may provide a tremendous payoff."
Smart Neurons: Single Neuronal Dendrites Can Perform Computations
When you look at the hands of a clock or the streets on a map, your brain is effortlessly performing computations that tell you about the orientation of these objects. New research by UCL scientists has shown that these computations can be carried out by the microscopic branches of neurons known as dendrites, which are the receiving elements of neurons.
The study, published today (Sunday) in Nature and carried out by researchers based at the Wolfson Institute for Biomedical Research at UCL, the MRC Laboratory for Molecular Biology in Cambridge and the University of North Carolina at Chapel Hill, examined neurons in areas of the mouse brain which are responsible for processing visual input from the eyes. The scientists achieved an important breakthrough: they succeeded in making incredibly challenging electrical and optical recordings directly from the tiny dendrites of neurons in the intact brain while the brain was processing visual information.
These recordings revealed that visual stimulation produces specific electrical signals in the dendrites -- bursts of spikes -- which are tuned to the properties of the visual stimulus.
The results challenge the widely held view that this kind of computation is achieved only by large numbers of neurons working together, and demonstrate how the basic components of the brain are exceptionally powerful computing devices in their own right.
Senior author Professor Michael Hausser commented: "This work shows that dendrites, long thought to simply 'funnel' incoming signals towards the soma, instead play a key role in sorting and interpreting the enormous barrage of inputs received by the neuron. Dendrites thus act as miniature computing devices for detecting and amplifying specific types of input.
"This new property of dendrites adds an important new element to the "toolkit" for computation in the brain. This kind of dendritic processing is likely to be widespread across many brain areas and indeed many different animal species, including humans."
Funding for this study was provided by the Gatsby Charitable Foundation, the Wellcome Trust, and the European Research Council, as well as the Human Frontier Science Program, the Klingenstein Foundation, Helen Lyng White, the Royal Society, and the Medical Research Council.
Need Different Types of Tissue? Just Print Them!
What sounds like a dream of the future has already been the subject of research for a few years: simply printing out tissue and organs. Now scientists have further refined the technology and are able to produce various tissue types.
The recent organ transplant scandals have only made the problem worse. According to the German Organ Transplantation Foundation (DSO), the number of organ donors in the first half of 2013 declined more than 18 percent compared with the same period the previous year. At the same time, demand can be expected to rise steadily in the coming years, because the population continues to age and the field of transplantation medicine is continuously advancing. Many critical illnesses can already be treated successfully today by replacing cells, tissue, or organs. Government, industry, and the research establishment have therefore been working hard for some time to improve methods and procedures for artificially producing tissue, and thereby close the gap in supply.
Bio-ink made from living cells
One technology might assume a decisive role in this effort, one that we are all familiar with from the office, and that most of us would certainly not immediately connect with the production of artificial tissue: the inkjet printer. Scientists at the Fraunhofer Institute for Interfacial Engineering and Biotechnology (IGB) in Stuttgart have succeeded in developing suitable bio-inks for this printing technology. The transparent liquids consist of components from the natural tissue matrix and living cells. The substance is based on a well-known biological material: gelatin, which is derived from collagen, the main constituent of native tissue. The researchers have chemically modified the gelling behavior of the gelatin to adapt the biological molecules for printing. Instead of gelling like unmodified gelatin, the bio-inks remain fluid during printing; only after being irradiated with UV light do they crosslink and cure to form hydrogels. These are polymers that contain a large amount of water (just like native tissue) but remain stable in aqueous environments and at a physiological 37°C. The researchers can control the chemical modification of the biological molecules so that the resulting gels have differing strengths and swelling characteristics. The properties of natural tissue can therefore be imitated -- from solid cartilage to soft adipose tissue.
In Stuttgart, synthetic raw materials that can serve as substitutes for the extracellular matrix are printed as well -- for example, a system that cures to a hydrogel without by-products and can be immediately populated with genuine cells. "We are concentrating at the moment on the 'natural' variant. That way we remain very close to the original material. Even if the potential for synthetic hydrogels is big, we still need to learn a fair amount about the interactions between the artificial substances and cells or natural tissue. Our biomolecule-based variants provide the cells with a natural environment instead, and therefore can promote the self-organizing behavior of the printed cells to form a functional tissue model," explains Dr. Kirsten Borchers in describing the approach at IGB.
The printers at the labs in Stuttgart have a lot in common with conventional office printers: the ink reservoirs and jets are much the same. The differences emerge only on close inspection -- for example, the heater on the ink container that keeps the bio-inks at the right temperature. The number of jets and tanks is also smaller than in the office counterpart. "We would like to increase the number of these in cooperation with industry and other Fraunhofer Institutes in order to simultaneously print using various inks with different cells and matrices. This way we can come closer to replicating complex structures and different types of tissue," says Borchers.
The big challenge at the moment is to produce vascularized tissue. This means tissue that has its own system of blood vessels through which the tissue can be provided with nutrients. IGB is working on this jointly with other partners under Project ArtiVasc 3D, supported by the European Union. The core of this project is a technology platform to generate fine blood vessels from synthetic materials and thereby create for the first time artificial skin with its subcutaneous adipose tissue. "This step is very important for printing tissue or entire organs in the future. Only once we are successful in producing tissue that can be nourished through a system of blood vessels can printing larger tissue structures become feasible," says Borchers in closing. She will be exhibiting the IGB bioinks at Biotechnica in Hanover, 8-10 October 2013 (Hall 9, Booth E09).
Pressure in the Left Heart - Part 2
Watch the pressure in the left heart go up and down with every heartbeat! Rishi, the presenter, is a pediatric infectious disease physician. These videos do not provide medical advice and are for informational purposes only. The videos are not intended to be a substitute for professional medical advice, diagnosis or treatment. Always seek the advice of a qualified health provider with any questions you may have regarding a medical condition. Never disregard professional medical advice or delay in seeking it because of something you have read or seen in any video.
Pressure in the Left Heart - Part 1
Watch the pressure in the left heart go up and down with every heartbeat! Rishi is a pediatric infectious disease physician and works at Khan Academy. These videos do not provide medical advice and are for informational purposes only. The videos are not intended to be a substitute for professional medical advice, diagnosis or treatment. Always seek the advice of a qualified health provider with any questions you may have regarding a medical condition. Never disregard professional medical advice or delay in seeking it because of something you have read or seen in any video.
Cardiac Cycle Broken Down
This video breaks down the cardiac cycle! You can find this video and other helpful materials (a practice sheet and questions) on my website: www.profroofs.com. The Wiggers diagram of the cardiac cycle is usually a difficult diagram to understand at first. In this video I relate it to the analogy of a haunted house, walking you from the basic ideas you already know up to the point of understanding the detailed diagram. I also give a few practice questions at the end. I hope this is helpful. If you have any questions, contact me at mail2jenakan2@gmail.com.
Cardiac Cycle
This video is about the cardiac cycle! It offers a description of the events occurring during ventricular systole and diastole, and a discussion of cardiac output.
Gravitational Waves Help Us Understand Black-Hole Weight Gain
Supermassive black holes: every large galaxy's got one. But here's a real conundrum: how did they grow so big?
A paper in today's issue of Science pits the front-running ideas about the growth of supermassive black holes against observational data -- a limit on the strength of gravitational waves, obtained with CSIRO's Parkes radio telescope in eastern Australia.
"This is the first time we've been able to use information about gravitational waves to study another aspect of the Universe -- the growth of massive black holes," co-author Dr Ramesh Bhat from the Curtin University node of the International Centre for Radio Astronomy Research (ICRAR) said.
"Black holes are almost impossible to observe directly, but armed with this powerful new tool we're in for some exciting times in astronomy. One model for how black holes grow has already been discounted, and now we're going to start looking at the others."
The study was jointly led by Dr Ryan Shannon, a Postdoctoral Fellow with CSIRO, and Mr Vikram Ravi, a PhD student co-supervised by the University of Melbourne and CSIRO.
Einstein predicted gravitational waves -- ripples in space-time, generated by massive bodies changing speed or direction, bodies like pairs of black holes orbiting each other.
When galaxies merge, their central black holes are doomed to meet. They first waltz together then enter a desperate embrace and merge.
"When the black holes get close to meeting they emit gravitational waves at just the frequency that we should be able to detect," Dr Bhat said.
Played out again and again across the Universe, such encounters create a background of gravitational waves, like the noise from a restless crowd.
Astronomers have been searching for gravitational waves with the Parkes radio telescope and a set of 20 small, spinning stars called pulsars.
Pulsars act as extremely precise clocks in space. The arrival times of their pulses on Earth are measured with exquisite precision, to within a tenth of a microsecond.
When the waves roll through an area of space-time, they temporarily swell or shrink the distances between objects in that region, altering the arrival time of the pulses on Earth.
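To make the detection principle concrete, here is a toy numerical sketch (all numbers invented for illustration; real pipelines fit full spin-down and orbital timing models, not a straight line): a tiny periodic delay added to otherwise clock-like arrival times survives in the timing residuals.

```python
# Toy model of the pulsar-timing idea: arrival times are nearly perfectly
# regular, so a tiny gravitational-wave-induced delay shows up in the
# residuals once the smooth timing model is fitted out. All numbers here
# are invented for illustration.
import numpy as np

YEAR = 3.156e7                                  # seconds
t = np.arange(0.0, 10 * YEAR, 7 * 86400.0)      # weekly observations, 10 years

# Invented signal: a 100-nanosecond delay oscillating with a 5-year period
# (the article quotes timing precision of a tenth of a microsecond, 100 ns).
gw_delay = 100e-9 * np.sin(2 * np.pi * t / (5 * YEAR))

# Add 20 ns of white measurement noise on top of the signal.
rng = np.random.default_rng(0)
observed = gw_delay + rng.normal(0.0, 20e-9, t.size)

# Fit and subtract a linear "timing model"; the slow signal survives.
residuals = observed - np.polyval(np.polyfit(t, observed, 1), t)
print(f"rms residual: {residuals.std() * 1e9:.1f} ns")
```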
The Parkes Pulsar Timing Array (PPTA), and an earlier collaboration between CSIRO and Swinburne University, together provide nearly 20 years' worth of timing data. This isn't long enough to detect gravitational waves outright, but the team say they're now in the right ballpark.
"The PPTA results are showing us how low the background rate of gravitational waves is," said Dr Bhat.
"The strength of the gravitational wave background depends on how often supermassive black holes spiral together and merge, how massive they are, and how far away they are. So if the background is low, that puts a limit on one or more of those factors."
Armed with the PPTA data, the researchers tested four models of black-hole growth. They effectively ruled out black holes gaining mass only through mergers, but the other three models are still a possibility.
Dr Bhat also said the Curtin University-led Murchison Widefield Array (MWA) radio telescope will be used to support the PPTA project in the future.
"The MWA's large view of the sky can be exploited to observe many pulsars at once, adding valuable data to the PPTA project as well as collecting interesting information on pulsars and their properties," Dr Bhat said.
Brain May Flush out Toxins During Sleep; Sleep Clears Brain of Molecules Associated With Neurodegeneration: Study
A good night's rest may literally clear the mind. Using mice, researchers showed for the first time that the space between brain cells may increase during sleep, allowing the brain to flush out toxins that build up during waking hours. These results suggest a new role for sleep in health and disease. The study was funded by the National Institute of Neurological Disorders and Stroke (NINDS), part of the NIH.
"Sleep changes the cellular structure of the brain. It appears to be a completely different state," said Maiken Nedergaard, M.D., D.M.Sc., co-director of the Center for Translational Neuromedicine at the University of Rochester Medical Center in New York, and a leader of the study.
For centuries, scientists and philosophers have wondered why people sleep and how it affects the brain. Only recently have scientists shown that sleep is important for storing memories. In this study, Dr. Nedergaard and her colleagues unexpectedly found that sleep may also be the period when the brain cleanses itself of toxic molecules.
Their results, published in Science, show that during sleep a plumbing system called the glymphatic system may open, letting fluid flow rapidly through the brain. Dr. Nedergaard's lab recently discovered that the glymphatic system helps control the flow of cerebrospinal fluid (CSF), a clear liquid surrounding the brain and spinal cord.
"It's as if Dr. Nedergaard and her colleagues have uncovered a network of hidden caves and these exciting results highlight the potential importance of the network in normal brain function," said Roderick Corriveau, Ph.D., a program director at NINDS.
Initially the researchers studied the system by injecting dye into the CSF of mice and watching it flow through their brains while simultaneously monitoring electrical brain activity. The dye flowed rapidly when the mice were unconscious, either asleep or anesthetized. In contrast, the dye barely flowed when the same mice were awake.
"We were surprised by how little flow there was into the brain when the mice were awake," said Dr. Nedergaard. "It suggested that the space between brain cells changed greatly between conscious and unconscious states."
To test this idea, the researchers used electrodes inserted into the brain to directly measure the space between brain cells. They found that the space inside the brains increased by 60 percent when the mice were asleep or anesthetized.
"These are some dramatic changes in extracellular space," said Charles Nicholson, Ph.D., a professor at New York University's Langone Medical Center and an expert in measuring the dynamics of brain fluid flow and how it influences nerve cell communication.
Certain brain cells, called glia, control flow through the glymphatic system by shrinking or swelling. Noradrenaline is an arousing hormone that is also known to control cell volume. Similar to using anesthesia, treating awake mice with drugs that block noradrenaline induced unconsciousness and increased brain fluid flow and the space between cells, further supporting the link between the glymphatic system and consciousness.
Previous studies suggest that toxic molecules involved in neurodegenerative disorders accumulate in the space between brain cells. In this study, the researchers tested whether the glymphatic system controls this by injecting mice with labeled beta-amyloid, a protein associated with Alzheimer's disease, and measuring how long it lasted in their brains when they were asleep or awake. Beta-amyloid disappeared faster from the brains of sleeping mice, suggesting that sleep normally clears toxic molecules from the brain.
"These results may have broad implications for multiple neurological disorders," said Jim Koenig, Ph.D., a program director at NINDS. "This means the cells regulating the glymphatic system may be new targets for treating a range of disorders."
The results may also highlight the importance of sleep.
"We need sleep. It cleans up the brain," said Dr. Nedergaard.
Researchers Advance Toward Engineering 'Wildly New Genome'
In two parallel projects, researchers have created new genomes inside the bacterium E. coli in ways that test the limits of genetic reprogramming and open new possibilities for increasing flexibility, productivity and safety in biotechnology.
In one project, researchers created a novel genome -- the first-ever entirely genomically recoded organism -- by replacing all 321 instances of a specific "genetic three-letter word," called a codon, throughout the organism's entire genome with a word of supposedly identical meaning. The researchers then reintroduced a reprogrammed version of the original word (with a new meaning, a new amino acid) into the bacteria, expanding the bacterium's vocabulary and allowing it to produce proteins that do not normally occur in nature.
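As a toy sketch of that replacement operation (in the published work the removed codon was the UAG "amber" stop codon, swapped genome-wide for the synonymous UAA; the sequence below is invented), frame-aware substitution looks like this:

```python
# Toy sketch of codon replacement on a reading-frame basis. The choice
# of TAG -> TAA mirrors the published first project (UAG "amber" stop
# codon replaced by the synonymous UAA); the example sequence is invented.

def recode(cds: str, old_codon: str = "TAG", new_codon: str = "TAA") -> str:
    """Replace every in-frame occurrence of old_codon with new_codon.

    A naive string replace would be wrong: it could hit the same three
    letters straddling two codons, so we split into codons first.
    """
    if len(cds) % 3 != 0:
        raise ValueError("coding sequence length must be a multiple of 3")
    codons = [cds[i:i + 3] for i in range(0, len(cds), 3)]
    return "".join(new_codon if c == old_codon else c for c in codons)

gene = "ATGGCTTAGGCCTAG"   # invented 5-codon sequence
print(recode(gene))        # ATGGCTTAAGCCTAA
```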
In the second project, the researchers removed every occurrence of 13 different codons across 42 separate E. coli genes, using a different organism for each gene, and replaced them with other codons of the same function. When they were done, 24 percent of the DNA across the 42 targeted genes had been changed, yet the proteins the genes produced remained identical to those produced by the original genes.
"The first project is saying that we can take one codon, completely remove it from the genome, then successfully reassign its function," said Marc Lajoie, a Harvard Medical School graduate student in the lab of George Church. "For the second project we asked, 'OK, we've changed this one codon, how many others can we change?'"
Of the 13 codons chosen for the project, all could be changed.
"That leaves open the possibility that we could potentially replace any or all of those 13 codons throughout the entire genome," Lajoie said.
The results of these two projects appear today in Science. The work was led by Church, Robert Winthrop Professor of Genetics at Harvard Medical School and founding core faculty member at the Wyss Institute for Biologically Inspired Engineering. Farren Isaacs, assistant professor of molecular, cellular, and developmental biology at Yale School of Medicine, is co-senior author on the first study.
Toward safer, more productive, more versatile biotech
Recoded genomes can confer protection against viruses -- which limit productivity in the biotech industry -- and help prevent the spread of potentially dangerous genetically engineered traits to wild organisms.
"In science we talk a lot about the 'what' and the 'how' of things, but in this case, the 'why' is very important," Church said, explaining how this project is part of an ongoing effort to improve the safety, productivity and flexibility of biotechnology.
"These results might also open a whole new chemical toolbox for biotech production," said Isaacs. "For example, adding durable polymers to a therapeutic molecule could allow it to function longer in the human bloodstream."
But to have such an impact, the researchers said, large swaths of the genome need to be changed all at once.
"If we make a few changes that make the microbe a little more resistant to a virus, the virus is going to compensate. It becomes a back and forth battle," Church said. "But if we take the microbe offline and make a whole bunch of changes, when we bring it back and show it to the virus, the virus is going to say 'I give up.' No amount of diversity in any reasonable natural virus population is going to be enough to compensate for this wildly new genome."
In the first study, with just a single codon removed, the genomically recoded organism showed increased resistance to viral infection. The same potential "wildly new genome" would make it impossible for engineered genes to escape into wild populations, Church said, because they would be incompatible with natural genomes. This could be of considerable benefit with strains engineered for drug or pesticide resistance, for example. What's more, incorporating rare, non-standard amino acids could ensure strains only survive in a laboratory environment.
Engineering and evolution
Since a single genetic flaw can spell death for an organism, the challenge of managing a series of hundreds of specific changes was daunting, the researchers said. In both projects, the researchers paid particular attention to developing a methodical approach to planning and implementing changes and troubleshooting the results.
"We wanted to develop the ability to efficiently build the desired genome and to very quickly identify any problems -- from design flaws or from undesired mutations -- and develop workarounds," Lajoie said.
The team relied on a number of technologies developed in the Church lab, at the Wyss Institute, and with partners in academia and industry, including next-generation sequencing tools, DNA synthesis on a chip, and the MAGE and CAGE genome-editing tools. But one of the most important tools they used was the power of natural selection, the researchers added.
"When an engineering team designs a new cellphone, it's a huge investment of time and money. They really want that cell phone to work," Church said. "With E. coli we can make a few billion prototypes with many different genomes, and let the best strain win. That's the awesome power of evolution."
Most Distant Gravitational Lens Helps Weigh Galaxies
An international team of astronomers has found the most distant gravitational lens yet -- a galaxy that, as predicted by Albert Einstein's general theory of relativity, deflects and intensifies the light of an even more distant object. The discovery provides a rare opportunity to directly measure the mass of a distant galaxy. But it also poses a mystery: lenses of this kind should be exceedingly rare. Given this and other recent finds, astronomers either have been phenomenally lucky -- or, more likely, they have underestimated substantially the number of small, very young galaxies in the early Universe.
Light is affected by gravity, and light passing a distant galaxy will be deflected as a result. Since the first find in 1979, numerous such gravitational lenses have been discovered. In addition to providing tests of Einstein's theory of general relativity, gravitational lenses have proved to be valuable tools. Notably, one can determine the mass of the matter that is bending the light -- including the mass of the still-enigmatic dark matter, which does not emit or absorb light and can only be detected via its gravitational effects. The lens also magnifies the background light source, acting as a "natural telescope" that allows astronomers a more detailed look at distant galaxies than is normally possible.
Gravitational lenses consist of two objects: one is further away and supplies the light, and the other, the lensing mass or gravitational lens, which sits between us and the distant light source, and whose gravity deflects the light. When the observer, the lens, and the distant light source are precisely aligned, the observer sees an Einstein ring: a perfect circle of light that is the projected and greatly magnified image of the distant light source.
Now, astronomers have found the most distant gravitational lens yet. Lead author Arjen van der Wel (Max Planck Institute for Astronomy, Heidelberg, Germany) explains: "The discovery was completely by chance. I had been reviewing observations from an earlier project when I noticed a galaxy that was decidedly odd. It looked like an extremely young galaxy, but it seemed to be at a much larger distance than expected. It shouldn't even have been part of our observing programme!"
Van der Wel wanted to find out more and started to study images taken with the Hubble Space Telescope as part of the CANDELS and COSMOS surveys. In these pictures the mystery object looked like an old galaxy, a plausible target for the original observing programme, but with some irregular features which, he suspected, meant that he was looking at a gravitational lens. Combining the available images and removing the haze of the lensing galaxy's collection of stars, the result was very clear: an almost perfect Einstein ring, indicating a gravitational lens with very precise alignment of the lens and the background light source [1].
The lensing mass is so distant that the light, after deflection, has travelled 9.4 billion years to reach us [2]. Not only is this a new record, the object also serves an important purpose: the amount of distortion caused by the lensing galaxy allows a direct measurement of its mass. This provides an independent test for astronomers' usual methods of estimating distant galaxy masses -- which rely on extrapolation from their nearby cousins. Fortunately for astronomers, their usual methods pass the test.
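The route from image to mass runs through the ring's angular size. For a compact lens, the standard relation (a textbook formula, not quoted in the article) ties the Einstein radius to the lens mass and the angular-diameter distances involved:

```latex
% Einstein radius of a point-mass lens; D_l, D_s and D_ls are the
% angular-diameter distances to the lens, to the source, and between
% lens and source. Measuring theta_E therefore yields the mass M.
\theta_E = \sqrt{ \frac{4GM}{c^{2}} \, \frac{D_{ls}}{D_l D_s} }
\qquad\Longrightarrow\qquad
M = \frac{c^{2}}{4G} \, \theta_E^{2} \, \frac{D_l D_s}{D_{ls}}
```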
But the discovery also poses a puzzle. Gravitational lenses are the result of a chance alignment. In this case, the alignment is very precise. To make matters worse, the magnified object is a starbursting dwarf galaxy: a comparatively light galaxy (it has only about 100 million solar masses in the form of stars [3]), but extremely young (about 10-40 million years old) and producing new stars at an enormous rate. The chances that such a peculiar galaxy would be gravitationally lensed are very small. Yet this is the second starbursting dwarf galaxy that has been found to be lensed. Either astronomers have been phenomenally lucky, or starbursting dwarf galaxies are much more common than previously thought, forcing astronomers to re-think their models of galaxy evolution.
Van der Wel concludes: "This has been a weird and interesting discovery. It was a completely serendipitous find, but it has the potential to start a new chapter in our description of galaxy evolution in the early Universe."
Notes
[1] The two objects are aligned to better than 0.01 arcseconds -- equivalent to a one millimetre separation at a distance of 20 kilometres.
[2] This time corresponds to a redshift z = 1.53. This can be compared with the total age of the Universe of 13.8 billion years. The previous record holder was found thirty years ago, and it took less than 8 billion years for its light to reach us (a redshift of about 1.0).
[3] For comparison, the Milky Way is a large spiral galaxy with at least one thousand times greater mass in the form of stars than this dwarf galaxy.
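As a quick plausibility check on notes [1] and [2], both numbers can be reproduced in a few lines of Python (a sketch; astropy's Planck13 cosmology is a standard choice here, not necessarily the exact parameters the team used):

    import math
    from astropy.cosmology import Planck13

    # Note [1]: 1 mm seen from 20 km, via the small-angle approximation.
    theta_rad = 1e-3 / 20e3                        # radians
    theta_arcsec = math.degrees(theta_rad) * 3600  # convert to arcseconds
    print(f"{theta_arcsec:.3f} arcsec")            # ~0.010 arcsec

    # Note [2]: light travel (lookback) time at redshift z = 1.53.
    print(Planck13.lookback_time(1.53))            # ~9.4 Gyr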
Curiosity Confirms Origins of Martian Meteorites
Posted Under:
Earth's most eminent emissary to Mars has just proven that those rare Martian visitors that sometimes drop in on Earth -- a.k.a. Martian meteorites -- really are from the Red Planet. A key new measurement of Mars' atmosphere by NASA's Curiosity rover provides the most definitive evidence yet of the origins of Mars meteorites while at the same time providing a way to rule out Martian origins of other meteorites.
The new measurement is a high-precision count of two forms of argon gas -- Argon-36 and Argon-38 -- accomplished by the Sample Analysis at Mars (SAM) instrument on Curiosity. These lighter and heavier forms, or isotopes, of argon exist naturally throughout the solar system. But on Mars the ratio of light to heavy argon is skewed because a lot of that planet's original atmosphere was lost to space, with the lighter form of argon being taken away more readily because it rises to the top of the atmosphere more easily and requires less energy to escape. That's left the Martian atmosphere relatively enriched in the heavier Argon-38.
Years of past analyses by Earth-bound scientists of gas bubbles trapped inside Martian meteorites had already narrowed the Martian argon ratio to between 3.6 and 4.5 (that is, 3.6 to 4.5 atoms of Argon-36 to every one Argon-38), with the supposed Martian "atmospheric" value near four. Measurements by NASA's Viking landers in the 1970s put the Martian atmospheric ratio in the range of four to seven. The new SAM direct measurement on Mars now pins down the correct argon ratio at 4.2.
"We really nailed it," said Sushil Atreya of the University of Michigan, Ann Arbor, the lead author of a paper reporting the finding today in Geophysical Research Letters, a journal of the American Geophysical Union. "This direct reading from Mars settles the case with all Martian meteorites," he said.
One of the reasons scientists have been so interested in the argon ratio in Martian meteorites is that it was -- before Curiosity -- the best measure of how much atmosphere Mars has lost since the planet's earlier, wetter, warmer days billions of years ago. Figuring out the planet's atmospheric loss would enable scientists to better understand how Mars transformed from a once water-rich planet more like our own to today's drier, colder and less hospitable world.
Had Mars held onto its entire atmosphere and its original argon, Atreya explained, its ratio of the gas would be the same as that of the Sun and Jupiter. They have so much gravity that isotopes can't preferentially escape, so their argon ratio -- which is 5.5 -- represents that of the primordial solar system.
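A simple way to see the scale of the change is to compare the measured ratio with the primordial one (illustrative arithmetic on the numbers quoted above, not the mission team's escape modelling):

    solar_ratio = 5.5          # primordial Ar-36/Ar-38 (Sun and Jupiter)
    mars_ratio = 4.2           # SAM's direct measurement on Mars
    meteorite_range = (3.6, 4.5)

    # Relative depletion of light argon in today's Martian atmosphere.
    depletion = 1 - mars_ratio / solar_ratio
    print(f"{depletion:.0%}")  # ~24% below the primordial ratio

    # The SAM value also falls inside the range from Martian meteorites.
    print(meteorite_range[0] <= mars_ratio <= meteorite_range[1])  # True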
While argon comprises only a tiny fraction of the gases lost to space from Mars, it is special because it's a noble gas. That means the gas is inert, not reacting with other elements or compounds, and therefore a more straightforward tracer of the history of the Martian atmosphere.
"Other isotopes measured by SAM on Curiosity also support the loss of atmosphere, but none so directly as argon," said Atreya. "Argon is the clearest signature of atmospheric loss because it's chemically inert and does not interact or exchange with the Martian surface or the interior. This was a key measurement that we wanted to carry out on SAM."
NASA's Jet Propulsion Laboratory, Pasadena, Calif., manages the Curiosity mission for NASA's Science Mission Directorate, Washington. The SAM investigation on the rover is managed by NASA Goddard Space Flight Center, Greenbelt, Md.
Electrical System of the Heart
Posted Under:
See where the pacemaker cells start the electrical wave of depolarization, and how it gets all the way to the ventricles of the heart. Rishi is a pediatric infectious disease physician and works at Khan Academy. These videos do not provide medical advice and are for informational purposes only. The videos are not intended to be a substitute for professional medical advice, diagnosis or treatment. Always seek the advice of a qualified health provider with any questions you may have regarding a medical condition. Never disregard professional medical advice or delay in seeking it because of something you have read or seen in any Khan Academy video.
Lub Dub
Posted Under:
Ever wonder why the heart sounds the way it does? The opening and closing of the heart valves bring the heart rhythm alive with its "lub dub" beats. Rishi is a pediatric infectious disease physician and works at Khan Academy.
Flow through the Heart
Posted Under:
Learn how blood flows through the heart, and understand the difference between systemic and pulmonary blood flow. Rishi is a pediatric infectious disease physician and works at Khan Academy.
Blood circulation in the heart
Posted Under:
An overview of how blood circulates through the heart and around the body.
Human heart
Posted Under:
A brief overview of the structure and function of the human heart and circulatory system. You can watch more parts of this topic here....
Researchers Sequence Non-Infiltrating Bladder Cancer Exome
Posted Under:
Bladder cancer represents a serious public health problem in many countries, especially in Spain, where 11,200 new cases are recorded every year, one of the highest rates in the world. The majority of these tumours have a good prognosis -- 70-80% five-year survival after diagnosis -- and in around 80% of cases they do not infiltrate the bladder muscle at the time of diagnosis.
Despite this, many of the tumours recur, requiring periodic cystoscopic tumour surveillance. This type of follow-up affects patients' quality of life while incurring significant healthcare costs.
Researchers at the Spanish National Cancer Research Centre (CNIO), coordinated by Francisco X. Real, head of the Epithelial Carcinogenesis Group, and Nuria Malats, head of the Genetic & Molecular Epidemiology Group, have carried out the first exome sequencing for non-infiltrating bladder cancer, the most frequent type of bladder cancer and the one with the highest risk of recurrence (the exome is the part of the genome that contains protein synthesis information).
The results reveal new genetic pathways involved in the disease, such as cellular division and DNA repair, as well as new genes -- not previously described -- that might be crucial for understanding its origin and evolution.
"We know very little about the biology of bladder cancer, which would be useful for classifying patients, predicting relapses and even preventing the illness," says Cristina Balbás, a predoctoral researcher in Real's laboratory who is the lead author of the study.
The work consisted of analysing the exome from 17 patients diagnosed with bladder cancer and subsequently validating the data via the study of a specific group of genes in 60 additional patients.
"We found up to 9 altered genes that hadn't been described before in this type of tumour, and of these we found that STAG2 was inactive in almost 40% of the least aggressive tumours," says Real.
The researcher adds that: "Some of these genes are involved in previously undescribed genetic pathways in bladder cancer, such as cell division and DNA repair; also, we confirmed and extended other genetic pathways that had previously been described in this cancer type, such as chromatin remodelling."
An Unknown Agent in Bladder Cancer
The STAG2 gene was first associated with cancer just over two years ago, although "little is known about it, and nothing about its relationship to bladder cancer," says Balbás. Previous studies suggest it participates in chromosome separation during cell division (chromosomes contain the genetic material), which is where it might be related to cancer, although it has also been associated with maintenance of DNA's 3D structure and with gene regulation.
Contrary to what might be expected, the article reveals that tumours with an alteration in this gene frequently lack changes in the number of chromosomes, which indicates, according to Real, that "this gene participates in bladder cancer via different mechanisms than chromosome separation."
The authors have also found, by analysing tumour tissue from more than 670 patients, that alterations in STAG2 are associated, above all, with tumours from patients with a better prognosis. How and why these phenomena occur remains to be discovered, but the researchers predict that "mutations in STAG2 and other additional genes that we showed to be altered could provide new therapeutic opportunities in some patient sub-groups."
In a Surprise Finding, Gene Mutation Found Linked to Low-Risk Bladder Cancer
Posted Under:
An international research team led by scientists from Georgetown Lombardi Comprehensive Cancer Center has discovered a genetic mutation linked to low-risk bladder cancer. Their findings are reported online today in Nature Genetics.
The investigators identified STAG2 as one of the most commonly mutated genes in bladder cancer, particularly in tumors that do not spread. The finding suggests that checking the status of the gene may help identify patients who might do unusually well following cancer treatment, says the study's senior investigator, cancer geneticist Todd Waldman, MD, PhD, a professor of oncology at Georgetown Lombardi.
"Most bladder cancers are superficial tumors that have not spread to other parts of the body, and can therefore be easily treated and cured. However, a small fraction of these superficial tumors will recur and metastasize even after treatment," he says.
Because clinicians have been unable to definitively identify those potentially lethal cancers, all bladder cancer patients -- after surgery to remove tumors -- must undergo frequent endoscopic examinations of their bladder to look for signs of recurrence, says Waldman. This procedure, called cystoscopy, can be uncomfortable and is expensive.
"Our data show that STAG2 is one of the earliest initiating gene mutations in 30-40 percent of superficial or 'papillary-type' bladder tumors, and that these tumors are unlikely to recur," says David Solomon, MD, PhD, a lead author on the study. Solomon is a graduate of the Georgetown MD/PhD program and is currently a pathology resident at the University of California, San Francisco.
"We have developed a simple test for pathologists to easily assess the STAG2 status of these tumors, and are currently performing a larger study to determine if this test should enter routine clinical use for predicting the likelihood that a superficial bladder cancer will recur," Solomon says.
For the study, the researchers examined 2,214 human tumors from virtually all sites of the human body for STAG2 inactivation and found that STAG2 was most commonly inactivated in bladder cancer, the fifth most common human cancer. In follow-up work, they found that 36 percent of low-risk bladder cancers -- those that never invaded the bladder muscle or progressed -- had mutated STAG2. That suggests that testing the STAG2 status of the cancer could help guide clinical care, Waldman says. "A positive STAG2 mutation could mean that patient is at lower risk of recurrence."
The researchers also found that 16 percent of the bladder cancers that did spread, or metastasize, had mutated STAG2.
STAG2 mutations have been found in a number of cancers, and this finding in bladder cancer adds new information, he says.
Astrophysicists Find First Evidence of a Water-Rich Rocky Planetary Body Outside Our Solar System
Posted Under:
Astrophysicists have found the first evidence of a water-rich rocky planetary body outside our solar system in its shattered remains orbiting a white dwarf.
A new study by scientists at the Universities of Warwick and Cambridge published in the journal Science analysed the dust and debris surrounding the white dwarf star GD 61, 170 light years away.
Using observations obtained with the Hubble Space Telescope and the large Keck telescope on Hawaii, they found an excess of oxygen -- a chemical signature that indicates that the debris had once been part of a bigger body originally composed of 26 per cent water by mass. By contrast, only approximately 0.023 per cent of Earth's mass is water.
Evidence for water outside our solar system has previously been found in the atmosphere of gas giants, but this study marks the first time it has been pinpointed in a rocky body, making it of significant interest in our understanding of the formation and evolution of habitable planets and life.
We know from our own solar system that the dwarf planet Ceres contains ice buried beneath an outer crust, and the researchers draw a parallel between the two bodies. Scientists believe that bodies like Ceres were the source of the bulk of our own water on Earth.
The researchers suggest it is most likely that the water detected around the white dwarf GD 61 came from a minor planet -- at least 90 km in diameter, but potentially much bigger -- that once orbited the parent star before it became a white dwarf.
Like Ceres, the water was most likely in the form of ice below the planet's surface. From the amount of rocks and water detected in the outer envelope of the white dwarf, the researchers estimate that the disrupted planetary body had a diameter of at least 90km.
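To get a feel for the quantities involved, the quoted minimum size and water fraction can be turned into rough masses, assuming a typical rocky bulk density of about 3 g/cm3 (an assumption for illustration; the density is not given in the text):

    import math

    diameter_km = 90        # minimum diameter quoted by the researchers
    density = 3000          # kg/m^3, assumed rocky bulk density (illustrative)
    water_fraction = 0.26   # water mass fraction inferred from the oxygen excess

    radius_m = diameter_km * 1e3 / 2
    volume_m3 = (4 / 3) * math.pi * radius_m ** 3
    mass_kg = density * volume_m3
    print(f"total mass ~{mass_kg:.1e} kg")                   # ~1.1e18 kg
    print(f"water mass ~{water_fraction * mass_kg:.1e} kg")  # ~3e17 kg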
However, because their observations can only detect what is being accreted in recent history, the estimate of its mass is on the conservative side.
It is likely that the object was as large as Vesta, the largest minor planet in the solar system. In its former life, GD 61 was a star somewhat bigger than our Sun, and host to a planetary system.
About 200 million years ago, GD 61 entered its death throes and became a white dwarf, yet, parts of its planetary system survived. The water-rich minor planet was knocked out of its regular orbit and plunged into a very close orbit, where it was shredded by the star's gravitational force. The researchers believe that destabilising the orbit of the minor planet requires a so far unseen, much larger planet going around the white dwarf.
Professor Boris Gänsicke of the Department of Physics at the University of Warwick said: "At this stage in its existence, all that remains of this rocky body is simply dust and debris that has been pulled into the orbit of its dying parent star.
"However this planetary graveyard swirling around the embers of its parent star is a rich source of information about its former life. "In these remnants lie chemical clues which point towards a previous existence as a water-rich terrestrial body.
"Those two ingredients -- a rocky surface and water -- are key in the hunt for habitable planets outside our solar system so it's very exciting to find them together for the first time outside our solar system."
Lead author Jay Farihi, from Cambridge's Institute of Astronomy, said: "The finding of water in a large asteroid means the building blocks of habitable planets existed -- and maybe still exist -- in the GD 61 system, and likely also around a substantial number of similar parent stars.
"These water-rich building blocks, and the terrestrial planets they build, may in fact be common -- a system cannot create things as big as asteroids and avoid building planets, and GD 61 had the ingredients to deliver lots of water to their surfaces," Farihi said.
"Our results demonstrate that there was definitely potential for habitable planets in this exoplanetary system."
For their analysis, the researchers used ultraviolet spectroscopy data on the white dwarf GD 61 obtained with the Cosmic Origins Spectrograph on board the Hubble Space Telescope. Because Earth's atmosphere blocks ultraviolet light, such a study can only be carried out from space.
Additional observations were obtained with the 10-metre mirror of the W.M. Keck Observatory on Mauna Kea in Hawaii.
The Hubble and Keck data allow the researchers to identify the different chemical elements polluting the outer layers of the white dwarf. Using a sophisticated computer model of the white dwarf atmosphere, developed by Detlev Koester from the University of Kiel, they can then infer the chemical composition of the shredded minor planet.
To date, observations of 12 destroyed exoplanets orbiting white dwarfs have been carried out, but this is the first time the signature of water has been found.
Novel Device Uses Only Sunlight and Wastewater to Produce Hydrogen Gas
Posted Under:
A novel device that uses only sunlight and wastewater to produce hydrogen gas could provide a sustainable energy source while improving the efficiency of wastewater treatment.
A research team led by Yat Li, associate professor of chemistry at the University of California, Santa Cruz, developed the solar-microbial device and reported their results in a paper published in the American Chemical Society journal ACS Nano. The hybrid device combines a microbial fuel cell (MFC) and a type of solar cell called a photoelectrochemical cell (PEC). In the MFC component, bacteria degrade organic matter in the wastewater, generating electricity in the process. The biologically generated electricity is delivered to the PEC component to assist the solar-powered splitting of water (electrolysis) that generates hydrogen and oxygen.
Either a PEC or MFC device can be used alone to produce hydrogen gas. Both, however, require a small additional voltage (an "external bias") to overcome the thermodynamic energy barrier for proton reduction into hydrogen gas. The need to incorporate an additional electric power element adds significantly to the cost and complication of these types of energy conversion devices, especially at large scales. In comparison, Li's hybrid solar-microbial device is self-driven and self-sustained, because the combined energy from the organic matter (harvested by the MFC) and sunlight (captured by the PEC) is sufficient to drive electrolysis of water.
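The "thermodynamic energy barrier" corresponds to the reversible potential of water splitting, which follows from standard electrochemistry (textbook values, not figures from the paper):

    # Minimum (reversible) cell voltage for water electrolysis at 25 degrees C.
    dG = 237.1e3   # J/mol, standard Gibbs free energy of H2O -> H2 + 1/2 O2
    n = 2          # electrons transferred per H2 molecule
    F = 96485      # C/mol, Faraday constant

    E_min = dG / (n * F)
    print(f"{E_min:.2f} V")  # ~1.23 V; real cells need more due to overpotentials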
In effect, the MFC component can be regarded as a self-sustained "bio-battery" that provides extra voltage and energy to the PEC for hydrogen gas generation. "The only energy sources are wastewater and sunlight," Li said. "The successful demonstration of such a self-biased, sustainable microbial device for hydrogen generation could provide a new solution that can simultaneously address the need for wastewater treatment and the increasing demand for clean energy."
Microbial fuel cells rely on unusual bacteria, known as electrogenic bacteria, that are able to generate electricity by transferring metabolically-generated electrons across their cell membranes to an external electrode. Li's group collaborated with researchers at Lawrence Livermore National Laboratory (LLNL) who have been studying electrogenic bacteria and working to enhance MFC performance. Initial "proof-of-concept" tests of the solar-microbial (PEC-MFC) device used a well-studied strain of electrogenic bacteria grown in the lab on artificial growth medium. Subsequent tests used untreated municipal wastewater from the Livermore Water Reclamation Plant. The wastewater contained both rich organic nutrients and a diverse mix of microbes that feed on those nutrients, including naturally occurring strains of electrogenic bacteria.
When fed with wastewater and illuminated in a solar simulator, the PEC-MFC device showed continuous production of hydrogen gas at an average rate of 0.05 m3/day, according to LLNL researcher and coauthor Fang Qian. At the same time, the turbid black wastewater became clearer. The soluble chemical oxygen demand--a measure of the amount of organic compounds in water, widely used as a water quality test--declined by 67 percent over 48 hours.
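Converting the reported production rate into chemical energy gives a sense of scale (a sketch assuming roughly ambient conditions, which the excerpt does not specify):

    volume_per_day = 0.05  # m^3 of H2 per day, as reported
    molar_volume = 0.024   # m^3/mol for an ideal gas near 25 degrees C (assumed)
    hhv = 285.8e3          # J/mol, higher heating value of hydrogen

    moles_per_day = volume_per_day / molar_volume
    energy_kwh = moles_per_day * hhv / 3.6e6
    print(f"{moles_per_day:.1f} mol/day, ~{energy_kwh:.2f} kWh/day")  # ~2.1 mol, ~0.17 kWh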
The researchers also noted that hydrogen generation declined over time as the bacteria used up the organic matter in the wastewater. Replenishment of the wastewater in each feeding cycle led to complete restoration of electric current generation and hydrogen gas production.
Qian said the researchers are optimistic about the commercial potential for their invention. Currently they are planning to scale up the small laboratory device to make a larger 40-liter prototype continuously fed with municipal wastewater. If results from the 40-liter prototype are promising, they will test the device on site at the wastewater treatment plant.
"The MFC will be integrated with the existing pipelines of the plant for continuous wastewater feeding, and the PEC will be set up outdoors to receive natural solar illumination," Qian said.
"Fortunately, the Golden State is blessed with abundant sunlight that can be used for the field test," Li added.
Qian and Hanyu Wang, a graduate student in Li's lab at UC Santa Cruz, are co-first authors of the ACS Nano paper. The other coauthors include UCSC graduate student Gongming Wang; LLNL researcher Yongqin Jiao; and Zhen He of Virginia Polytechnic Institute & State University. This research was supported by the National Science Foundation and Department of Energy.
Spirit Science 18 ~ The Four Elements
Posted Under:
We're going to keep moving forward on our "Geometry Lessons" this week, and explore the basis of the Four Elements. We're also going to look at the basic systems of early Numerology in Sacred Geometry: 1's, 2's, 3's, 4's, and so on (which is very connected to the 4 elements as well). We also look at Art as an expression, and what it means to say "Art will change the world". Looking at Art is no longer enough. We are learning to BE art, to be the creation that you want to express. Such is the nature of the transformation we are going through today.
Spirit Science 17 ~ Universal Geometry
Posted Under:
Let's get back into some geometry, shall we? I know last week I said we were going to do a whole bunch of short mailbag questions, but I got really inspired and somehow churned this bad-boy out instead! This week, we take a step back to basics and look at the Universal Geometry behind all things, an expansion episode of Lesson 6 - The Flower of Life. From here, we'll be able to dive into topics such as the four elements, and connect them to other discussions we have had before. Please help by sharing this video, and opening up more public discussion about Cosmic Geometry in your everyday life.
Spirit Science 16 ~ The Shift of Ages
Posted Under:
One of the most frequently asked questions we get is "What do you think will happen with 2012?". We thought the best way to shed light on the situation was simply to make a video about it! So without further ado, here is our 2012 video! Thank you so much to everyone who helped make this video possible! Put together by SpiritPatch. Additional stuff by The King of Atlantis! Music by Matteo Penna. Credits - a live video of Elijah and the Band of Light (seriously, check this guy out, his music is pure love and light!). We're looking for guest artists on Spirit Science! If you have a drawing tablet and want to help out, mail to - jenakan92@gmail.com / jenakan2.2@gmail.com
Spirit Science 15 ~ Power of the Heart
Posted Under:
As science looks more and more at the human body with greater technology, we have begun to come full circle in understanding what the ancients knew about the heart, the brain, and divine consciousness. What is the heart? Is it more than just a pump for blood? Or could the truth about its power be related to the essence of your entire being, with a field of energy so great that it can transform not just your own being into that of light, love and happiness, but even those around you? What is the shining light of the heart, and how can you access it and gain the inner knowing of who you are and why you are here? Produced by Spirit Science. Animated by Spirit Patch. Colouring by The Atlantis King. Music by Oliver Gregory, Jahon Mikal, Tilkanauts, and Matteo Penna. Thank you so much for sending us music to use in these videos!! This video was produced out of love for you and the entire human species; please help us share it with as many people as possible to maximize human ascension and get the information to as many people as possible.
Climate Puzzle Over Origins of Life On Earth
Posted Under:
The mystery of why life on Earth evolved when it did has deepened with the publication of a new study in the latest edition of the journal Science.
Scientists at the CRPG-CNRS University of Lorraine, The University of Manchester and the Institut de Physique du Globe de Paris have ruled out a theory as to why the planet was warm enough to sustain the planet's earliest life forms when the Sun's energy was roughly three-quarters the strength it is today.
Life evolved on Earth during the Archean, between 3.8 and 2.4 billion years ago, but the weak Sun should have meant the planet was too cold for life to take hold at this time; scientists have therefore been trying to find an explanation for this conundrum, dubbed the 'faint young Sun paradox'.
"During the Archean the solar energy received at the surface of the Earth was about 20 to 25 % lower than present," said study author, Dr Ray Burgess, from Manchester's School of Earth, Atmospheric and Environmental Sciences. "If the greenhouse gas composition of the atmosphere was comparable to current levels then the Earth should have been permanently glaciated but geological evidence suggests there were no global glaciations before the end of the Archean and that liquid water was widespread."
One explanation for the puzzle was that greenhouse gas levels -- one of the regulators of Earth's climate -- were significantly higher during the Archean than they are today.
"To counter the effect of the weaker Sun, carbon dioxide concentrations in the Earth's atmosphere would need to have been 1,000 times higher than present," said lead author Professor Bernard Marty, from the CRPG-CNRS University of Lorraine. "However, ancient fossil soils -- the best indicators of ancient carbon dioxide levels in the atmosphere -- suggest only modest levels during the Archean. Other atmospheric greenhouse gases were also present, in particular ammonia and methane, but these gases are fragile and easily destroyed by ultraviolet solar radiation, so are unlikely to have had any effect."
But another climate-warming theory -- one the team wanted to test -- is that the amount of nitrogen could have been higher in the ancient atmosphere, which would amplify the greenhouse effect of carbon dioxide and allow Earth to remain ice-free.
The team analysed tiny samples of air trapped in water bubbles in quartz from a region of northern Australia that has extremely old and exceptionally well-preserved rocks.
"We measured the amount and isotopic abundances of nitrogen and argon in the ancient air," said Professor Marty. "Argon is a noble gas which, being chemically inert, is an ideal element to monitor atmospheric change. Using the nitrogen and argon measurements we were able to reconstruct the amount and isotope composition of the nitrogen dissolved in the water and, from that, the atmosphere that was once in equilibrium with the water."
The researchers found that the partial pressure of nitrogen in the Archean atmosphere was similar, possibly even slightly lower, than it is at present, ruling out nitrogen as one of the main contenders for solving the early climate puzzle.
Dr Burgess added: "The amount of nitrogen in the atmosphere was too low to enhance the greenhouse effect of carbon dioxide sufficiently to warm the planet. However, our results did give a higher than expected pressure reading for carbon dioxide -- at odds with the estimates based on fossil soils -- which could be high enough to counteract the effects of the faint young Sun and will require further investigation."
Astronomers Discover Large 'Hot' Cocoon Around a Small Baby Star
Posted Under:
An international research team, led by a researcher at the University of Electro-Communications, observed the infrared dark cloud G34.43+00.24 MM3 with ALMA and discovered a baby star surrounded by a large hot cloud. This hot cloud is about ten times larger than those found around typical solar-mass baby stars.
Hot molecular clouds around new-born stars are called "hot cores" and have temperatures of around -160 degrees Celsius, about 100 degrees hotter than normal molecular clouds. The large size of the hot core discovered by ALMA shows that much more energy is emitted from the central baby star than from typical solar-mass young stars. This may be due to a higher mass infall rate, or to the multiplicity of the central baby star. This result points to a large diversity in the star formation process.
The research findings are presented in the article "ALMA Observations of the IRDC Clump G34.43+00.24 MM3: Hot Core and Molecular Outflows," published in the Astrophysical Journal, Vol. 775, September 20, 2013.
Stars are formed in very cold (-260 degrees Celsius) gas and dust clouds. Infrared Dark Clouds (IRDCs) are dense regions of such clouds in which clusters of stars are thought to form. Since most stars are born as members of star clusters, investigating IRDCs plays a crucial role in a comprehensive understanding of the star formation process.
A baby star is surrounded by its natal gas and dust cloud, and the cloud is warmed up from its center. The temperature of the central part of some, but not all, of these clouds reaches as high as -160 degrees Celsius. Astronomers call such clouds "hot cores" -- not hot by Earth standards, but hot for a cosmic cloud. Inside hot cores, various molecules, originally trapped in the ice mantles around dust particles, are sublimated. Organic molecules such as methanol (CH3OH), ethyl cyanide (CH3CH2CN), and methyl formate (HCOOCH3) are abundant in hot cores.
An international research team, led by Takeshi Sakai at the University of Electro-Communications, Japan, used ALMA to observe an IRDC named G34.43+00.24 MM3 (hereafter MM3) in the constellation Aquila (the Eagle). They discovered a young object from which the methanol molecular line is strongly emitted. A detailed investigation showed that the temperature of the methanol gas is -140 degrees Celsius, meaning that MM3 harbors a baby star surrounded by a hot core. The hot core is as large as 800 by 300 astronomical units (au; 1 au equals the mean distance between the Sun and Earth, 150 million km). Typical hot cores around low-mass young stars are several tens to a hundred au across, so the hot core in MM3 is exceptionally large. Sakai says: "Thanks to the high sensitivity and spatial resolution, we needed only a few hours to discover a previously unknown baby star. This is an important step in understanding the star formation process in a cluster-forming region."
The team also observed radio emission from carbon sulfide (CS) and silicon monoxide (SiO) to reveal the detailed structure of the molecular outflow from the baby star. The speed of the emanating gas is 28 km/s and its extent is 4,400 au. Based on these values, the team calculates an outflow age of only 740 years. Although molecular outflows are common features around protostars, an outflow as young as the one in MM3 is quite rare. In summary, ALMA finds that the protostar in MM3 is very young but has a giant hot core.
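The quoted age follows from simple kinematics -- extent divided by expansion speed -- as a quick check confirms (values from the text):

    AU_M = 1.496e11    # metres per astronomical unit
    YEAR_S = 3.156e7   # seconds per year

    extent_m = 4400 * AU_M  # outflow extent
    speed_ms = 28e3         # gas speed, m/s

    age_years = extent_m / speed_ms / YEAR_S
    print(f"~{age_years:.0f} years")  # ~745, consistent with the reported ~740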
Why is the hot core in MM3 so large? To warm up such a large volume of gas, the baby star must emit far more energy than typical ones. Protostars shine by converting the gravitational energy of infalling material into thermal energy, so the large hot core in MM3 may reflect a mass infall rate higher than previously assumed. Another possibility is that two or more protostars are embedded in the hot core. The team could not determine the cause from this observation alone. "ALMA's spatial resolution will improve greatly in the near future," Sakai says. "Then we can reveal much more detail of the material infalling toward the protostar, which will help us answer the mystery behind the diversity in star formation."
Surprisingly Simple Scheme for Self-Assembling Robots
Posted Under:
Small cubes with no exterior moving parts can propel themselves forward, jump on top of each other, and snap together to form arbitrary shapes.
In 2011, when an MIT senior named John Romanishin proposed a new design for modular robots to his robotics professor, Daniela Rus, she said, "That can't be done."
Two years later, Rus showed her colleague Hod Lipson, a robotics researcher at Cornell University, a video of prototype robots, based on Romanishin's design, in action. "That can't be done," Lipson said.
In November, Romanishin -- now a research scientist in MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) -- Rus, and postdoc Kyle Gilpin will establish once and for all that it can be done, when they present a paper describing their new robots at the IEEE/RSJ International Conference on Intelligent Robots and Systems.
Known as M-Blocks, the robots are cubes with no external moving parts. Nonetheless, they're able to climb over and around one another, leap through the air, roll across the ground, and even move while suspended upside down from metallic surfaces.
Inside each M-Block is a flywheel that can reach speeds of 20,000 revolutions per minute; when the flywheel is braked, it imparts its angular momentum to the cube. On each edge of an M-Block, and on every face, are cleverly arranged permanent magnets that allow any two cubes to attach to each other.
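To see why braking a fast flywheel can flip a cube, consider a toy angular-momentum calculation (all module parameters below are hypothetical, not MIT's specifications): the brake transfers the flywheel's angular momentum to the cube, which then pivots about a bottom edge, and it flips if the resulting rotational energy can lift its center of mass over that edge.

```python
# Toy calculation of the flip principle (all parameters below are
# hypothetical, not MIT's specifications). Braking the flywheel transfers
# its angular momentum to the cube, which pivots about a bottom edge.
import math

rpm = 20_000                                  # flywheel speed from the article
omega_fly = rpm * 2 * math.pi / 60            # flywheel angular speed, rad/s
I_fly = 3e-5                                  # flywheel inertia, kg*m^2 (assumed)
m, s, g = 0.45, 0.05, 9.81                    # cube mass (kg), edge (m) (assumed)

I_cube = (2 / 3) * m * s**2                   # solid cube about one edge
omega_cube = I_fly * omega_fly / I_cube       # spin after an (assumed) instant
                                              # brake, by momentum conservation

lift = m * g * s * (1 / math.sqrt(2) - 0.5)   # energy to raise the center of
                                              # mass over the pivot edge
kinetic = 0.5 * I_cube * omega_cube**2

print(f"cube spin after brake: {omega_cube:.0f} rad/s")
print(f"energy: {kinetic:.2f} J available vs {lift:.3f} J needed to flip")
```

Even with modest assumed parameters, the available energy exceeds the lift requirement by a wide margin, consistent with cubes that can leap rather than merely tip over.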
"It's one of these things that the [modular-robotics] community has been trying to do for a long time," says Rus, a professor of electrical engineering and computer science and director of CSAIL. "We just needed a creative insight and somebody who was passionate enough to keep coming at it -- despite being discouraged."
Embodied abstraction
As Rus explains, researchers studying reconfigurable robots have long used an abstraction called the sliding-cube model. In this model, if two cubes are face to face, one of them can slide up the side of the other and, without changing orientation, slide across its top.
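A minimal two-dimensional sketch of those move rules (an illustration of the abstraction, not the group's code; the `legal_moves` helper and the gravity-free grid are assumptions made here for clarity):

```python
# Minimal 2-D sketch of the sliding-cube model (an illustration of the
# abstraction, not the group's code; the model ignores gravity). A cube
# moves by sliding along a neighboring "substrate" cube, or by a convex
# transition around that substrate's corner.
DIRS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def legal_moves(cube, occupied):
    """Yield the cells `cube` may move to under the sliding-cube model."""
    others = occupied - {cube}
    x, y = cube
    for px, py in DIRS:                          # direction of a substrate cube
        if (x + px, y + py) not in others:
            continue
        for dx, dy in DIRS:                      # slide direction, perpendicular
            if dx * px + dy * py != 0:
                continue
            step = (x + dx, y + dy)              # cell beside the substrate
            corner = (x + dx + px, y + dy + py)  # cell around its corner
            if step in others:
                continue                         # blocked
            if corner in others:
                yield step                       # straight slide along a surface
            else:
                yield corner                     # convex transition to the far face

# Two cubes face to face: the left cube can reach the top (or underside)
# of its neighbor in a single move, without changing orientation.
occupied = {(0, 0), (1, 0)}
print(sorted(set(legal_moves((0, 0), occupied))))   # [(1, -1), (1, 1)]
```

The convex-transition branch is what carries a cube up a neighbor's side and onto its top in one move, which is exactly the maneuver the model describes.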
The sliding-cube model simplifies the development of self-assembly algorithms, but the robots that implement them tend to be much more complex devices. Rus' group, for instance, previously developed a modular robot called the Molecule, which consisted of two cubes connected by an angled bar and had 18 separate motors. "We were quite proud of it at the time," Rus says.
According to Gilpin, existing modular-robot systems are also "statically stable," meaning that "you can pause the motion at any point, and they'll stay where they are." What enabled the MIT researchers to drastically simplify their robots' design was giving up on the principle of static stability.
"There's a point in time when the cube is essentially flying through the air," Gilpin says. "And you are depending on the magnets to bring it into alignment when it lands. That's something that's totally unique to this system."
That's also what made Rus skeptical about Romanishin's initial proposal. "I asked him to build a prototype," Rus says. "Then I said, 'OK, maybe I was wrong.'"
Sticking the landing
To compensate for its static instability, the researchers' robot relies on some ingenious engineering. On each edge of a cube are two cylindrical magnets, mounted like rolling pins. When two cubes approach each other, the magnets naturally rotate, so that north poles align with south, and vice versa. Any face of any cube can thus attach to any face of any other.
The cubes' edges are also beveled, so when two cubes are face to face, there's a slight gap between their magnets. When one cube begins to flip on top of another, the bevels, and thus the magnets, touch. The connection between the cubes becomes much stronger, anchoring the pivot. On each face of a cube are four more pairs of smaller magnets, arranged symmetrically, which help snap a moving cube into place when it lands on top of another.
As with any modular-robot system, the hope is that the modules can be miniaturized: the ultimate aim of most such research is hordes of swarming microbots that can self-assemble, like the "liquid metal" androids in the movie "Terminator 2." And the simplicity of the cubes' design makes miniaturization promising.
But the researchers believe that a more refined version of their system could prove useful even at something like its current scale. Armies of mobile cubes could temporarily repair bridges or buildings during emergencies, or raise and reconfigure scaffolding for building projects. They could assemble into different types of furniture or heavy equipment as needed. And they could swarm into environments hostile or inaccessible to humans, diagnose problems, and reorganize themselves to provide solutions.
Strength in diversity
The researchers also imagine that among the mobile cubes could be special-purpose cubes, containing cameras, or lights, or battery packs, or other equipment, which the mobile cubes could transport. "In the vast majority of other modular systems, an individual module cannot move on its own," Gilpin says. "If you drop one of these along the way, or something goes wrong, it can rejoin the group, no problem."
"It's one of those things that you kick yourself for not thinking of," Cornell's Lipson says. "It's a low-tech solution to a problem that people have been trying to solve with extraordinarily high-tech approaches."
"What they did that was very interesting is they showed several modes of locomotion," Lipson adds. "Not just one cube flipping around, but multiple cubes working together, multiple cubes moving other cubes -- a lot of other modes of motion that really open the door to many, many applications, much beyond what people usually consider when they talk about self-assembly. They rarely think about parts dragging other parts -- this kind of cooperative group behavior."
In ongoing work, the MIT researchers are building an army of 100 cubes, each of which can move in any direction, and designing algorithms to guide them. "We want hundreds of cubes, scattered randomly across the floor, to be able to identify each other, coalesce, and autonomously transform into a chair, or a ladder, or a desk, on demand," Romanishin says.
Snake and eel bodies
Posted Under:
Snake and eel bodies are elongated, slender, and flexible in all three dimensions. This striking body plan has evolved many times independently over the more than 500 million years of vertebrate history. Based on the current state of knowledge, such extreme elongation of the body axis arose in one of two ways: either through the elongation of the individual vertebrae of the vertebral column, which thus became longer, or through the development of additional vertebrae and associated muscle segments.
Long body thanks to doubling of the vertebral arches
A team of paleontologists from the University of Zurich headed by Professor Marcelo Sánchez-Villagra now reveals that a third, previously unknown mechanism of axial skeleton elongation characterized the early evolution of fishes, as shown by an exceptionally preserved form. Unlike other known fish with elongate bodies, the vertebral column of Saurichthys curionii does not have one vertebral arch per myomeric segment, but two, which is unique. This resulted in an elongation of the body and gave it an overall elongate appearance. "This evolutionary pattern for body elongation is new," explains Erin Maxwell, a postdoc from Sánchez-Villagra's group. "Previously, we only knew about an increase in the number of vertebrae and muscle segments or the elongation of the individual vertebrae."
The fossils studied come from the Monte San Giorgio find in Ticino, which was declared a world heritage site by UNESCO in 2003. The researchers owe their findings to the fortunate circumstance that not only skeletal parts but also the tendons and tendon attachments surrounding the muscles of the primitive predatory fish had survived intact. Due to the shape and arrangement of the preserved tendons, the scientists are also able to draw conclusions as to the flexibility and swimming ability of the fossilized fish genus.
According to Maxwell, Saurichthys curionii was certainly not as flexible as today's eels and, unlike modern oceanic fishes such as tuna, was probably unable to swim for long distances at high speed. Based upon its appearance and lifestyle, the roughly half-meter-long fish is most comparable to the garfish or needlefish that exist today.
Plastic Waste Is a Hazard for Subalpine Lakes, Too
Posted Under:
Many subalpine lakes may look beautiful and even pristine, but new evidence suggests they may also be contaminated with potentially hazardous plastics. Researchers say those tiny microplastics are likely finding their way into the food web through a wide range of freshwater invertebrates too.
The findings, based on studies of Italy's Lake Garda and reported on October 7th in Current Biology, a Cell Press publication, suggest that the problem of plastic pollution isn't limited to the ocean.
"Next to mechanical impairments of swallowed plastics mistaken as food, many plastic-associated chemicals have been shown to be carcinogenic, endocrine-disrupting, or acutely toxic," said Christian Laforsch of the University of Bayreuth in Germany. "Moreover, the polymers can adsorb toxic hydrophobic organic pollutants and transport these compounds to otherwise less polluted habitats. Along this line, plastic debris can act as vector for alien species and diseases."
The researchers chose Lake Garda as a starting point for investigating freshwater contamination with micro- and macroplastics because they expected it to be less polluted given its subalpine location. What they found was a surprise: the numbers of microplastic particles in sediment samples from Lake Garda were similar to those found in studies of marine beach sediments.
The size ranges of microplastics found by Laforsch's team suggested that they might find their way into organisms living in the lake. Indeed, the researchers showed that freshwater invertebrates from worms to water fleas will ingest artificially ground fluorescent microplastics in the lab.
The findings in Lake Garda come as bad news for lakes generally, with uncertain ecological and economic consequences.
"The mere existence of microplastic particles in a subalpine headwater suggests an even higher relevance of plastic particles in lowland waters," Laforsch said. He recommends more research and standardized surveillance guidelines to control for microplastic contamination in freshwater ecosystems, as is already required for marine systems.
The public can do its part by putting trash where it belongs. The shape and type of plastic particles found in the study indicate that they started as larger pieces of plastic, most likely originating from post-consumer products.