The human brain keeps changing throughout a person’s lifetime. New connections are continually created while synapses that are no longer in use degenerate. To date, little is known about the mechanisms behind these processes. Jülich neuroinformatician Dr. Markus Butz has now been able to ascribe the formation of new neural networks in the visual cortex to a simple homeostatic rule that is also the basis of many other self-regulating processes in nature. With this explanation, he and his colleague Dr. Arjen van Ooyen from Amsterdam also provide a new theory on the plasticity of the brain – and a novel approach to understanding learning processes and treating brain injuries and diseases.
The brains of adult humans are by no means hard wired. Scientists have repeatedly established this fact over the last few years using different imaging techniques. This so-called neuroplasticity not only plays a key role in learning processes, it also enables the brain to recover from injuries and compensate for the loss of functions. Researchers only recently found out that even in the adult brain, not only do existing synapses adapt to new circumstances, but new connections are constantly formed and reorganized. However, it was not yet known how these natural rearrangement processes are controlled in the brain. In the open-access journal PLOS Computational Biology, Butz and van Ooyen now present a simple rule that explains how these new networks of neurons are formed (DOI: 10.1371/journal.pcbi.1003259).
“It’s very likely that the structural plasticity of the brain is the basis for long-term memory formation,” says Markus Butz, who has been working at the recently established Simulation Laboratory Neuroscience at the Jülich Supercomputing Centre for the past few months. “And it’s not just about learning. Following the amputation of extremities, brain injury, the onset of neurodegenerative diseases, and strokes, huge numbers of new synapses are formed in order to adapt the brain to the lasting changes in the patterns of incoming stimuli.”
Activity regulates synapse formation
These results show that the formation of new synapses is driven by the tendency of neurons to maintain a ‘pre-set’ electrical activity level. If the average electric activity falls below a certain threshold, the neurons begin to actively build new contact points. These are the basis for new synapses that deliver additional input – the neuron firing rate increases. This also works the other way round: as soon as the activity level exceeds an upper limit, the number of synaptic connections is reduced to prevent any overexcitation – the neuron firing rate falls. Similar forms of homeostasis frequently occur in nature, for example in the regulation of body temperature and blood sugar levels.
However, Markus Butz stresses that this does not work without a certain minimal excitation of the neurons: “A neuron that no longer receives any stimuli loses even more synapses and will die off after some time. We must take this restriction into account if we want the results of our simulations to agree with observations.” Using the visual cortex as an example, the neuroscientists have studied the principles according to which neurons form new connections and abandon existing synapses. In this region of the brain, about 10% of the synapses are continuously regenerated. When the retina is damaged, this percentage increases even further. Using computer simulations, the authors succeeded in reconstructing the reorganization of the neurons in a way that conforms to experimental results from the visual cortex of mice and monkeys with damaged retinas.
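The two rules described above — grow synapses below the activity set-point, prune above it, but keep losing synapses when input drops below a survival minimum — can be sketched in a few lines of Python. All numerical values here are illustrative assumptions, not parameters from the paper:

```python
TARGET = 1.0         # assumed set-point for average electrical activity
EPSILON = 0.1        # assumed tolerance band around the set-point
MIN_ACTIVITY = 0.05  # assumed survival minimum; below it, synapses keep degenerating

def update_synapse_count(activity, n_synapses):
    """One homeostatic update step for a single model neuron."""
    if activity < MIN_ACTIVITY:
        # A neuron receiving almost no stimuli loses even more synapses.
        return max(0, n_synapses - 1)
    if activity < TARGET - EPSILON:
        return n_synapses + 1            # build new contact points to raise input
    if activity > TARGET + EPSILON:
        return max(0, n_synapses - 1)    # prune to prevent overexcitation
    return n_synapses                    # within the homeostatic band: no change
```

Iterating a rule of this kind over a whole network, with each neuron's activity driven by its current inputs, is the sort of self-organizing process the simulations describe.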
The visual cortex is particularly suitable for demonstrating the new growth rule, because it has a property referred to as retinotopy: This means that points projected beside each other onto the retina are also arranged beside each other when they are projected onto the visual cortex, just like on a map. If areas of the retina are damaged, the cells onto which the associated images are projected receive different inputs. “In our simulations, you can see that areas which no longer receive any input from the retina start to build crosslinks, which allow them to receive more signals from their neighbouring cells,” says Markus Butz. These crosslinks are formed slowly from the edge of the damaged area towards the centre, in a process resembling the healing of a wound, until the original activity level is more or less restored.
Synaptic and structural plasticity
“The new growth rule provides structural plasticity with a principle that is almost as simple as that of synaptic plasticity,” says co-author Arjen van Ooyen, who has been working on models for the development of neural networks for decades. As early as 1949, psychology professor Donald Olding Hebb discovered that connections between neurons that are frequently activated become stronger, while those that exchange little information become weaker. Today, many scientists believe that this Hebbian principle plays a central role in learning and memory processes. While synaptic plasticity is involved primarily in short-term processes that take from a few milliseconds to several hours, structural plasticity extends over longer time scales, from several days to months.
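For contrast, the Hebbian principle for synaptic plasticity can be written as a one-line weight update. The learning-rate and decay constants below are illustrative assumptions, not values from any particular model:

```python
def hebbian_update(w, pre, post, lr=0.01, decay=0.001):
    """Hebbian weight update: synapses between co-active neurons strengthen,
    while a small decay term weakens connections that carry little
    correlated activity."""
    return w + lr * pre * post - decay * w
```

Whereas this rule changes the strength of existing connections on fast time scales, the homeostatic growth rule creates and removes the connections themselves.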
Structural plasticity therefore plays a particularly important part during the (early) rehabilitation phase of patients affected by neurological diseases, which also lasts for weeks and months. The vision driving the project is that valuable ideas for the treatment of stroke patients could result from accurate predictions of synapse formation. If doctors knew how the brain structure of a patient will change and reorganize during treatment, they could determine the ideal times for phases of stimulation and rest, thus improving treatment efficiency.
New approach for numerous applications
“It was previously assumed that structural plasticity also follows the principle of Hebbian plasticity. The findings suggest that structural plasticity is governed by the homeostatic principle instead, which was not taken into consideration before,” says Prof. Abigail Morrison, head of the Simulation Laboratory Neuroscience at Jülich. Her team is already integrating the new rule into the freely accessible simulation software NEST, which is used by numerous scientists worldwide.
These findings are also of relevance for the Human Brain Project. Neuroscientists, medical scientists, computer scientists, physicists, and mathematicians in Europe are working hand in hand to simulate the entire human brain on high-performance computers of the next generation in order to better understand how it functions. “Due to the complex synaptic circuitry in the human brain, it’s not plausible that its fault tolerance and flexibility are achieved based on static connection rules. Models are therefore required for a self-organization process,” says Prof. Markus Diesmann from Jülich’s Institute of Neuroscience and Medicine, who is involved in the project. He heads Computational and Systems Neuroscience (INM-6), a subinstitute working at the interface between neuroscientific research and simulation technology.
What does it mean to be a civilized person? A civilized nation? How are these notions changing over time? And from one country to another? In the recently concluded project Civility, Virtue and Emotions in Europe and Asia, researchers from several different countries and disciplines have studied these questions. One of the initiators is Professor Helge Jordheim, Academic Director for the inter-faculty research programme KULTRANS.
Jordheim and his colleagues have studied what was considered to be civilized behaviour in Europe and Asia in the late 19th and early 20th centuries.
– Western identity and mores were formed by the encounter with non-Western cultures, Jordheim states.
The period studied by the researchers was one characterized by imperialism. In light of this, the relationship between “the West and the rest” is particularly interesting, Jordheim claims.
– In Western Europe, the prevailing notion was “civilization, that’s us”. Even in Asia, the idea that standards were defined by the West tended to prevail. Implicitly, the objective was: how can we catch up with the West?
A boost in self-confidence
At the same time, Jordheim emphasizes, there was a clear sense in Asia that the West should not simply be imitated. The Asian countries were concerned with “finding their own path”.
– A challenge for the entire project has consisted in avoiding the pitfall of thinking that all influence emanated from Western Europe. It’s not as simple as that. For example, we can see that there was a widespread exchange of ideas between the Ottoman Empire and the Arabic and Persian cultures, which also had an impact on the Urdu-speaking population of India. Thus, the influence appears to be far less homogenous than we have previously assumed, Jordheim says.
He believes that the Russo-Japanese War in the early 20th century was a key event for the Asian civilizing process.
– This was the first time that Asia defeated the West. It resulted in a real boost in self-confidence, and had an impact on the kinds of ideas that were nurtured, Jordheim says.
Similarly, the researchers have been interested in how the civilizing influence to some extent also ran in the opposite direction – from East to West.
The written word shapes our thoughts
Jordheim and his colleagues have mainly studied different types of texts from the countries included in the project.
– We have looked at a lot of self-help literature, such as “how to become a better person” and literature on “etiquette”. We have also studied political documents that present ideas of how the nation should be formed. In addition, we have studied texts from encyclopaedias, which help explain concepts.
Jordheim points out that such texts help mould the views of the population in a particular manner.
– They help inculcate and foster certain emotions, while suppressing others. To render a population more civilized, changes must occur at the level of the individual, he says.
Scandinavia: A natural paradox
Jordheim’s own research has focused on the concept of civilization in Scandinavia – a region which is rarely included when processes of civilization are being studied.
– Scandinavia stands apart because civilization is relatively unimportant as a notion. Here, the concept of dannelse (formation) is used to express much the same idea, Jordheim states.
Scandinavia is different also in other respects, mainly due to the population’s relationship to nature.
– The entire idea of civilization involves abandoning nature and the natural state. This is problematic in Scandinavia, and especially in Norway, since so much of our identity is associated with nature, he says.
Jordheim argues that much of what is traditionally regarded as a development in the right direction in other countries will not be perceived in the same way in Scandinavia.
– For example, migration to the cities will not necessarily be regarded as a sign of progress. The Scandinavian discourse fosters ideas and notions of nature as the ideal – not civilization as such. Viewed thus, the idea of civilization is paradoxical, he says.
Fertile ground for Social Darwinism
Jordheim believes that this notion of nature is also reflected in the kinds of emotions that are regarded as “civilized” in Scandinavia.
– In many other countries, the civilizing process entails that emotions must be curbed. This is not necessarily so in Scandinavia. Emotions that are presumed to be natural, such as courage, anger and maternal instincts, are also regarded as desirable, he says.
In his research, Jordheim has been concerned with how these notions of civilization, nature and emotions helped Social Darwinism gain a firm foothold in Scandinavia, especially in Norway.
– Ideas that were explained on the basis of nature had already gained widespread acceptance. With Social Darwinism, “civilized” ideas could be integrated while nature maintained its position, Jordheim claims.
Private and global
Jordheim says that what makes the project particularly interesting is its wide scope – from the private realm to global matters.
– On the one hand, this is about how you behave within your own home. How do you behave towards your wife and children? What is the ideal? At the same time, this is about the nation and the world order.
– It is important for all countries to appear civilized. How civilized a country is considered to be determines its position in the “global pecking order”, he says.
Biologists of the University of Zurich have developed a method to visualize the activity of genes in single cells. The method is so efficient that, for the first time, a thousand genes can be studied in parallel in ten thousand single human cells. Applications lie in fields of basic research and medical diagnostics. The new method shows that the activity of genes, and the spatial organization of the resulting transcript molecules, strongly vary between single cells.
Whenever cells activate a gene, they produce gene-specific transcript molecules, which make the function of the gene available to the cell. The measurement of gene activity is a routine activity in medical diagnostics, especially in cancer medicine. Today’s technologies determine the activity of genes by measuring the amount of transcript molecules. However, these technologies can neither measure the amount of transcript molecules of one thousand genes in ten thousand single cells, nor the spatial organization of transcript molecules within a single cell. The fully automated procedure, developed by biologists of the University of Zurich under the supervision of Prof. Lucas Pelkmans, allows, for the first time, a parallel measurement of the amount and spatial organization of single transcript molecules in ten thousand single cells. The results, which were recently published in the scientific journal Nature Methods, provide completely novel insights into the variability of gene activity of single cells.
Robots, a fluorescence microscope and a supercomputer
The method developed by Pelkmans’ PhD students Nico Battich and Thomas Stoeger is based on the combination of robots, an automated fluorescence microscope and a supercomputer. “When genes become active, specific transcript molecules are produced. We can stain them with the help of a robot”, explains Stoeger. Subsequently, fluorescence microscope images of brightly glowing transcript molecules are generated. These images are then analyzed on the Brutus supercomputer at ETH Zurich. With this method, one thousand human genes can be studied in ten thousand single cells. According to Pelkmans, the advantages of this method are the high number of single cells and the possibility to study, for the first time, the spatial organization of the transcript molecules of many genes.
New insights into the spatial organization of transcript molecules
The analysis of the new data shows that individual cells differ markedly in the activity of their genes. While the scientists had suspected a high variability in the amount of transcript molecules, they were surprised to discover a strong variability in the spatial organization of transcript molecules within single cells and between multiple single cells. The transcript molecules adopted distinctive patterns.
“We realized that genes with a similar function also have a similar variability in the transcript patterns,” explains Battich. “This similarity exceeds the variability in the amount of transcript molecules, and allows us to predict the function of individual genes.” The scientists suspect that transcript patterns are a countermeasure against the variability in the amount of transcript molecules. Thus, such patterns would be responsible for the robustness of processes within a cell.
The importance of these new insights was summarized by Pelkmans: “Our method will be of importance to basic research and the understanding of cancer tumors because it allows us to map the activity of genes within single tumor cells.”
Full bibliographic information: Nico Battich, Thomas Stoeger, Lucas Pelkmans. “Image-based transcriptomics in thousands of single human cells at single-molecule resolution.” Nature Methods. DOI: 10.1038/nmeth.2657
Soil micronutrients are essential to plants, animals and humans. Lack of these elements can retard growth and thus cause severe problems. The soil micronutrients in Africa are now being mapped extensively for the first time in order to improve local food security. The study is done within the research and development programme FoodAfrica, coordinated by MTT Agrifood Research Finland.
There is very little and scattered information on soil micronutrients and their deficiencies in African agricultural lands, even though notable deficiencies are likely to be found. Deficiencies of some elements, such as iron, are known to cause severe problems in humans, such as stunted growth in children, whereas lack of zinc and some other elements may retard plant growth, cause poor yields and reduce the effect of other nutrients.
– It is important to carry out surveys on soil micronutrients so that we can improve their concentrations through fertilizers or catch crops, for example. We can also improve the well-being of humans and animals through a more diversified diet, says Riikka Keskinen, Senior Research Scientist at MTT Agrifood Research Finland.
Mapping of micronutrients supports local food production
MTT Agrifood Research Finland and World Agroforestry Centre ICRAF are now mapping the soil micronutrients in Sub-Saharan Africa within the FoodAfrica programme. The goal is to get a common picture of the micronutrient status and distribution in the region.
– Once we get a common view on the micronutrient status, we may continue to work with the areas where deficiencies are most likely to be found. In those areas we can find solutions to the problems by doing more thorough soil testing and field experiments. We can also demonstrate different kinds of solutions to the local farmers and authorities, says Professor Martti Esala from MTT Agrifood Research Finland.
The soil micronutrient mapping is linked to the African Soil Information Service (AfSIS), an ICRAF-supported project that is developing continent-wide digital soil maps for sub-Saharan Africa using new types of soil analysis and statistical methods.
– We are working on new, low-cost and fast methods of soil and plant micronutrient analysis that use only light instead of chemicals, which will make it easier to produce this kind of information even for smallholders in Africa, says Dr. Keith Shepherd from ICRAF.
Small cubes with no exterior moving parts can propel themselves forward, jump on top of each other, and snap together to form arbitrary shapes.
Video available: http://www.youtube.com/watch?v=mOqjFa4RskA
CAMBRIDGE, MA -- In 2011, when an MIT senior named John Romanishin proposed a new design for modular robots to his robotics professor, Daniela Rus, she said, “That can’t be done.”
Two years later, Rus showed her colleague Hod Lipson, a robotics researcher at Cornell University, a video of prototype robots, based on Romanishin’s design, in action. “That can’t be done,” Lipson said.
In November, Romanishin — now a research scientist in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) — Rus, and postdoc Kyle Gilpin will establish once and for all that it can be done, when they present a paper describing their new robots at the IEEE/RSJ International Conference on Intelligent Robots and Systems.
Known as M-Blocks, the robots are cubes with no external moving parts. Nonetheless, they’re able to climb over and around one another, leap through the air, roll across the ground, and even move while suspended upside down from metallic surfaces.
Inside each M-Block is a flywheel that can reach speeds of 20,000 revolutions per minute; when the flywheel is braked, it imparts its angular momentum to the cube. On each edge of an M-Block, and on every face, are cleverly arranged permanent magnets that allow any two cubes to attach to each other.
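The article gives only the flywheel speed; with an assumed flywheel mass and radius, the stored angular momentum and the torque delivered by a quick brake can be estimated back-of-the-envelope. The mass, radius, and braking time below are illustrative assumptions, not published specifications:

```python
import math

RPM = 20000                       # flywheel speed, from the article
omega = RPM * 2 * math.pi / 60    # angular velocity in rad/s

# Assumed values for illustration only (not published specs):
m = 0.02                          # flywheel mass, kg
r = 0.015                         # flywheel radius, m
I = 0.5 * m * r**2                # moment of inertia of a solid disc
L = I * omega                     # stored angular momentum, kg·m²/s

brake_time = 0.01                 # assumed braking time, s
torque = L / brake_time           # average torque imparted to the cube, N·m
```

Even with these modest assumed dimensions, braking in ten milliseconds delivers roughly half a newton-metre — enough impulse to pivot a small, light cube about its edge.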
“It’s one of these things that the [modular-robotics] community has been trying to do for a long time,” says Rus, a professor of electrical engineering and computer science and director of CSAIL. “We just needed a creative insight and somebody who was passionate enough to keep coming at it — despite being discouraged.”
As Rus explains, researchers studying reconfigurable robots have long used an abstraction called the sliding-cube model. In this model, if two cubes are face to face, one of them can slide up the side of the other and, without changing orientation, slide across its top.
The sliding-cube model simplifies the development of self-assembly algorithms, but the robots that implement them tend to be much more complex devices. Rus’ group, for instance, previously developed a modular robot called the Molecule, which consisted of two cubes connected by an angled bar and had 18 separate motors. “We were quite proud of it at the time,” Rus says.
According to Gilpin, existing modular-robot systems are also “statically stable,” meaning that “you can pause the motion at any point, and they’ll stay where they are.” What enabled the MIT researchers to drastically simplify their robots’ design was giving up on the principle of static stability.
“There’s a point in time when the cube is essentially flying through the air,” Gilpin says. “And you are depending on the magnets to bring it into alignment when it lands. That’s something that’s totally unique to this system.”
That’s also what made Rus skeptical about Romanishin’s initial proposal. “I asked him to build a prototype,” Rus says. “Then I said, ‘OK, maybe I was wrong.’”
Sticking the landing
To compensate for its static instability, the researchers’ robot relies on some clever engineering. On each edge of a cube are two cylindrical magnets, mounted like rolling pins. When two cubes approach each other, the magnets naturally rotate, so that north poles align with south, and vice versa. Any face of any cube can thus attach to any face of any other.
The cubes’ edges also have a slight bevel, so when two cubes are face to face, there’s a slight gap between their magnets. When one cube begins to flip on top of another, the bevels, and thus the magnets, touch. The connection between the cubes becomes much stronger, anchoring the pivot. On each face of a cube are four more pairs of smaller magnets, arranged symmetrically, which help snap a moving cube into place when it lands on top of another.
As with any modular-robot system, the hope is that the modules can be miniaturized: the ultimate aim of most such research is hordes of swarming microbots that can self-assemble, like the “liquid steel” androids in the movie “Terminator II.” And the simplicity of the cubes’ design makes miniaturization promising.
But the researchers believe that a more refined version of their system could prove useful even at something like its current scale. Swarms of mobile cubes could temporarily repair bridges or buildings during emergencies, or raise and reconfigure scaffolding for building projects. They could assemble into different types of furniture or heavy equipment as needed. And they could swarm into environments hostile or inaccessible to humans, diagnose problems, and reorganize themselves to provide solutions.
Strength in diversity
The researchers also imagine that among the mobile cubes could be special-purpose cubes, containing cameras, or lights, or battery packs, or other equipment, which the mobile cubes could transport. “In the vast majority of other modular systems, an individual module cannot move on its own,” Gilpin says. “If you drop one of these along the way, or something goes wrong, it can rejoin the group, no problem.”
In ongoing work, the MIT researchers are building an army of 100 cubes, each of which can move in any direction, and designing algorithms to guide them. “We want hundreds of cubes, scattered randomly across the floor, to be able to identify each other, coalesce, and autonomously transform into a chair, or a ladder, or a desk, on demand,” Romanishin says.
Cloud-chamber experiments show that clouds on Mars form in much more humid conditions than clouds on Earth.
CAMBRIDGE, Mass-- At first glance, Mars’ clouds might easily be mistaken for those on Earth: Images of the Martian sky, taken by NASA’s Opportunity rover, depict gauzy, high-altitude wisps, similar to our cirrus clouds. Given what scientists know about the Red Planet’s atmosphere, these clouds likely consist of either carbon dioxide or water-based ice crystals. But it’s difficult to know the precise conditions that give rise to such clouds without sampling directly from a Martian cloud.
Researchers at MIT have now done the next-best thing: They’ve recreated Mars-like conditions within a three-story-tall cloud chamber in Germany, adjusting the chamber’s temperature and relative humidity to match conditions on Mars — essentially forming Martian clouds on Earth.
While the researchers were able to create clouds at the frigid temperatures typically found on Mars, they discovered that cloud formation in such conditions required adjusting the chamber’s relative humidity to 190 percent — far greater than cloud formation requires on Earth. The finding should help improve conventional models of the Martian atmosphere, many of which assume that Martian clouds require humidity levels similar to those found on Earth.
“A lot of atmospheric models for Mars are very simple,” says Dan Cziczo, the Victor P. Starr Associate Professor of Atmospheric Chemistry at MIT. “They have to make gross assumptions about how clouds form: As soon as it hits 100 percent humidity, boom, you get a cloud to form. But we found you need more to kick-start the process.”
Cziczo says the group’s experimental results will help to improve Martian climate models, as well as scientists’ understanding of how the planet transports water through the atmosphere. He and his colleagues have reported their findings in Journal of Geophysical Research: Planets.
Seeding Martian clouds
The team conducted most of the study’s experiments during the summer of 2012 in Karlsruhe, Germany, at the Aerosol Interaction and Dynamics in the Atmosphere (AIDA) facility — a former nuclear reactor that has since been converted into the world’s largest cloud chamber.
The facility was originally designed to study atmospheric conditions on Earth. But Cziczo realized that with a little fine-tuning, the chamber could be adapted to simulate conditions on Mars. To do this, the team first pumped all the oxygen out of the chamber, and instead filled it with inert nitrogen or carbon dioxide — the most common components of the Martian atmosphere. They then created a dust storm, pumping in fine particles similar in size and composition to the mineral dust found on Mars. Much like on Earth, these particles act as cloud seeds around which water vapor can adhere to form cloud particles.
After “seeding” the chamber, the researchers adjusted the temperature, first setting it to the coldest temperatures at which clouds form on Earth (around minus 81 degrees Fahrenheit). Throughout the experiment, they cranked the temperature progressively lower, eventually stopping at the chamber’s lowest setting, around minus 120 degrees Fahrenheit — “a warm summer’s day on Mars,” Cziczo says.
By adjusting the chamber’s relative humidity under each temperature condition, the researchers were able to create clouds under warmer, Earth-like temperatures, at expected relative humidities. These observations gave the researchers confidence in their experimental setup as they attempted to grow clouds at temperatures that approached Mars-like conditions.
Dialing the temperature down
Over a week, the group created 10 clouds, with each cloud taking about 15 minutes to form. The chamber is completely insulated, so the researchers used a system of lasers, which beam across the chamber, to detect cloud formation. Any clouds that form scatter laser light; this scattering is then detected and recorded by computers, which display the results — the size, number, and composition of cloud particles — for scientists outside the chamber.
By analyzing this data over the following six months, the researchers found that clouds that grew at the lowest temperatures required extremely high relative humidity in order for water vapor to form an ice crystal around a dust particle. Cziczo says it’s unclear why Martian clouds need such humid conditions to take shape, but hopes to investigate the question further.
Toward that end, the group plans to return to Germany next fall, when the chamber will have undergone renovations, enabling it to perform cloud experiments at even lower temperatures — conditions that may more closely mimic the icy atmosphere on Mars.
“If we want to understand where water goes and how it’s transported through the atmosphere on Mars, we have to understand cloud formation for that planet,” Cziczo says. “Hopefully this will move us in the right direction.”
Device could open up new areas of research on materials and biological samples at tiny scales.
CAMBRIDGE, MA -- Researchers at MIT, working with partners at NASA, have developed a new concept for a microscope that would use neutrons — subatomic particles with no electrical charge — instead of beams of light or electrons to create high-resolution images.
Among other features, neutron-based instruments have the ability to probe inside metal objects — such as fuel cells, batteries, and engines, even when in use — to learn details of their internal structure. Neutron instruments are also uniquely sensitive to magnetic properties and to lighter elements that are important in biological materials.
The new concept has been outlined in a series of research papers this year, including one published this week in Nature Communications by MIT postdoc Dazhi Liu, research scientist Boris Khaykovich, professor David Moncton, and four others.
Moncton, an adjunct professor of physics and director of MIT’s Nuclear Reactor Laboratory, says that Khaykovich first proposed adapting a 60-year-old mirror-based concept for focusing X-rays to the challenge of building a high-performing neutron microscope. Until now, most neutron instruments have been akin to pinhole cameras: crude imaging systems that simply let light through a tiny opening. Without efficient optical components, such devices produce weak images with poor resolution.
Beyond the pinhole
“For neutrons, there have been no high-quality focusing devices,” Moncton says. “Essentially all of the neutron instruments developed over a half-century are effectively pinhole cameras.” But with this new advance, he says, “We are turning the field of neutron imaging from the era of pinhole cameras to an era of genuine optics.”
“The new mirror device acts like the image-forming lens of an optical microscope,” Liu adds.
Because neutrons interact only minimally with matter, it’s difficult to focus beams of them to create a telescope or microscope. But a basic concept was proposed, for X-rays, by Hans Wolter in 1952 and later developed, under the auspices of NASA, for telescopes such as the orbiting Chandra X-ray Observatory (which was designed and is managed by scientists at MIT). Neutron beams interact weakly, much like X-rays, and can be focused by a similar optical system.
It’s well known that light can be reflected by normally nonreflective surfaces, so long as it strikes that surface at a shallow angle; this is the basic physics of a desert mirage. Using the same principle, mirrors with certain coatings can reflect neutrons at shallow angles.
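How shallow those angles are can be estimated from standard neutron optics: total reflection occurs below a critical grazing angle θ_c ≈ λ·√(SLD/π), where SLD is the mirror material’s scattering length density. The nickel value used below is a standard literature figure and an assumption of this sketch, not a number from the article:

```python
import math

# Scattering length density of natural nickel, in 1/Å²
# (assumed textbook value; actual mirror coatings vary).
SLD_NI = 9.4e-6

def critical_angle_deg(wavelength_angstrom, sld=SLD_NI):
    """Critical grazing angle (degrees) below which neutrons of the given
    wavelength are totally reflected from a flat mirror surface."""
    return math.degrees(wavelength_angstrom * math.sqrt(sld / math.pi))
```

For 5 Å cold neutrons on nickel this works out to about half a degree, which is why the instrument nests many reflective cylinders: each intercepts only neutrons arriving at a very shallow grazing angle.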
A sharper, smaller device
The actual instrument uses several reflective cylinders nested one inside the other, so as to increase the surface area available for reflection. The resulting device could improve the performance of existing neutron imaging systems by a factor of about 50, the researchers say — allowing for much sharper images, much smaller instruments, or both.
The team initially designed and optimized the concept digitally, then fabricated a small test instrument as a proof-of-principle and demonstrated its performance using a neutron beam facility at MIT’s Nuclear Reactor Laboratory. Later work, requiring a different spectrum of neutron energies, was carried out at Oak Ridge National Laboratory (ORNL) and at the National Institute of Standards and Technology (NIST).
Such a new instrument could be used to observe and characterize many kinds of materials and biological samples; other nonimaging methods that exploit the scattering of neutrons might benefit as well. Because the neutron beams are relatively low-energy, they are “a much more sensitive scattering probe,” Moncton says, for phenomena such as “how atoms or magnetic moments move in a material.”
The researchers next plan to build an optimized neutron-microscopy system in collaboration with NIST, which already has a major neutron-beam research facility. This new instrument is expected to cost a few million dollars.
Moncton points out that a recent major advance in the field was the construction of a $1.4 billion facility that provides a tenfold increase in neutron flux. “Given the cost of producing the neutron beams, it is essential to equip them with the most efficient optics possible,” he says.
In addition to the researchers at MIT, the team included Mikhail Gubarev and Brian Ramsey of NASA’s Marshall Space Flight Center and Lee Robertson and Lowell Crow of ORNL. The work was supported by the U.S. Department of Energy.
Drilling cores from Switzerland have revealed the oldest known fossils of direct ancestors of flowering plants. These beautifully preserved 240-million-year-old pollen grains are evidence that flowering plants evolved 100 million years earlier than previously thought, according to researchers from the University of Zurich.
Flowering plants evolved from extinct plants related to conifers, ginkgos, cycads, and seed ferns. The oldest known fossils from flowering plants are pollen grains. These are small, robust and numerous and therefore fossilize more easily than leaves and flowers. An uninterrupted sequence of fossilized pollen from flowers begins in the Early Cretaceous, approximately 140 million years ago, and it is generally assumed that flowering plants first evolved around that time. But the present study documents flowering plant-like pollen that is 100 million years older, implying that flowering plants may have originated in the Early Triassic (between 252 and 247 million years ago) or even earlier.
Many studies have tried to estimate the age of flowering plants from molecular data, but so far no consensus has been reached. Depending on dataset and method, these estimates range from the Triassic to the Cretaceous. Molecular estimates typically need to be “anchored” in fossil evidence, but extremely old fossils were not available for flowering plants. “That is why the present finding of flower-like pollen from the Triassic is significant”, says Prof. Peter Hochuli, University of Zurich.
Peter Hochuli and Susanne Feist-Burkhardt from the Paleontological Institute and Museum, University of Zürich, studied two drilling cores from Weiach and Leuggern in northern Switzerland, and found pollen grains that resemble fossil pollen from the earliest known flowering plants. Using confocal laser scanning microscopy, they obtained high-resolution three-dimensional images of six different types of pollen.
In a previous study from 2004, Hochuli and Feist-Burkhardt documented different, but clearly related flowering-plant-like pollen from the Middle Triassic in cores from the Barents Sea, south of Spitsbergen. The samples from the present study were found 3,000 km south of the previous site. “We believe that even highly cautious scientists will now be convinced that flowering plants evolved long before the Cretaceous”, says Hochuli.
What might these primitive flowering plants have looked like? In the Middle Triassic, both the Barents Sea and Switzerland lay in the subtropics, but the area of Switzerland was much drier than the region of the Barents Sea. This implies that these plants occupied a broad ecological range. The pollen’s structure suggests that the plants were pollinated by insects: most likely beetles, as bees would not evolve for another 100 million years.
Peter A. Hochuli and Susanne Feist-Burkhardt. Angiosperm-like pollen and Afropollis from the Middle Triassic (Anisian) of the Germanic Basin (Northern Switzerland). Journal: Frontiers in Plant Science. DOI: 10.3389/fpls.2013.00344
The elongated body of some present-day fish evolved in different ways. Paleontologists from the University of Zurich have now discovered a new mode of body elongation based on an exceptionally preserved fossil fish from southern Ticino. In Saurichthys curionii, an early ray-finned fish, the vertebral arches of the axial skeleton doubled, resulting in the elongation of its body and giving it a needlefish-like appearance. The 240-million-year-old fossil find from Switzerland also revealed that this primitive fish was not as flexible as today’s eels, nor could it swim as fast or untiringly as a tuna.
Snake and eel bodies are elongated, slender and flexible in all three dimensions. This striking body plan has evolved many times independently in the more than 500 million years of vertebrate history. Based on the current state of knowledge, the extreme elongation of the body axis occurred in one of two ways: either through the elongation of the individual vertebrae of the vertebral column, which thus became longer, or through the development of additional vertebrae and associated muscle segments.
Long body thanks to doubling of the vertebral arches
A team of paleontologists from the University of Zurich headed by Professor Marcelo Sánchez-Villagra now reveals that a third, previously unknown mechanism of axial skeleton elongation characterized the early evolution of fishes, as shown by an exceptionally preserved form. Unlike other known fish with elongate bodies, the vertebral column of Saurichthys curionii does not have one vertebral arch per myomeric segment, but two, which is unique. This doubling lengthened the body and gave it an overall elongate appearance. “This evolutionary pattern for body elongation is new,” explains Erin Maxwell, a postdoc from Sánchez-Villagra’s group. “Previously, we only knew about an increase in the number of vertebrae and muscle segments or the elongation of the individual vertebrae.”
The fossils studied come from the Monte San Giorgio find in Ticino, which was declared a world heritage site by UNESCO in 2003. The researchers owe their findings to the fortunate circumstance that not only skeletal parts but also the tendons and tendon attachments surrounding the muscles of the primitive predatory fish had survived intact. Due to the shape and arrangement of the preserved tendons, the scientists are also able to draw conclusions as to the flexibility and swimming ability of the fossilized fish genus. According to Maxwell, Saurichthys curionii was certainly not as flexible as today’s eels and, unlike modern oceanic fishes such as tuna, was probably unable to swim for long distances at high speed. Based upon its appearance and lifestyle, the roughly half-meter-long fish is most comparable to the garfish or needlefish that exist today.
Erin E. Maxwell, Heinz Furrer, Marcelo R. Sánchez-Villagra. Exceptional fossil preservation demonstrates a new mode of axial skeleton elongation in early ray-finned fishes. Nature Communications, October 7, 2013. doi: 10.1038/ncomms3570
Paleoanthropologists from the University of Zurich have uncovered the intact skull of an early Homo individual in Dmanisi, Georgia. This find is forcing a change in perspective in the field of paleoanthropology: human species diversity two million years ago was much smaller than presumed thus far. However, diversity within «Homo erectus», the first global species of human, was as great as in humans today.
This is the best-preserved fossil find yet from the early era of our genus. The particularly interesting aspect is that it displays a combination of features that were unknown to us before the find. The skull, found in Dmanisi by anthropologists from the University of Zurich as part of a collaboration with colleagues in Georgia funded by the Swiss National Science Foundation, has the largest face, the most massively built jaw and teeth and the smallest brain within the Dmanisi group.
It is the fifth skull to be discovered in Dmanisi. Previously, four equally well-preserved hominid skulls as well as some skeletal parts had been found there. Taken as a whole, the finds show that the first representatives of the genus Homo began to expand from Africa through Eurasia as far back as 1.85 million years ago.
Diversity within a species instead of species diversity
Because the skull is completely intact, it can provide answers to various questions which up until now had offered broad scope for speculation. These relate to nothing less than the evolutionary beginnings of the genus «Homo» in Africa around two million years ago, at the start of the Ice Age, also referred to as the Pleistocene. Were there several specialized «Homo» species in Africa at the time, at least one of which was able to spread outside of Africa too? Or was there just one single species that was able to cope with a variety of ecosystems? Although the early Homo finds in Africa demonstrate large variation, it has not been possible to decide between these answers in the past. One reason for this relates to the fossils available, as Christoph Zollikofer, anthropologist at the University of Zurich, explains: «Most of these fossils represent single fragmentary finds from multiple points in space and geological time spanning at least 500,000 years. This ultimately makes it difficult to recognize variation among species in the African fossils as opposed to variation within species».
As many species as there are researchers
Marcia Ponce de León, who is also an anthropologist at the University of Zurich, points out another reason: paleoanthropologists often tacitly assumed that the fossil they had just found was representative of the species, i.e. that it aptly demonstrated the characteristics of the species. Statistically this is not very likely, she says, but nevertheless some researchers proposed up to five contemporary species of early «Homo» in Africa, including «Homo habilis», «Homo rudolfensis», «Homo ergaster» and «Homo erectus». Ponce de León sums up the problem as follows: «At present there are as many subdivisions between species as there are researchers examining this problem».
Tracking development of «Homo erectus» over one million years thanks to a change in perspective
Dmanisi now offers the key to the solution. According to Zollikofer, the reason why Skull 5 is so important is that it unites features that have been used previously as an argument for defining different African «species». In other words: «Had the braincase and the face of the Dmanisi sample been found as separate fossils, they very probably would have been attributed to two different species». Ponce de León adds: «It is also decisive that we have five well-preserved individuals in Dmanisi whom we know to have lived in the same place and at the same time». These unique circumstances of the find make it possible to compare variation in Dmanisi with variation in modern human and chimpanzee populations. Zollikofer summarizes the result of the statistical analyses as follows: «Firstly, the Dmanisi individuals all belong to a population of a single early Homo species. Secondly, the five Dmanisi individuals are conspicuously different from each other, but not more different than any five modern human individuals, or five chimpanzee individuals from a given population».
Diversity within a species is thus the rule rather than the exception. The present findings are supported by an additional study recently published in the journal PNAS. In that study, Ponce de León, Zollikofer and further colleagues show that differences in jaw morphology between the Dmanisi individuals are mostly due to differences in dental wear.
This shows the need for a change in perspective: the African fossils from around 1.8 million years ago likely belong to one and the same species, best described as «Homo erectus». This would suggest that «Homo erectus» evolved about 2 million years ago in Africa and soon expanded through Eurasia – via places such as Dmanisi – as far as China and Java, where it is first documented from about 1.2 million years ago. Comparing diversity patterns in Africa, Eurasia and East Asia provides clues on the population biology of this first global human species.
This makes «Homo erectus» the first «global player» in human evolution. Its redefinition now provides an opportunity to track this fossil human species over a time span of 1 million years.
David Lordkipanidze, Marcia S. Ponce de León, Ann Margvelashvili, Yoel Rak, G. Philip Rightmire, Abesalom Vekua, and Christoph P.E. Zollikofer. A complete skull from Dmanisi, Georgia, and the evolutionary biology of early Homo. Science. October 18, 2013. doi: 10.1126/science.1238484
Ann Margvelashvili, Christoph P. E. Zollikofer, David Lordkipanidze, Timo Peltomäki, Marcia S. Ponce de León. Tooth wear and dentoalveolar remodeling are key factors of morphological variation in the Dmanisi mandibles. Proceedings of the National Academy of Sciences of the United States of America (PNAS). September 2, 2013. doi: 10.1073/pnas.1316052110
Collaborative research with huge leverage
The new research findings on Dmanisi are based on a collaboration ongoing for many years between the Anthropological Institute at the University of Zurich and the Georgian National Museum in Tbilisi. The Dmanisi project is financed by SCOPES (Scientific co-operation between Eastern Europe and Switzerland), a research program co-funded by the Swiss National Science Foundation (SNSF) and the Swiss Agency for Development and Co-operation (SDC). This funding instrument operates with a comparatively modest budget, but has a major and positive impact on the research landscape in the participating countries.