More schools, more challenging assignments add up to higher IQ scores 03-31





More schooling — and the more mentally challenging problems tackled in those schools — may be the best explanation for the dramatic rise in IQ scores during the past century, often referred to as the Flynn Effect, according to a team of researchers. These findings also suggest that environment may have a stronger influence on intelligence than many genetic determinists once thought.
Researchers have struggled to explain why IQ scores for developed nations — and, now, developing nations — have increased so rapidly during the 20th century, said David Baker, professor of sociology and education, Penn State. Mean IQ test scores of American adults, for instance, have increased by about 25 points over the last 90 years.
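For a sense of scale, that works out to roughly three IQ points per decade; a quick back-of-the-envelope calculation (restating only the figures quoted above) looks like this:

```python
# Back-of-the-envelope rate implied by the figures quoted above.
gain_points = 25      # reported rise in mean adult IQ scores in the U.S.
span_years = 90       # period over which that rise occurred

points_per_decade = gain_points / span_years * 10
print(f"~{points_per_decade:.1f} IQ points per decade")   # ~2.8
```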
“There’ve been a lot of hypotheses put forward for the cause of the Flynn Effect, such as genetics and nutrition, but they generally fall flat,” said Baker. “It really begged the question of whether an environmental factor, or factors, could cause these gains in IQ scores.”
School enrollment in the United States reached almost 90 percent by 1960. However, the researchers, who report their findings in the current issue of Intelligence, suggest that it is not just increasing attendance but also the more challenging learning environment that lies behind the rise in IQ scores.
“If you look at a chart of the Flynn Effect over the 20th century in the United States, for example, you notice that the proportion of children and youth attending school and how long they attend lines up nicely with the gains in IQ scores,” said Baker. “As people went to school, what they did there likely had a profound influence on brain development and thinking skills, beyond just learning the three R’s. This is what our neurological and cognitive research shows.”
He added that, as a higher percentage of children from each new generation went to school over the century and attended for more years, IQ scores rose accordingly.
“Even after full enrollments were achieved in the U.S. by about the 1960s, school continued to intensify its influence on thinking,” said Baker.
While even basic schooling activities can shape brain development, over the past century, schools have moved from learning focused on memorization to lessons that require problem solving and abstract thinking skills, which are often considered functions of fluid intelligence, Baker said.
“Many like to think that schooling has become ‘dumbed down,’ but this is not true,” said Baker. “This misperception has tended to lead cognitive scientists away from considering the impact of schooling and its spread over time as a main social environment in neurological development.”
Just as more physical exercise can improve sports performance for athletes, these more challenging mental workouts in schools may be building up students’ mental muscles, he added, allowing them to perform better on certain types of problems that require flexible thinking and abstract problem solving, such as IQ tests.
“Certain kinds of activities — like solving problems, or reading — stimulate the parts of the brain that we know are responsible for fluid intelligence,” said Baker. “And these types of activities are done over and over in today’s schools, so that you would expect these students to have higher development than populations of people who had no access to schooling.”
Students must not only solve more challenging problems, they must use multiple strategies to find solutions, which adds to the mental workout in today’s schools, according to Baker.
The researchers conducted three studies, from neurological, cognitive and demographic perspectives, according to Baker.
He said that genetics alone could not explain the Flynn Effect. Natural selection happens too slowly to be the sole reason for rising IQ scores. This suggests that intelligence is a combination of both genetics and environment.
“The best neuroscience is now arguing that brains of mammals, including, of course, humans, develop in this heavy genetic-environmental dependent way, so it’s not an either-or situation,” said Baker. “There’s a high genetic component, just like there is for athletic ability, but the environment can enhance people’s abilities up to unknown genetic limits.”
In the first study, the researchers used functional Magnetic Resonance Imaging to measure brain activity in children solving certain math problems. They found that problems typical of today's schooling, such as mathematical problem solving, activated areas of the brain known as centers of fluid intelligence.
A field study was also conducted in farming communities in Peru where education has only recently become fully accessible. The survey showed that schooling was a significant influence on improved cognitive functioning.
To measure the challenge level of lessons, the researchers analyzed more than 28,000 pages of content in textbooks published from 1930 to 2000. They measured, for example, whether students were required to learn multiple strategies to find solutions or needed other mental skills to solve problems.

Milk could be good for your brain 03-31






New research conducted at the University of Kansas Medical Center has found a correlation between milk consumption and the levels of a naturally-occurring antioxidant called glutathione in the brain in older, healthy adults.
In-Young Choi, Ph.D., an associate professor of neurology at KU Medical Center, and Debra Sullivan, Ph.D., professor and chair of dietetics and nutrition at KU Medical Center, worked together on the project. Their research, which was published in the Feb. 3, 2015 edition of The American Journal of Clinical Nutrition, suggests a new way that drinking milk could benefit the body. “We have long thought of milk as being very important for your bones and very important for your muscles,” Sullivan said. “This study suggests that it could be important for your brain as well.”
Choi’s team asked the 60 participants in the study about their diets in the days leading up to brain scans, which they used to monitor levels of glutathione – a powerful antioxidant – in the brain.
The researchers found that participants who had indicated they had drunk milk recently had higher levels of glutathione in their brains. This is important, the researchers said, because glutathione could help stave off oxidative stress and the resulting damage caused by reactive chemical compounds produced during the normal metabolic process in the brain. Oxidative stress is known to be associated with a number of diseases and conditions, including Alzheimer’s disease and Parkinson’s disease, said Dr. Choi.
“You can basically think of this damage like the buildup of rust on your car,” Sullivan said. “If left alone for a long time, the buildup increases and it can cause damaging effects.”
Few Americans reach the recommended daily intake of three dairy servings per day, Sullivan said. The new study showed that the closer older adults came to those servings, the higher their levels of glutathione were.
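To illustrate the kind of dose-response association being described, a minimal analysis might resemble the sketch below. The data and variable names here are invented for illustration and are not taken from the KU study.

```python
import numpy as np
from scipy import stats

# Fabricated example: reported daily dairy servings vs. brain glutathione level.
rng = np.random.default_rng(42)
servings = rng.uniform(0, 3, size=60)                               # 0-3 servings/day
glutathione = 1.0 + 0.15 * servings + rng.normal(0, 0.2, size=60)   # arbitrary units

r, p = stats.pearsonr(servings, glutathione)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")
```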
“If we can find a way to fight this by instituting lifestyle changes including diet and exercise, it could have major implications for brain health,” Choi said.
An editorial in the same edition of The American Journal of Clinical Nutrition said the study presented “a provocative new benefit of the consumption of milk in older individuals,” and served as a starting point for further study of the issue.
“Antioxidants are a built-in defense system for our body to fight against this damage, and the levels of antioxidants in our brain can be regulated by various factors such as diseases and lifestyle choices,” Choi said.
For the study, researchers used high-tech brain scanning equipment housed at KU Medical Center’s Hoglund Brain Imaging Center. “Our equipment enables us to understand complex processes occurring that are related to health and disease,” Choi said. “The advanced magnetic resonance technology allowed us to be in a unique position to get the best pictures of what was going on in the brain.”
A randomized, controlled trial that seeks to determine the precise effect of milk consumption on the brain is still needed and is a logical next step to this study, the researchers said.

NASA's Curiosity Rover Finds Biologically Useful Nitrogen on Mars 03-31




A team using the Sample Analysis at Mars (SAM) instrument suite aboard NASA's Curiosity rover has made the first detection of nitrogen on the surface of Mars, released during heating of Martian sediments. The nitrogen was detected in the form of nitric oxide, and could be released from the breakdown of nitrates during heating. Nitrates are a class of molecules that contain nitrogen in a form that can be used by living organisms. The discovery adds to the evidence that ancient Mars was habitable for life.
Nitrogen is essential for all known forms of life, since it is used in the building blocks of larger molecules like DNA and RNA, which encode the genetic instructions for life, and proteins, which are used to build structures like hair and nails, and to speed up or regulate chemical reactions.
However, on Earth and Mars, atmospheric nitrogen is locked up as nitrogen gas (N2) – two atoms of nitrogen bound together so strongly that they do not react easily with other molecules. The nitrogen atoms have to be separated or "fixed" so they can participate in the chemical reactions needed for life. On Earth, certain organisms are capable of fixing atmospheric nitrogen and this process is critical for metabolic activity. However, smaller amounts of nitrogen are also fixed by energetic events like lightning strikes.
Nitrate (NO3) – a nitrogen atom bound to three oxygen atoms – is a source of fixed nitrogen. A nitrate molecule can join with various other atoms and molecules; this class of molecules is known as nitrates.
There is no evidence to suggest that the fixed nitrogen molecules found by the team were created by life. The surface of Mars is inhospitable for known forms of life. Instead, the team thinks the nitrates are ancient, and likely came from non-biological processes like meteorite impacts and lightning in Mars' distant past.
Features resembling dry riverbeds and the discovery of minerals that only form in the presence of liquid water suggest that Mars was more hospitable in the remote past. The Curiosity team has found evidence that other ingredients needed for life, such as liquid water and organic matter, were present on Mars at the Curiosity site in Gale Crater billions of years ago.
"Finding a biochemically accessible form of nitrogen is more support for the ancient Martian environment at Gale Crater being habitable," said Jennifer Stern of NASA's Goddard Space Flight Center in Greenbelt, Maryland. Stern is lead author of a paper on this research published online in the Proceedings of the National Academy of Science March 23.
The team found evidence for nitrates in scooped samples of windblown sand and dust at the "Rocknest" site, and in samples drilled from mudstone at the "John Klein" and "Cumberland" drill sites in Yellowknife Bay. Since the Rocknest sample is a combination of dust blown in from distant regions on Mars and more locally sourced materials, the nitrates are likely to be widespread across Mars, according to Stern. The results support the equivalent of up to 1,100 parts per million nitrates in the Martian soil from the drill sites. The team thinks the mudstone at Yellowknife Bay formed from sediment deposited at the bottom of a lake. Previously the rover team described the evidence for an ancient, habitable environment there: fresh water, key chemical elements required by life, such as carbon, and potential energy sources to drive metabolism in simple organisms.
The samples were first heated to release molecules bound to the Martian soil, then portions of the gases released were diverted to the SAM instruments for analysis. Various nitrogen-bearing compounds were identified with two instruments: a mass spectrometer, which uses electric fields to identify molecules by their signature masses, and a gas chromatograph, which separates molecules based on the time they take to travel through a small glass capillary tube -- certain molecules interact with the sides of the tube more readily and thus travel more slowly.
Along with other nitrogen compounds, the instruments detected nitric oxide (NO -- one atom of nitrogen bound to an oxygen atom) in samples from all three sites. Since nitrate is a nitrogen atom bound to three oxygen atoms, the team thinks most of the NO likely came from nitrate which decomposed as the samples were heated for analysis. Certain compounds in the SAM instrument can also release nitrogen as samples are heated; however, the amount of NO found is more than twice what could be produced by SAM in the most extreme and unrealistic scenario, according to Stern. This leads the team to think that nitrates really are present on Mars, and the abundance estimates reported have been adjusted to reflect this potential additional source.
"Scientists have long thought that nitrates would be produced on Mars from the energy released in meteorite impacts, and the amounts we found agree well with estimates from this process," said Stern.
The SAM instrument suite was built at NASA Goddard with significant elements provided by industry, university, and national and international NASA partners. NASA's Mars Science Laboratory Project is using Curiosity to assess ancient habitable environments and major changes in Martian environmental conditions. NASA's Jet Propulsion Laboratory in Pasadena, California, a division of Caltech, built the rover and manages the project for NASA's Science Mission Directorate in Washington. The NASA Mars Exploration Program and Goddard Space Flight Center provided support for the development and operation of SAM. SAM-Gas Chromatograph was supported by funds from the French Space Agency (CNES). Data from these SAM experiments are archived in the  Planetary Data System (pds.nasa.gov).

Understanding Quantum Mechanics: What is Electromagnetism? 03-31





On the face of it, both electricity and magnetism are remarkably similar to gravity. Just as two masses are attracted to each other by an inverse square force, the force between two charged objects or two poles of a magnet is also inverse square. The difference is that gravity is always attractive, whereas electricity and magnetism can be either attractive or repulsive. For example, two positive charges will push away from each other, while a positive and negative charge will pull toward each other.
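In symbols, the parallel is direct: Newton's law of gravitation and Coulomb's law share the same inverse-square form, differing mainly in that the gravitational force is always attractive while the electric force can take either sign:

```latex
F_{\text{grav}} = G\,\frac{m_1 m_2}{r^2}, \qquad F_{\text{elec}} = k\,\frac{q_1 q_2}{r^2}
```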
As with gravity, electricity and magnetism raised the question of action-at-a-distance. How does one charge “know” to be pushed or pulled by the other charge? How do they interact across the empty space between them? The answer to that question came from James Clerk Maxwell.
Maxwell’s breakthrough was to change the way we thought about electromagnetic forces. His idea was that each charge must reach out to other charges with some kind of energy. That is, a charge is surrounded by a field of electricity, a field that other charges can detect. Charges possess electric fields, and charges interact with the electric fields of other charges. The same must be true of magnets. Magnets possess magnetic fields, and interact with magnetic fields. Maxwell’s model was not just a description of the force between charges and magnets, but also a description of the electric and magnetic fields themselves. With that change of view, Maxwell found the connection between electricity and magnetism. They were connected by their fields.
A moving electric field creates a magnetic field, and a moving magnetic field creates an electric field. Not only are the two connected, but one type of field can create the other. Maxwell had created a single, unified description of electricity and magnetism. He had united two different forces into a single unified force, which we now call electromagnetism.
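That mutual creation is captured by two of Maxwell's equations, the Faraday and Ampère-Maxwell laws, in which a changing magnetic field sources an electric field and a changing electric field (together with any current) sources a magnetic field:

```latex
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
```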
Maxwell’s theory not only revolutionized physics, it gave astrophysics the tools to finally understand some of the complex behavior of interstellar space. By the mid-1900s Maxwell’s equations were combined with the Navier-Stokes equations describing fluids to create magnetohydrodynamics (MHD). Using MHD we could finally begin to model the behavior of plasma within magnetic fields, which is central to our understanding of everything from the Sun to the formation of stars and planets. As our computational powers grew, we were able to create simulations of protostars and young planets.
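A central result of that merger is the induction equation of MHD, which describes how a magnetic field is carried along and diffused by a conducting fluid such as a plasma; here \mathbf{v} is the fluid velocity and \eta the magnetic diffusivity (written for constant \eta):

```latex
\frac{\partial \mathbf{B}}{\partial t} = \nabla \times (\mathbf{v} \times \mathbf{B}) + \eta \nabla^{2} \mathbf{B}
```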
Although there are still many unanswered questions, we now know that the dance of plasma and electromagnetism plays a crucial role in the formation of stars and planets.
While Maxwell’s electromagnetism is an incredibly powerful theory, it is a classical model just like Newton’s gravity and general relativity. But unlike gravity, electromagnetism could be combined with quantum theory to create a fully quantum model known as quantum electrodynamics (QED).
A central idea of quantum theory is a duality between particle-like and wave-like (or field-like) behavior. Just as electrons and protons can interact as fields, the electromagnetic field can interact as particle-like quanta we call photons. In QED, charges and electromagnetic fields are described as interactions of quanta. This is most famously done through Richard Feynman’s figure-based approach now known as Feynman diagrams.
Feynman diagrams are often misunderstood as depictions of what is actually happening when charges interact: for example, two electrons approach each other, exchange a photon, and then move away from each other; or virtual particles pop in and out of existence in real time. While the diagrams are easy to read as particle interactions, the interactions are still quanta, and still subject to quantum theory. What the diagrams are actually used for in QED is to calculate all the possible ways that charges could interact through the electromagnetic field, in order to determine the probability of a certain outcome. Treating all these possibilities as happening in real time is like arguing that five apples on a table become real one at a time as you count them.
QED has become the most accurate physical model we’ve devised so far, but this theoretical power comes at the cost of losing the intuitive concept of a force.
Feynman’s interactions can be used to calculate the force between charges, just as Einstein’s spacetime curvature can be used to calculate the force between masses. But QED also allows for interactions that aren’t forces. An electron can emit a photon in order to change its direction, and an electron and positron can interact to produce a pair of photons. In QED matter can become energy and energy can become matter.
What started as a simple force has become a fairy dance of charge and light. Through this dance we left the classical world and moved forward in search of the strong and the weak.

How the brain can distinguish good from bad smells 04-03





Scientists from Max Planck Institute for Chemical Ecology in Jena, Germany, have found that in fruit flies, the quality and intensity of odors can be mapped in the so-called lateral horn. They have created a spatial map of this part of the olfactory processing system in the fly brain and showed that the lateral horn can be segregated into three activity domains, each of which represents an odor category.
Whether an odor is pleasant or disgusting to an organism is not just a matter of taste. Often, an organism's survival depends on its ability to make just such a discrimination, because odors can provide important information about food sources, oviposition sites or suitable mates. However, odor sources can also be signs of lethal hazards. Scientists from the BMBF Research Group Olfactory Coding at the Max Planck Institute for Chemical Ecology in Jena, Germany, have now found that in fruit flies, the quality and intensity of odors can be mapped in the so-called lateral horn.
They have created a spatial map of this part of the olfactory processing system in the fly brain and showed that the lateral horn can be segregated into three activity domains, each of which represents an odor category. The categories are good versus bad, as well as weak versus strong smells. These categorizations have a direct impact on the behavior of the flies, suggesting that the function of the lateral horn is similar to that of the amygdala in the brains of vertebrates. The amygdala plays a crucial role in the evaluation of sensory impressions and dangers, and the lateral horn may play a similar role. (eLife, December 2014)
In order to survive, organisms must be able to sense information in their environment and to adapt their behavior accordingly. Animals use their senses, such as vision and olfaction, to perceive visual cues or odors in their surroundings and to process and evaluate the information that is sent via these senses to their brains. They must be able to tell good from bad odors. Good odors are important signals when animals search for food or a mating partner. Female insects also use olfactory signals to select a good oviposition place. Bad smells, on the other hand, can signal danger, for example, rotten and toxic food.
Representations of odors in the fly brain can be studied by using functional imaging techniques. Interestingly, an attractive odor activates a different region in the lateral horn than the region activated by a repulsive odor. These regions are the same in every fly and therefore genetically determined. Credit: Silke Sachse, Max Planck Institute for Chemical Ecology
Modern functional imaging methods show that these sensory perceptions cause certain response patterns in the brain: Depending on the processed information, specific brain areas are activated. If an odor is rated as pleasant or disgusting, this classification method is called "hedonic valence." Studies of fruit flies revealed that odor features which could be characterized according to the scales of hedonic valence and odor intensity excited activity in a higher region of the brain, namely, the lateral horn. Depending on whether an odor was categorized as good or bad, strong or weak, brain activity could be made visible in spatially segregated regions of the lateral horn.
"We were very surprised to find that the lateral horn, which is a brain region as big as the antennal lobe (i.e. the olfactory bulb of insects), can be segregated into only three activity domains, while the antennal lobe consists of about 50 functional units," Silke Sachse, head of the BMBF Research Group, summarized. "Our results show that the higher brain is representing categories of odors, i.e. good versus bad odors, rather than the identity of an odor, which is represented at a lower processing stage such as the antenna and the antennal lobe."
Like many other sensory networks, the olfactory circuit of the fly contains spatially distinct pathways to the higher brain consisting of excitatory and inhibitory projection neurons. Projection neurons are nerve cells that transmit sensory signals to other regions of the nervous system. Notably, the inhibitory projections, which were examined in this study, convey olfactory information from the antennal lobe, the first processing center, to the lateral horn exclusively and bypass the mushroom body, which is the center for learning and memory. The inhibitory projection neurons can be subdivided into two morphological groups: the first subset processes information about whether an odor is attractive or repulsive, and the second subset processes information about the intensity of an odor. To test the functionality of these neurons, the scientists worked with flies in which these neurons had been silenced.


"Interestingly, we observed that the flies did not show attraction to any odor anymore but suddenly avoided highly attractive odors such as e.g. balsamic vinegar. We therefore concluded that the inhibitory projection neurons mediate attraction to odors and therefore enable a fly to find its food source or oviposition site," Silke Sachse explains. Moreover, the scientists were able to identify higher-order neurons in the lateral horn, that is, neurons that exclusively represent repulsive odors. These neurons communicate with inhibitory projection neurons via synapses.
The researchers believe that the function of the lateral horn in fruit flies can be compared to that of the amygdala − two almond-shaped nuclei − in the brain of vertebrates. In humans, the amygdala plays a primary role in the emotional evaluation of situations and the assessment of risks. If the amygdala is damaged, humans fail to show fear or aggression. However, lesions in the amygdala also prevent vital flight or defense reactions from being triggered. The scientists hypothesize that damage to the lateral horn may have similar effects on fruit flies. However, this assumption is so far speculative because the lateral horn could not be selectively inactivated.
In their study, the Max Planck researchers identified the lateral horn as the processing center for odor information that triggers innate odor-guided behavior. Good and bad odors are spatially decoded in different regions of the lateral horn. Further experiments will be needed to find out how this spatial map is finally transformed into the insect's decision to act. The scientists are currently in the process of identifying higher-order neurons in the lateral horn to complete the olfactory circuitry from the periphery up to the brain centers where the decision is taking place that leads to purposive odor-guided behavior. [AO/SS]

How Apps, Wearables, And NanoTechnology Are Revolutionizing Healthcare 04-03



Today, massive technological shifts – driven by Big Data, mobility, security and cloud computing – are rapidly transforming business and society. Entire industries are being completely transformed, and healthcare is one of them. These trends are unlocking new possibilities for hospitals, researchers, doctors and patients. Because innovation in healthcare is so critical, technology advancements are setting exciting new benchmarks for further innovation and saving countless lives all over the world. While massive amounts of data (Big Data) are enabling better diagnosis and predictions, applications, wearables, and nanotech are revolutionizing healthcare by empowering consumers to take care of themselves and to perform better in their personal and professional lives. After all, if we don’t have our health, what are we left with? With so many advancements already achieved and the growing desire to take our own health by the horns, healthcare IT is fast turning into the favorite child of tech innovation.

Applications are changing the patient/doctor relationship

In today’s health-conscious world, tools like Fitbit and other personal monitoring tools are becoming ubiquitous, allowing us to track our fitness activities, sleep patterns, blood pressure and caloric intake. The trend to track these lifestyle variables has changed the dynamics of the traditional patient-doctor relationship, and has opened the doors to more advanced forms of treatment and care through remote monitoring, telemedicine, etc. In-clinic diagnostic treatment is giving way to virtual consultation, and homecare might replace hospital care very soon as we move to turn patient/doctor relationships into virtual ones.
2014 witnessed widespread adoption of telemedicine across healthcare facilities in the U.S., while the field has become even more accessible with healthcare retail giants like Walmart, Walgreens, and CVS stepping into the telemedicine game. New telemedicine kiosks have made their way into retail spaces and mobile apps have been developed with the aim of putting healthcare into the hands of patients. Not just for diagnosing common illnesses, apps are also being developed to manage critical diseases. For instance, a new Ebola Care App is helping caregivers and Ebola workers handle key areas of disease management such as data collection, analysis, and response.

Wearables and nanotech: The future of healthcare is here

While applications and tools are enabling self-monitoring of health, more sophisticated devices and technologies are also capable of delivering the data generated to healthcare professionals, who can process it to predict and prevent bigger health concerns in the future. Wearable devices are playing a major role in transferring actionable data from patients to doctors and caregivers, even employers. As a recent example, Google X Lab has partnered with Novartis to design contact lenses that track glucose levels in the wearer’s tears and transfer that information to a mobile device that the doctor uses for monitoring.
Another Google X project uses nanotechnology to detect cancer. It aims to develop a nanoparticle pill that, when ingested, travels through the bloodstream and detects any abnormalities that suggest the presence of cancer. The data generated would be transmitted to doctors through a wearable device. It could be a revolutionary innovation for detecting and eradicating life-threatening diseases in their early stages.
Given the way disruptive technologies are creating seismic shifts in the healthcare landscape, it’s not hard to predict that we are in the midst of a healthcare revolution that empowers us more than ever to manage our lives and perform better on the field, at work, or at home. Our ability to track our diet, exercise, daily activity, and sleep gives us the opportunity to better understand how we can be at our best. As we gain insight, we’ll be provided with data (Big Data) that will not only allow us to make better decisions about how we live, but will give our doctors insight into how to save lives, and will even allow companies to better plan technology and healthcare for their employees and determine the best ways to provide a life-work balance that brings out the very best in us.





New York’s Complicated Teacher Evaluation Proposal Is The Exception, Not The Rule 04-16

This week, millions of students across New York will begin taking the state’s annual round of standardized tests, starting with the Common Core English Language Arts test on Tuesday. But it’s not just students’ performance that will be graded — teachers are being evaluated just as much, if not more.
While 30 states require test scores to be considered in teacher evaluations, New York stands out in just how much weight it may soon be putting on the scores. In March, the state legislature approved a hotly contested measure backed by Gov. Andrew Cuomo that takes away district control of teacher evaluations and could allow the education commissioner to use test scores to determine as much as half of a teacher’s evaluation, giving test scores and observations the same weight. Legislators seem confused about the practical implications of the bill and are investigating the language. The education department has until June 30 to finalize the evaluation system.
The bill is just the latest in a flurry of teacher evaluation policy reforms that have been implemented across the country in recent years. New York had already reformed its teacher evaluations in 2012, but after 96 percent of teachers were deemed effective last year, Cuomo is looking to rework the system again.
According to the Center on Great Teachers and Leaders at American Institutes for Research, a nonprofit behavioral and social science research organization, every state but California has developed plans for a new system to be implemented at some point between 2011 and 2019. These new models of teacher evaluation include state tests, district tests, classroom observations, student and parent surveys, self-assessments from teachers, lesson-plan reviews, measures of professional learning, and more.
All these attempts to reform teacher evaluations are happening as the research on how best to assess teacher quality remains in its earliest stages, and teachers and students are being affected with every shift in policy. “We’re in the Wright brothers stage,” said Jim Hull, a senior policy analyst for the Center for Public Education. “We’re getting these policies in the air, and they’re not designed to be as effective as they probably could be. There’s still a lot to learn.”
One New York principal, Carol Burris of South Side High School in Rockville Centre, who is also a fellow with the nonprofit National Education Policy Center and who has been an outspoken opponent of high-stakes testing, said that New York’s policy will affect students for the worse.
“It puts the test scores in a pre-eminent position, which will have a bad effect on kids,” she said. “We’ve already seen the effects that this has — a narrowing of curriculum and teaching to the test — and that will just become an even more prevalent practice.”
In Burris’s ideal world, test scores would play at most a secondary role in teacher evaluations. “I think the role of test scores should be outside the model. Teachers should be evaluated by their principal, and if for some reason that evaluation is way off the mark, then decent test scores can be used as a form of protection to challenge that evaluation,” she said.
Hull understands her anxiety about the new policy. “There’s definitely room for concern when you talk about 50 percent of an evaluation being based on test scores,” he said. “Especially if it’s not a high-quality assessment [of the students] and doesn’t assess a broad range of knowledge.” The quality of student tests is a particularly sticky issue in New York, where the first two years of Common Core test results, which have shown outsized gaps between demographic groups, have done little to ease critics’ concerns that scoring well on the tests requires skills that are irrelevant to the subject matter being tested.
There are competing views on the best way to approach incorporating student test scores into teacher evaluations. The model that’s currently accepted as the most accurate is known as the “value-added model” because it’s supposed to measure exactly how much of a student’s progress made over the course of the year is thanks to her teacher. It compares each student’s test scores to their own past scores and to other students’ scores. “It’s not perfect, but there is no other measure as good as [the value-added model],” Hull said. “But that’s not to say that other measures shouldn’t be used as well. You really want to have multiple measures of student outcomes.”
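As a rough sketch of the idea (not the model any state actually uses, and with fabricated data and a deliberately simplified prediction step), a value-added estimate can be thought of as: predict each student's score from their prior score, then credit each teacher with the average amount by which their students beat that prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated example data: prior-year score, current score and teacher id per student.
n_students, n_teachers = 300, 10
teacher = rng.integers(0, n_teachers, size=n_students)
prior = rng.normal(650, 30, size=n_students)
true_effect = rng.normal(0, 5, size=n_teachers)          # unobserved "true" teacher effects
current = 50 + 0.9 * prior + true_effect[teacher] + rng.normal(0, 10, size=n_students)

# Step 1: predict each student's current score from their prior score alone.
slope, intercept = np.polyfit(prior, current, 1)
predicted = intercept + slope * prior

# Step 2: a teacher's value-added is the average amount by which their students
# beat (or miss) their predicted scores.
residual = current - predicted
value_added = {t: round(residual[teacher == t].mean(), 2) for t in range(n_teachers)}
print(value_added)
```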
The Measures of Effective Teaching (MET) Project, the magnum opus of teacher-evaluation studies — every expert I spoke with on this topic pointed me to it — confirms the idea that student growth, often in the form of test scores, should be included in evaluations, along with other metrics. The project, which was funded by the Bill & Melinda Gates Foundation, was a three-year research partnership between 3,000 teacher volunteers and dozens of independent research teams who together developed and tested multiple measures of teacher evaluations to help better identify effective teaching. It didn’t recommend one specific model, given that every district has its own needs, but it did highlight the fact that “heavily weighting a single measure may incentivize teachers to focus too narrowly on a single aspect of effective teaching and neglect its other important aspects.”
The project found that an evaluation model heavily weighting standardized test scores was more effective than other models in predicting improvements in how students performed on those tests, but was the least predictive of how students performed on supplemental higher-order tests (usually given at the district or school level), and also produced the least reliable results, based on year-to-year consistency in teachers’ scores. The full results are reported in the project’s research brief.
Texas is in the process of overhauling its evaluation system, and it will value test scores at a maximum of 20 percent — even less than the MET Project’s Model 4. Specifically, Texas’s new model will count “student growth” as 20 percent of a teacher’s score. “But there are four options that districts have for measuring student growth, and just one of them ties back to test scores,” said Tim Regal, the director of educator evaluation and support for the Texas Education Agency.
Texas’s new model, which will replace a 17-year-old system, is in its pilot year and being implemented in 57 districts. Next year, it will expand to 200 districts before the final statewide implementation in 2016-2017.
But rather than just changing the way different factors are weighted, the overhaul is an attempt to transform the way teacher evaluations are used and understood. “Right now, when you talk to teachers in Texas about appraisal, they have a certain mindset because it’s always been used just to determine renewal and nonrenewal,” he said, referring to the common practice of using evaluations solely to determine which teachers to let go. “The only thing that mattered was their score. A lot of our focus during the pilot year has been about how to change that mindset to be about a process of professional development and growth for all teachers.”
In addition to the 20 percent of teachers’ evaluations that will be based on student growth, another 70 percent will be based on observations from principals, assistant principals and other campus leaders, and 10 percent will be based on self-assessment.
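As a concrete illustration of how those weights combine into a single appraisal score (the 0-100 component scale below is an assumption for illustration, not part of the Texas system):

```python
# Composite appraisal score under the weighting described above:
# 20% student growth, 70% observations, 10% self-assessment.
# The 0-100 component scale is assumed purely for illustration.
def composite_score(growth: float, observation: float, self_assessment: float) -> float:
    return 0.20 * growth + 0.70 * observation + 0.10 * self_assessment

print(composite_score(growth=72, observation=85, self_assessment=90))  # 82.9
```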
According to Angela Minnici, director of the Center on Great Teachers and Leaders, a change in mindset about how teacher evaluations are used is necessary. “We need to move away from thinking about evaluation systems as only ways to dismiss underperformers,” she said. “We really should be thinking about evaluations as a source of power and investment for moving teachers to a better level of performance overall.”
While New York is putting even more emphasis on testing, most states are moving toward a system more like the one in Texas.
“States are definitely making a big step forward,” Hull said. “Over a short period of time, evaluation systems have gone basically from an exercise in bureaucracy to being given immediate feedback and information on how to improve performance. That type of feedback is how we get the best doctors, best lawyers, and it’s also how we’ll get the best teachers.”

Laser Li-Fi Could Blast 100 Gigabits per Second 04-16






Struggling with spotty Wi-Fi? Li-Fi  might be the answer. The technology uses LED-based room lighting instead of radio waves to transmit data. But one of the leading Li-Fi proponents is already looking beyond LEDs to laser-based lighting, which he says could bring a tenfold increase in data rates.
“The problem is that LEDs, although they are more energy efficient than incandescent light, they still can be improved in terms of their light output,” says Harald Haas, chair of mobile communications at the University of Edinburgh and a member of the Ultra-Parallel Visible Light Communications Project. “We strongly believe the next wave of energy efficient lighting will be based on laser diodes.”
Li-Fi encodes data on the light coming from LEDs by modulating their output. The rapid flickering is unnoticeable to the human eye, but a receiver on a desktop computer or mobile device can read the signal, and even send one back to a transceiver on the ceiling of a room, providing two-way communication. But many LEDs use a phosphor coating to convert blue light to white, and that limits how fast the devices can be modulated, holding down data rates.
In research published in Optics Express, Haas and his team showed that replacing the LEDs with off-the-shelf laser diodes vastly improved the situation. Lasers, with their high energy and optical efficiency, can be modulated at 10 times the rate of LEDs. And rather than using phosphors, laser lighting would create white light by mixing the output of several lasers operating at different wavelengths. That means each wavelength can be used as a separate data channel, the same sort of wavelength division multiplexing that lets optical telecommunications carry so much data. The Edinburgh group’s experiment used nine laser diodes.
While LED-based Li-Fi could reach data rates of 10 Gb/s, an improvement over the 7 Gb/s maximum of Wi-Fi, using lasers could boost that speed to “easily beyond 100 Gb/s,” Haas says.
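The arithmetic behind those headline figures is straightforward wavelength-division multiplexing: the aggregate rate is roughly the per-wavelength rate multiplied by the number of laser diodes. The per-channel figure below is an illustrative assumption, not a measured value from the Optics Express paper.

```python
# Rough wavelength-division-multiplexing estimate for a laser-based Li-Fi link.
channels = 9                 # laser diodes used in the Edinburgh experiment
per_channel_gbps = 11.0      # assumed per-wavelength rate (illustrative, not measured)

aggregate_gbps = channels * per_channel_gbps
print(f"~{aggregate_gbps:.0f} Gb/s aggregate")   # ~99 Gb/s, on the order of 100 Gb/s
```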
At the moment, such a setup is expensive, but Haas believes mass production will bring down the cost of the lasers and move them into lighting applications. BMW already sells laser-based headlights on its i8 model. “That is only the start of a technology move as laser diodes get more inexpensive,” Haas says.


Software AG and Wipro in big data pact for IoT 04-16


Wipro has combined Software AG’s Big Data Streaming Analytics portfolio with its own IoT architectural framework.



Software AG and IT services firm Wipro have integrated their analytic tools to develop real-time intelligent information for the Internet of Things (IoT) market.

Wipro has combined Software AG's Big Data Streaming Analytics portfolio with its own IoT architectural framework, the Wipro Looking Glass, to form a streaming analytics IoT platform.

The platform is designed to allow enterprises to understand how customers use their products in real-time, across industries such as financial services, manufacturing and telecommunication.

"The key to successfully addressing the IoT market is the ability to rapidly build and evolve apps that tap into, analyse and make smart decisions on fast, big data", said Software AG's CMO John Bates.

"This partnership addresses these requirements by combining Wipro's dedicated Software AG product practice, support services and a sharp industry focus with Software AG's industry leading Streaming Analytics product suite - together delivering a unique and powerful streaming analytics solution platform."

Alan Atkins, VP and Global Head of IoT at Wipro, added: "Wipro offers end-to-end services for IoT, enabled through business and technology frameworks to address the business challenges of the customers.

"Analytics and Visualisation form important elements of this value proposition and the partnership with Software AG compliments the other offerings in the framework that we enable for the customers"

The World’s Most Tech-Ready Countries 2015 04-16






Those able to harness the power of information and communication technology are reaping ever more benefits. But in poor countries, digital poverty is holding back growth and development, leaving them further behind.


Singapore is this year’s leader of the global ICT revolution. Its government has a clear digital strategy and is an exemplar of online services and e-participation tools, which filters down to its industries and population. The country has the highest penetration of mobile broadband subscriptions per capita in the world and more than half of the population is employed in knowledge-intensive jobs. 
The country topped this year’s Global Information Technology Report (GITR), published by INSEAD in partnership with the World Economic Forum and Johnson Cornell University, due to its leadership in business, innovation environment and government usage of ICT.
The report benchmarks 143 economies in terms of their capacity to prepare for, use and leverage ICT.

What gives these high income countries their “networked readiness advantage” is their education systems and concerted policy efforts to facilitate innovation and commerce, which allows them to take advantage of digital innovation in a way that emerging economies cannot.
Hold the champagne 
From a global point of view, however, we see reasons for concern. Not only does this year’s report show that the world’s emerging economies are failing to exploit the potential of ICTs to drive social and economic transformation, but the gap between the digital haves and have nots is increasing. Those in the top 10 percent of the ranking have seen twice the level of improvement since 2012 as those in the bottom 10 percent.  
This is also reflected by progress among the world’s largest emerging markets. Lately, their journey towards network readiness has been disappointing. While the Russian Federation is the highest-placed of the BRICS nations, climbing nine places to 41st, and China remains at 62nd, all other members of the group have declined. India dropped six places to 89th, Brazil dropped 15 to 84th and South Africa is 75th, down five spots.
And the example of the BRICS is not unique. Many other emerging countries that have improved their networked readiness over the last decade or so are now facing stagnation or regression. Indonesia is one such example, falling 15 places this year to 79th from 64th last year. This is partly due to persistent divides within these countries between their rural and urban areas and across income groups, which leaves a large portion of the population out of the digital revolution.
Emerging exemplars
Despite the top 30 places being dominated by high-income countries, there are a number of other countries in which ICT is being used to stimulate growth and reduce inequality. Here, governments are using a number of instruments in their toolkits, such as balancing liberalisation and regulation to stimulate healthy competition across their economies.
Among those that have made considerable improvements in terms of their ranking are Lithuania (31st), Malaysia (32nd) and Latvia (33rd). Other examples of countries punching above their weight include Kazakhstan (40th), Armenia (58th) and Georgia (60th), as well as Mauritius (45th), which is far ahead of the other sub-Saharan African countries. Even so, in a number of sub-Saharan countries, both large and small, progress is being made where their ICT markets have been liberalised. For example, Kenya, Nigeria and Tanzania, as well as smaller economies like Cape Verde, Lesotho and Madagascar, are all beginning to see the benefits of market reforms.
Low hanging fruits
There is no doubt that technological innovation is influencing lives in all types of economies – social media are changing the ways in which we interact with each other at an individual level and at an industry level. Big data is being used to create new products and generate new markets.   In developing countries, however, information and communication technologies (ICTs) are even more fundamental to reducing inequalities, to taking people out of poverty and to creating jobs. 
The internet remains nonexistent, scarce, unaffordable or too slow in vast swathes of the developing world. While internet penetration has been growing, its growth has slowed lately. A fresh internet revolution is needed to connect the ‘next two billion people who still do not have online access’. This will require concerted efforts to extend mobile broadband to large parts of the developing world.
To achieve an internet revolution and bridge the digital divide, developing countries must consider long-term investments in infrastructure and education. Governments can accelerate the process through sound regulation and more intense competition.
If fostered properly, ICT can transform economies through productivity gains, reducing information costs, allowing new models of collaboration and changing the way people work. ICT fosters entrepreneurship and wealth creation. But widespread ICT use by businesses, government and the population at large is a pre-condition for all these benefits and opportunities to materialise and to be accessible to the largest numbers. Inclusive growth is indeed within reach, and ICTs can play a critical role in making it a global reality.
The Global Information Technology Report uses a combination of data from publicly available sources and the results of the Executive Opinion Survey of more than 13,000 executives conducted by the World Economic Forum and partners.  It gauges usage, socio-economic impact, political and regulatory environments and the climate for business and innovation as well as ICT infrastructure, affordability and ICT skills.
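Conceptually, an index of this kind rolls many indicator scores up into pillar averages and then into a single country score. The sketch below shows only that aggregation pattern; the pillar names and values are invented and do not reproduce the report's actual methodology or data.

```python
# Toy aggregation in the spirit of a networked-readiness-style index.
# Pillar names and scores are invented for illustration only.
pillars = {
    "environment": [5.1, 4.8],        # e.g. political/regulatory, business/innovation
    "readiness": [6.0, 5.5, 5.2],     # e.g. infrastructure, affordability, skills
    "usage": [5.8, 5.0, 5.6],         # e.g. individual, business, government usage
    "impact": [5.4, 5.9],             # e.g. economic and social impacts
}

pillar_scores = {name: sum(vals) / len(vals) for name, vals in pillars.items()}
overall = sum(pillar_scores.values()) / len(pillar_scores)
print(pillar_scores, round(overall, 2))
```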


India successfully test fires nuclear-capable Agni III ballistic missile 04-16





India on Thursday successfully test fired its nuclear-capable Agni-III ballistic missile with a strike range of more than 3,000 km from Wheeler Island off Odisha coast.
The indigenously developed surface-to-surface missile was test fired from a mobile launcher at launch complex-4 of the Integrated Test Range (ITR) at Wheeler Island by the army at about 0955 hrs, defence sources said.
“The trial, carried out by the Strategic Forces Command (SFC of the Indian Army), was fully successful,” ITR Director M V K V Prasad told PTI.
Logistic support for the test was provided by the Defence Research and Development Organisation (DRDO). “It was the third user trial in the Agni-III series carried out to establish the ‘repeatability’ of the missile’s performance,” a DRDO official said.
For data analyses, the entire trajectory of today’s trial was monitored through various telemetry stations, electro-optic systems and sophisticated radars located along the coast and by naval ships anchored near the impact point, the sources said.
The Agni-III missile is powered by a two-stage solid propellant system. It is 17 metres long, 2 metres in diameter, and has a launch weight of around 50 tonnes.
It can carry a warhead of 1.5 tonnes, which is protected by an all-carbon composite heat shield.
The sleek missile, already inducted into the armed forces, is equipped with hybrid navigation, guidance and control systems along with an advanced on-board computer.
The electronic systems connected with the missile are hardened for higher vibration, thermal and acoustic effects, a DRDO scientist said.
Though the first developmental trial of Agni-III, carried out on July 9, 2006, did not produce the desired result, subsequent tests on April 12, 2007, May 7, 2008 and February 7, 2010, as well as the first user trial on September 21, 2012 and the next on December 23, 2013 from the same base, were all successful.

Inject Novelty into Your Innovations 04-16








Making sure customers are familiar with your products is essential, but so is a strong dose of occasional novelty.



What we find most attractive are things that look familiar, including people and faces. Our species needed this trait to survive because we needed to separate friend from foe. At the same time, novelty is the spice of life. These two dimensions – familiarity and novelty – create an interesting trade-off for managers regarding innovation: should they launch product updates that lower familiarity but increase novelty?
The entertainment industry—TV shows, video games, and mobile apps—frequently introduces new content, so that the “story line” evolves and feels new, while keeping some familiarity with the characters. The serial TV show “24” and the video game “World of Warcraft” are good examples; each TV episode or game expansion adds to the understanding of the story line, feeding consumer loyalty and interest, while maintaining well-known and popular features.
In our paper, The Impact of Innovation and Social Interactions on Product Usage, Yulia Nevskaya (Washington University in St Louis) and I study these managerial problems by building a model that explains how consumers react to product updates. We tracked the usage experience of hundreds of users of World of Warcraft, an online game that celebrated its 10th anniversary in 2014 with parties in Paris, Berlin, and London this November, and that currently has more than seven million active players.
We look at game participation using information about in-game achievements, which are now common practice across video games: measures of how accomplished players are, so that they can have bragging rights with their friends.
The evolution of participation in the expansion “The Wrath of the Lich King” is displayed in the graph and shows clear insights about how players respond to product updates. Before each product update, users are forward-looking, anticipating new content that is very likely more exciting than what was launched before, which leads to waiting for the new content and drops in participation (red circles in the graph).
When the product update is introduced, excitement is at its maximum, with a spike in playing and enjoyment (green circles), especially if the content is challenging and exciting, as it was in the third big update, where players could kill the Lich King in one of the most exciting “boss fights” of the game.
Overall, during most of the first half of the expansion, content feels new, and the pleasure of becoming familiar with it, solving its puzzles, as well as progressing in the overall storyline increases participation (familiarity, progression). However, once the content becomes too familiar, boredom kicks in and as time progresses, users choose not to play as often.
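The qualitative pattern described above, a dip in anticipation of an update, a spike at launch, then slow decay as content grows familiar, can be reproduced by a very simple toy simulation. This is an illustration of the intuition only, not the structural model estimated in the paper.

```python
import numpy as np

# Toy weekly participation series around one content update: novelty decays as
# content grows familiar (satiation), and players hold back just before a
# known update (anticipation). All numbers are arbitrary.
weeks, update_week = 30, 15
novelty = np.zeros(weeks)
novelty[0] = 1.0

participation = []
for t in range(weeks):
    if t == update_week:
        novelty[t] = 1.2                       # fresh content: excitement spikes
    elif t > 0:
        novelty[t] = 0.9 * novelty[t - 1]      # familiarity sets in, novelty decays
    anticipation_drag = 0.3 if update_week - 3 <= t < update_week else 0.0
    participation.append(max(novelty[t] - anticipation_drag, 0.0))

print([round(p, 2) for p in participation])
```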
Once these patterns are well understood, the paper shows that the firm can optimise the timing of the innovation – delay innovation by two months because of bugs and you can lose ten percent of participation; anticipate it by a week with a flawless design and participation can grow by five percent. Given the social aspect of online games, these small percentages have long-term impact that can make or break the success of similar games.
A balance between the novelty and familiarity is essential to keep people involved with the product. It’s about managing the excitement and the satiation around an addicting storyline.

Start-up Innovation: Who Else Shares Your Partner’s Bed? 04-17




Strategic partnerships can be essential to a start-up’s innovation output. But are your partner’s other alliances affecting the value you get out of the relationship?


A new company’s performance is often judged by its innovation outputs. The greater the output the greater the competitive advantage it will have in the marketplace, and the more likely it is to attract the financing and resources necessary to survive. Gaining access to diverse sets of knowledge and recombining this knowledge in new and interesting ways are key drivers of innovation output, particularly for early stage start-ups. Forming strategic alliances is an important route to accessing these knowledge-based resources. 
But a strong partner for your company is often a good fit for your company’s competitors as well. It’s generally the case that, when it comes to young businesses, the more like-minded companies there are working in close proximity to one another, the more innovative and productive they are likely to be. But what is often overlooked is the competition that arises when these companies compete for the attention and resources of a shared alliance partner.

Although the value your company achieves in a future M&A or IPO might be higher with a more connected alliance partner, when it comes to relationships where deeper, ongoing interactions are necessary – like R&D alliances – the “non-scale free” nature of your partners’ resources could mean that you end up fighting for your partner’s time and attention.
When it comes to the knowledge production process, valuable innovation outcomes are more likely to arise when your partner can devote a larger share of mind to your relationship.
My recent study Alliance Portfolios and Resource Competition: How a Firm’s Partners’ Partners Influence the Benefits of Collaboration, looked at 281 fledgling venture capital-backed biotech companies and found that when a start-up’s R&D alliance partners had a higher share of R&D alliances in their portfolios, together with a greater overlap in the R&D function between their own relationship and that of their partners’ partners, then access to shared resources was diminished, innovation output was lower, and the benefits for the start-up of collaborating with the partner were thus significantly reduced.
But not all alliances have this effect. The study also found that when start-ups in the sample could draw on a larger pre-existing knowledge base, and so capture a higher share of the knowledge spillovers that occurred through ongoing interactions with their partners, the downsides of any R&D alliance overlap were diminished. In addition, a higher level of functional overlap between a firm’s alliances and those of its partners was found to be beneficial, boosting the firm’s legitimacy and enhancing its value when it came time to exit, particularly when the alliances in question were marketing-oriented.
It’s a complicated scenario, but one that highlights the importance of knowing who else shares your partner’s ‘bed’, what their relationship involves, what resources they need, and how much attention they are receiving from the alliance. It also highlights the need for companies to make themselves more attractive to the shared partner. If resources are constrained, it is the start-up with the larger knowledge base, and the mechanisms to codify incoming knowledge and put it to productive use, that is more likely to attract the shared partner’s attention and resources.
Strategic partnerships are essential to innovation, but scrutinising a potential partner’s resource portfolio is not enough. The nature of those resources is conditioned by the characteristics of the potential partner’s other relationships. It’s important to think about how the partner has configured these relationships and how many other partners will be competing for its non-scale free resources.
Before entering into alliances, start-ups should identify their objectives for the relationship, what resources and space will be allocated to them, and how much time and attention the strategic partner is able and willing to give. Failing to take these issues into account can affect your company’s future innovation prospects.

Preserving Innovation Flair 04-17





Budding start-ups often aim at the big prize of either going public or getting acquired, but both avenues can hurt innovation. What’s the best path to growth while maintaining your firm’s creative flair?
Facebook’s bumpy first year as a public company recently sparked debate about the creative benefits of private ownership. The social giant drew heavy criticism for what some felt was an awkward adaptation to the increasing importance of mobile, as well as for a series of copyright and privacy controversies. Only in recent weeks did Facebook’s share price top its IPO price.
Before its inevitable listing, the company is said to have worried about the effects of public scrutiny on its innovative potential. Now it wrestles with the beast that is the public marketplace and is grilled on new projects and tweaks to its model at every turn.
Meanwhile, Dell’s innovation slump and its founder’s buyout proposal have been widely attributed to the pressures the company has felt at the hands of a demanding public market.
When entrepreneurs ask themselves whether they should take their start-ups public or sell to the highest bidder, they are in fact putting their firm’s innovative potential in question, according to a new paper by INSEAD professor Vikas A. Aggarwal and Wharton professor David H. Hsu.
In Entrepreneurial Exits and Innovation, the first paper to address how entrepreneurs should evaluate the alternative liquidity paths available to them, Aggarwal and Hsu find that going public or being acquired does in fact influence a firm’s innovation output.
By measuring the number of patents filed by venture-backed biotechnology firms that were founded between 1980 and 2000, along with the associated citations to these patents, the authors found three different innovation consequences for firms transitioning from being start-ups to being public or acquired firms.
The IPO effect
According to the study, going public causes innovation quality to suffer the most.
Aggarwal and Hsu’s research determines that this is mostly due to information disclosure. As public companies must disclose their inventions as well as their results, managers may opt to back safer projects in order to produce results in the short term.
“What happens in the case of private companies is that you’re able to operate under the radar screen, and that allows you to select projects that may have a higher risk of failure. That then allows you to make investments where you’re not under the constant scrutiny of larger owners or the public market,” said Aggarwal in an interview with INSEAD Knowledge.
“The competitive aspect of disclosure can be quite important, particularly when you have to disclose what’s in your pipeline and who you are partnering with; this influences the types of projects you will select,” he added.
This mechanism of information disclosure is accelerated under analyst scrutiny, as more information is uncovered and divulged to investors hungry to know about pipelines and future share prices.
“Analyst scrutiny and the number of products a firm has in its early stage pipeline are both metrics for which there is greater oversight and risk. When analysts are scrutinising a company that has a lot of early stage projects in the pipeline, those are the conditions under which we would expect the disclosure mechanism to be most salient,” Aggarwal said.
Getting acquired
In a merger, the effects are also negative when the company in question is bought by a publicly listed one. While acquired firms saw an increase in the quantity of their innovations, they experienced a decline in overall innovation quality. This has to do with managers of the acquiring firm pushing for short-term, observable outcomes. Such a focus, however, may be detrimental to the long-term innovation potential of the organisation, according to Aggarwal.
It’s not all bad news for firms that get acquired though. In the study Aggarwal and Hsu found that companies being bought by a private entity rather than a public one see an increase in innovation quality. This is because private acquirers maintain more information confidentiality relative to their public counterparts. Lower technology overlap between the two firms also helps insulate the acquired firm, protecting its ability to innovate.
The Silver (Lake) Lining
“This has important implications for private equity ownership,” says Aggarwal. “In particular, it says a lot about the value that private equity firms can create. Dell is a great example: One of the reasons they’ve been less innovative over the past decade or so is because they’ve been under constant public scrutiny. Part of the motivation behind the buyout is to spur innovation at all levels of the company.”
Of course, Aggarwal readily admits that it’s possible to go public without losing the capacity to innovate (think Amazon). “I think the question is: Would Amazon have been more innovative under a private ownership regime? We know that Facebook held off on its IPO for as long as possible, in part because it feared the consequences of being in the public eye and the possible innovation implications that would have; while we of course never know the counterfactual, what our research tells us is that private relative to public ownership for a given company is likely to spur innovation.”

First Quantum Music Composition Unveiled 04-17




Physicists have mapped out how to create quantum music, an experience that they say will be profoundly different for every member of the audience.

One of the features of 20th century art is its increasing level of abstraction, from cubism and surrealism in the early years to abstract expressionism and mathematical photography later. So an interesting question is: what further abstractions can we look forward to in the 21st century?

Today we get an answer thanks to the work of Karl Svozil, a theoretical physicist at the University of Technology in Vienna and his pal Volkmar Putz. These guys have mapped out a way of representing music using the strange features of quantum theory. The resulting art is the quantum equivalent of music and demonstrates many of the bizarre properties of the quantum world.


Svozil and Putz begin by discussing just how it might be possible to represent a note or octave of notes in quantum form and by developing the mathematical tools for handling quantum music.

They treat the seven notes in a quantum octave as independent events whose probabilities add up to one. In this scenario, quantum music can be represented by a mathematical structure known as a seven-dimensional Hilbert space.

A pure quantum musical state would then be made up of a linear combination of the seven notes with a specific probability associated with each. And a quantum melody would be the evolution of such a state over time.
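To make this concrete, here is a minimal sketch (in Python, using NumPy) of how such a pure state might be represented on an ordinary computer: a normalised vector of seven complex amplitudes, one per note, whose squared magnitudes give the probabilities. The amplitudes are illustrative choices, not values from Svozil and Putz’s paper.

```python
# A sketch of a pure "quantum musical state": a normalised vector of complex
# amplitudes over the seven notes of an octave. The amplitudes are illustrative.
import numpy as np

NOTES = ["C", "D", "E", "F", "G", "A", "B"]

raw = np.array([0.8, 0.1, 0.1, 0.1, 0.6, 0.05, 0.05], dtype=complex)
state = raw / np.linalg.norm(raw)        # normalise so the probabilities sum to one

probabilities = np.abs(state) ** 2       # Born rule: P(note) = |amplitude|^2
assert np.isclose(probabilities.sum(), 1.0)

for note, p in zip(NOTES, probabilities):
    print(f"{note}: {p:.3f}")
```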

An audience listening to such a melody would have a bizarre experience. In the classical world, every member of the audience hears the same sequence of notes. But when a quantum musical state is observed, it can collapse into any one of the notes that make it up. The note that is formed is entirely random but the probability that it occurs depends on the precise linear makeup of the state.

And since this process is random for each observer, the resulting note will not necessarily be the same for every member of the audience.

Svozil and Putz call this “quantum parallel musical rendition.” “A classical audience may perceive one and the same quantum musical composition very differently,” they say.

As an example they describe the properties of a quantum composition created using two notes: C and G. They show how in one case, a listener might perceive a note as a C in 64 percent of cases and as a G in 36 percent of cases.

They go on to show how a quantum melody of two notes leads to four possible outcomes: a C followed by a G, a G followed by a C, a C followed by a C, and a G followed by a G. And they calculate the probability of a listener experiencing each of these during a given performance. “Thereby one single quantum composition can manifest itself during listening in very different ways,” say Svozil and Putz. This is the world’s first description of a quantum melody.
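The two-note arithmetic is easy to check. In the sketch below, amplitudes of 0.8 and 0.6 reproduce the 64 percent and 36 percent figures above; assuming, purely for illustration, that each note of the melody collapses independently (the paper’s own melodies may evolve in more elaborate ways), the four melody outcomes and their probabilities follow directly.

```python
# The C/G example: amplitudes 0.8 and 0.6 give probabilities 0.64 and 0.36.
# Assuming (for illustration only) that each note of a two-note melody collapses
# independently, the four outcomes and their probabilities are:
from itertools import product

amplitudes = {"C": 0.8, "G": 0.6}
probs = {note: a ** 2 for note, a in amplitudes.items()}   # {'C': 0.64, 'G': 0.36}

total = 0.0
for first, second in product(probs, repeat=2):
    p = probs[first] * probs[second]
    total += p
    print(f"{first} then {second}: {p:.4f}")               # e.g. C then C: 0.4096

assert abs(total - 1.0) < 1e-9                             # the four outcomes exhaust all cases
```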

The researchers go on to discuss the strange quantum phenomenon of entanglement in the context of music. Entanglement is the deep connection between quantum objects that share the same existence even though they may be in different parts of the universe. So a measurement on one immediately influences the other, regardless of the distance between them.

Exactly what form this might take in the quantum musical world isn’t clear. But it opens the prospect of an audience listening to a quantum melody in one part of the universe influencing a quantum melody in another part.

Svozil and Putz also take a stab at developing a notation for quantum music (see picture above).

That takes musical composition to a new level of abstraction. “This offers possibilities of aleatorics in music far beyond the classical aleatoric methods of John Cage and his allies,” they say.

There is one obvious problem, however. Nobody knows how to create quantum music or how a human might be able to experience it. Svozil and Putz’s work is entirely theoretical.

That shouldn’t stop the authors or anybody else from performing a quantum musical composition. It ought to be straightforward to simulate the effect using an ordinary computer and a set of headphones. So instead of quantum music, we could experience a quantum music simulation.
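A minimal sketch of such a simulation, reusing the illustrative two-note state from above: each listener’s device samples the melody independently, so different members of the “audience” hear different renditions of the same composition.

```python
# A toy "quantum music simulation": every listener's (classical) device samples
# the same underlying state independently, so each hears a different rendition.
import random

NOTE_PROBS = {"C": 0.64, "G": 0.36}      # the two-note state from the example above
MELODY_LENGTH = 8
AUDIENCE_SIZE = 4

def render_for_listener(rng):
    """Collapse the state once per beat for a single listener."""
    notes, weights = zip(*NOTE_PROBS.items())
    return rng.choices(notes, weights=weights, k=MELODY_LENGTH)

for listener in range(1, AUDIENCE_SIZE + 1):
    rng = random.Random()                # an independent source of randomness per listener
    print(f"Listener {listener}:", " ".join(render_for_listener(rng)))
```

Running the sketch prints a different eight-note sequence for each simulated listener, which is the “quantum parallel musical rendition” described above.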

That’s interesting work that has implications for other art forms too. How about a quantum sculpture that changes for each observer, or a quantum mobile that is entangled with another elsewhere in the universe?

One thing seems clear. Quantum art is coming, or at least the simulation of it. So don’t be surprised if you find a quantum melody playing at an auditorium near you someday soon.




Four Questions to Revolutionise Your Business Model 04-17


Innovation is about more than groundbreaking technology. Rigorous, systematic questioning of risks in your business model can unleash opportunities for game changing performance improvements.
For a decade or more, listings website Craigslist seemed a rare exception to the Internet’s innovate-or-die rule. Very early in the new millennium, the San Francisco-based site nestled comfortably into the space once solely occupied by local newspapers’ classified sections, thanks to a singular, free-ad-based business model predicated on low operating costs. It has since expanded into more than 70 countries without making any major changes to either how it does business or its infamously drab design.

A recent spate of copyright-related legal actions, however, may reveal that Craigslist’s fountain of youth is at risk of running dry. In April 2013, a U.S. district court dismissed its claim that it held exclusive license to its listings. The year before, Craigslist added language to its terms of service asserting exclusive ownership of all user posts, prompting outcry from some consumer groups. (The language was deleted after just three weeks). At least to some, the company was starting to display a punitive side not in keeping with its stated mission to provide a public service.

It may be time for Craigslist to rethink its long-standing refusal to innovate, but this particular issue can’t be resolved with a raft of fancy new features. To satisfy its critics while keeping startups at bay, the company would have to turn a cold eye to what has been the unquestioned basis for its success: the vaunted Craigslist business model. In their new book The Risk-Driven Business Model: Four Questions That Will Define Your Company, Karan Girotra, INSEAD Professor of Technology and Operations Management, and Serguei Netessine, INSEAD Timken Chaired Professor of Global Technology and Innovation, make a forceful case for this sort of business model innovation (or “BMI”) and map out how Craigslist – or any company in danger of dulling its competitive edge – could use it to their advantage.

Four W’s to Manage Risk

The “four questions” of the book’s title – Who, What, When, and Why – are hardly unique to the business world, but according to the authors, few firms subject their business model to such basic scrutiny frequently enough. There’s no substitute, Girotra and Netessine say, for the fundamental questions, such as “What should we sell?” and “When should we introduce our new products?”

“I like to compare it to financial auditing, which every organisation does every year, many times,” Netessine said in an interview with INSEAD Knowledge. “Often, a public company will do it once a quarter. But then you ask the same company how often [it examines] its own business models, they’ll tell you, ‘Well, I don’t know. Twenty years ago? Thirty years ago?’”

When business models are allowed to gather dust, the authors contend, hidden risks accumulate that could unravel companies. Though these risks come in many varieties, the book concentrates on two main kinds: information risk and incentive alignment risk. As Girotra explains, information risk “arises out of not knowing something, for instance which colour of iPhone 5 will be popular.” Incentive alignment risk occurs when the best interests of stakeholders diverge. It was awareness of this risk, for example, that led entertainment rental company Blockbuster to change its contracts with movie studios in the 1990s so that stores could order more copies of the most valuable new releases. The result: a big boost in overall profits, and a new (if temporary) lease on life for Blockbuster.

Blockbuster’s more recent difficulties, the authors suggest, could perhaps have been avoided had the firm kept the “4Ws” top of mind. “The 4Ws anchor our framework,” they write, “because they are the innovator’s focal point for reducing both of our characteristic types of risk. By changing models to address the effects of these risks, you can limit the inefficiencies they cause and thereby unlock new value.”

Model Innovators

Many companies have turned to BMI techniques in times of existential crisis, but Amazon stands out to Girotra and Netessine for prioritising proactive, rather than reactive, reinvention of its business model. The company underwent multiple shifts in its business model in its first 15 years, they say, taking it from a “sell all, carry few” system heavily dependent on book wholesalers and publishers to a major wholesaler in its own right with a far-flung, ever-expanding network of warehouses. “It is amazing, I think, how Amazon has kept far more discipline than almost any organisation you could think of,” Girotra said.

But BMI isn’t just for the big boys. To escape the deepening shadow of Amazon, for example, smaller online retail companies could use the 4Ws to carve out a niche for themselves. That was the case with Diapers.com, which launched in 2005 with a business model aimed squarely at new parents. “Diapers had one amazing thing going for them,” Netessine enthused. “Demand for them is extremely easy to predict…For the next two, three years, you know exactly how much your customers are going to buy, and that makes it very, very easy to manage at very, very low cost and much higher efficiency than, say, for Amazon.”

Small wonder, then, that when Amazon bought Diapers.com in 2010, it allowed the new acquisition to operate at a respectful distance from the parent brand. “Perhaps because they wanted to keep both business models separate, so that both strengths continue to be strengthened,” Girotra said.

Enter the “Insurgency”

Effective implementation of BMI comes in three phases, the authors said. “The first phase is generating ideas of what kind of innovations [a company] might be able to do. Next phase is selecting between these innovations. And the final phase is really refining and testing them out, seeing if they really work or not,” Girotra said.

When gathering ideas, the authors recommend including as much input as possible from throughout the organisation. But for efficiency’s sake, it may be best for top management to hand-pick a diverse team to spearhead the refinement and experimentation phases. Girotra explained, “You don’t really make war on the existing business model, you really have an insurgency of a few people sitting outside the traditional structure who start developing the model.”

But unlike insurgencies that overthrow governments, these would be focused on “evidence-based experiential evaluation” that “eliminates a lot of the ideology around the existing and the new,” the authors said. Ideological agnosticism as well as broad representation on the team will help soothe any sore spots within the organisation as thoroughgoing change commences.

Though it may appear risky to loosen attachments to business-as-usual, Netessine stressed that experimentation is a key driver of innovation, even when it yields short-term losses. “If you want to innovate, many of those innovations will fail. Many more will fail than succeed. The important thing is to keep the process running.”

What's The New Quantum Renaissance? 04-18






"Perhaps the quantum computer will change our everyday lives in this century in the same radical way as the classical computer did in the last century": these were the words of the Nobel committee upon awarding Serge Haroche and David Wineland the Nobel Prize for Physics for their work on Quantum Systems. Quantum computers can model complex molecules which can contribute to improvements in health and medicine through quantum chemistry. They are also capable of modelling complex materials which can impact energy efficiency and storage through room temperature superconductivity, as well as solving complex mathematical problems which will benefit safety, security and simulation.

New Renaissance
We are at the beginning of a new renaissance that explores the quantum nature of our shared reality. This new age is fast-forwarding us to an era of Quantum engines, devices and systems which are beginning to deliver nascent solutions in:
1. Q-bit mathematics 
2. Q-bit algorithms & simulation
3. Quantum clocks
4. Quantum sensors
5. Quantum precision components
6. Quantum cryptography
7. Quantum telecommunications
8. Quantum computing
9. Quantum healthcare
10. Quantum energy devices
The most significant innovations and inventions of our time are increasingly likely to be manifest at Quantum levels. Multiple paradigm-changing Quantum Technologies provide significant solutions to current global challenges in many critical areas of human endeavour and may also present outstanding opportunities for proactive, brave and technologically savvy inventors, innovators and investors.
The Quantum Age Begins
The dawn of the Quantum Age -- full of promising new Quantum Technologies -- demonstrates not only that Quantum Mechanics is relevant, but that every literate person can appreciate its profound beauty at many subtle levels. It is estimated that about 30 percent of the US Gross Domestic Product (GDP) already stems from inventions based on quantum physics: from lasers through to microprocessors and mobile phones.
The discovery that subtle effects of Quantum Mechanics allow fundamentally new modes of information processing is outmoding the classical theories of computation, information, cryptography and telephony which are now being superseded by their Quantum equivalents. Quantum Technologies are essentially about the coherent control of individual photons and atoms and explore both the theory and the practical possibilities of inventing and constructing Quantum Mechanisms and Quantum Devices spanning the 3Cs: Computing, Cryptography and Communications.
Quantum Entanglement & Quantum Coherence
Quantum Entanglement occurs when two entities or systems that appear to us to be separate act, through Quantum Coherence, as one system, with states transferred wholesale from one entity to the other without any known signal passing between them. Quantum Entanglement is at the heart of our understanding of how events across the universe can operate in synchrony at the macro and micro level despite the considerable distance between them. It suggests that correlations between Quantum Entangled particles appear instantaneously, regardless of the distance between them.
Fascinating Action at a Distance
In Quantum Mechanics, non-locality refers to "action at a distance" arising from measurement correlations on Quantum Entangled states. The dividing line between the micro world of Quantum processes and the macro world of classical physics is fading faster than ever before. Evidence is mounting that nature itself exploits Quantum properties and processes, including Quantum Entanglement. Recent research suggests that Quantum Coherence and Entanglement may help explain a host of mysteries in nature: how photosynthesis in plants works, how birds navigate during migration, how millions of cells co-ordinate hundreds of thousands of activities simultaneously without significant errors, and more.
Instantaneous Communication
It took a long time to prove that Quantum Entanglement truly exists; it wasn’t clearly demonstrated until the 1980s. In 1982, at the University of Paris, a research team led by physicist Alain Aspect performed what may turn out to be one of the most important experiments of the 20th century. Aspect and his team found that under certain circumstances subatomic particles such as electrons appear to influence each other instantaneously, regardless of the distance separating them.
Holographic Universe
Quantum Coherence and Quantum Entanglement phenomena have inspired some physicists to offer ever more radical explanations, including that of the holographic universe! The implications of a holographic universe are truly mind boggling... On this view, Aspect’s findings imply that objective reality does not exist: that despite its apparent solidity the universe is at heart a phantasm, a gigantic and splendidly detailed hologram. To understand why a number of physicists, including David Bohm, made this startling assertion, one must first understand a little about holograms.
Hologram
A hologram is a three-dimensional photograph made with the aid of a laser. To make a hologram, the object to be photographed is first bathed in the light of a laser beam. Then a second laser beam is bounced off the reflected light of the first and the resulting interference pattern -- the area where the two laser beams superimpose -- is captured on film. When the film is developed, it looks like a meaningless swirl of light and dark lines. But as soon as the developed film is illuminated by another laser beam, a three-dimensional image of the original object appears!
Fractals in Nature and Mathematics
The three-dimensionality of such images is not the only remarkable characteristic of holograms. If a hologram of a rose is cut in half and then illuminated by a laser, each half is still found to contain the entire image of the rose. Indeed, even if the halves are divided again, each snippet of film is always found to contain a smaller but intact version of the original image. Unlike normal photographs, every part of a hologram contains all the information possessed by the whole! This is reminiscent of fractals in nature and mathematics.
Holistic in Every Part
The "whole in every part" nature of a hologram provides us with an entirely new way of understanding organisation and order. For most of its history, Western science has laboured under the bias that the best way to understand a physical phenomenon, whether a frog or an atom or a national economy, is to dissect it and to study its respective components. A hologram teaches us that some things in the universe may be understood only as integrated holistic systems. If we try to take apart something constructed holographically, we will not get the pieces of which it is made. We will only get smaller wholes or less evolved, less detailed, incomplete miniatures of the whole picture.
Unity Consciousness: Extensions of the Same Source
This insight suggested to some scientists, including David Bohm, another way of understanding Aspect’s discovery. Bohm believed the reason subatomic particles are able to remain in contact with one another regardless of the distance separating them is not because they are sending some sort of mysterious signal back and forth, but because their separateness is, in fact, an illusion. Bohm suggested that at some deeper level of reality such particles are not individual entities, but are actually system components of the same fundamental something!
Investing in Future Technologies
The same principles of Quantum Entanglement, Resonance and Coherence apply to other fields, such as telecommunications, computing and energy. Imagine communication devices that need no cables or even a wireless infrastructure. Imagine information being transported, as if by magic, over distances in a holistic, state-dependent way instead of bit by bit or in packets.
Highly Secure Communications
There are some real and amazing applications of Quantum Entanglement in the security world. It can be used to produce unbreakable encryption. If we send each half of a set of entangled pairs to either end of a communications link, then the randomly generated but linked properties can be used as a key to encrypt information. If anyone intercepts the information it will break the entanglement, and the communication can be stopped before the eavesdropper picks up any data.
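As a rough illustration of the two ideas in that paragraph (shared randomness at both ends of the link, and eavesdropping that reveals itself as errors), here is a heavily simplified, purely classical simulation in Python. It is a sketch in the spirit of entanglement-based key distribution, not an implementation of any real protocol; the sample size and error threshold are arbitrary illustrative choices.

```python
# A heavily simplified, classical sketch of entanglement-style key exchange:
# both ends share correlated random bits; an eavesdropper disturbs the
# correlations and is detected by publicly comparing a sample of the bits.
import random

def exchange_key(n_pairs, eavesdropper=False, seed=0):
    rng = random.Random(seed)
    alice, bob = [], []
    for _ in range(n_pairs):
        bit = rng.randint(0, 1)                  # the shared outcome of one "entangled pair"
        a, b = bit, bit
        if eavesdropper and rng.random() < 0.5:
            b ^= 1                               # interception breaks the correlation
        alice.append(a)
        bob.append(b)
    sample = set(rng.sample(range(n_pairs), n_pairs // 4))
    errors = sum(alice[i] != bob[i] for i in sample)
    if errors / len(sample) > 0.05:              # arbitrary illustrative threshold
        return None                              # abort: someone is listening
    return [alice[i] for i in range(n_pairs) if i not in sample]

print("No eavesdropper:", exchange_key(64) is not None)                     # True: key established
print("Eavesdropper:   ", exchange_key(64, eavesdropper=True) is not None)  # almost always False
```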
Infinity Manifest
More than two thousand years ago the Roman philosopher Cicero observed, “Omnia vivunt, omnia inter se conexa!” (“Everything is alive; everything is interconnected!”). We are beginning to see the entire universe as a holographically interlinked network of energy and information, organically whole and undergoing rapid evolution. The “point” of the Singularity is reached when scientific and technological innovation trends appear to go out of control at the human level, i.e., when they have moved beyond our event horizon and we can no longer follow any previous linear logic or understanding to comprehend their combined effects. At that point technological change becomes effectively instantaneous and omnipresent, and it defines the Quantum Age. As we initiate, establish and activate the new “International Quantum: Exchange for Innovation Club” or “IQ:EI Club” within the aegis of Quantum Innovation Labs (QiLabs.net), we aim to be at the forefront of nurturing Quantum Technology ideas, inventions and innovations.

Quantum Code Breakers: How Prepared Is the IT Industry? How Close Are We to Quantum Cryptography? 04-18


From China to the USA and from Russia to Europe, the race is on to construct the first Quantum Code Breaker, as the winner will hold the key to the entire Internet. From trans-national multibillion-dollar financial transactions to top-secret government and military communications, all would be vulnerable to the secret-code-breaking ability of a Quantum Computer that can run Shor’s quantum factoring algorithm. Quantum computers that can implement this new mathematics could quickly break the most sophisticated encryption codes protecting Internet-based secure information, banking and payment transactions.
Given powerful governments’ investment in quantum information science, the race to build the world’s first Quantum Computer for universal code-breaking continues to heat up. What do the major governments seek? A Quantum Computer capable of solving complex mathematical problems and breaking the public-key encryption codes used to secure the Internet: a universal 21st-century Bletchley Park, if you like, for the Internet’s Enigma-style encryption. The key is Shor’s quantum factoring algorithm, which could be used to unveil the encrypted communications of the entire Internet if a Quantum Computer large enough to run it were ever built.
Most of our personal data is protected by complex encryption systems such as the widely used RSA algorithm, but these systems may have to change owing to an unexpected threat from quantum physics. Chaoyang Lu at the University of Science and Technology of China in Hefei and co-workers have already demonstrated a small-scale photonic Quantum Computer that runs Shor’s algorithm, a proof of principle for a machine that could crack RSA codes; breaking such codes would take hundreds of years on current supercomputers.
In theory, the huge prime number ‘keys’ hidden by RSA can be found using a routine called Shor’s algorithm. However, Shor’s algorithm requires many calculations to be performed at the same time, which is only possible with Quantum Computers that use Quantum Bits (QuBits), which can exist in a superposition of multiple logical states and can be entangled with one another. Schrödinger’s cat and the notion of Quantum Entanglement are at the heart of it all. Quantum Entanglement has to be viewed in the historical context of Einstein’s 30-year debate with the physics community over the true meaning of Quantum Theory.
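To see why factoring is the crux, consider a toy, completely insecure RSA example (a sketch only; real keys use primes hundreds of digits long). Anyone who can factor the public modulus can reconstruct the private key, and that factoring step is precisely what Shor’s algorithm would accelerate on a large enough Quantum Computer.

```python
# A toy, insecure RSA example showing why factoring the modulus breaks the code.
p, q = 61, 53                      # the secret primes (tiny here; enormous in practice)
n = p * q                          # public modulus
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent: requires knowledge of p and q

message = 42
ciphertext = pow(message, e, n)            # anyone can encrypt with the public key (e, n)
assert pow(ciphertext, d, n) == message    # only the private-key holder can decrypt

# An attacker who factors n (trivial for 3233, astronomically hard for real key
# sizes, but fast for a large quantum computer running Shor's algorithm)
# recovers the private exponent immediately:
p_found = next(f for f in range(2, n) if n % f == 0)
q_found = n // p_found
d_attack = pow(e, -1, (p_found - 1) * (q_found - 1))
assert pow(ciphertext, d_attack, n) == message
print("Recovered private exponent:", d_attack == d)
```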
At more or less the same time, an almost identical experiment was performed independently at the University of Queensland in Australia, suggesting that the technique is robust. By learning to manipulate more QuBits, researchers could eventually factor larger numbers and explore an entirely new realm of mathematics. Philosophically, what are the ramifications of quantum technologies? Other key products and applications include quantum physics simulators, synchronised clocks, quantum search engines, quantum sensors and imaging devices.
What is the remedy to the threat posed by the Quantum code breaker? Quantum cryptography, which is unbreakable even by the Quantum Computer! 

Data science demands elastic infrastructure 04-19





Those companies that try to run big data projects in data centers may be setting themselves up for failure. Matt Asay explains. 

As companies struggle to make sense of their increasingly big data, they're laboring to figure out the morass of technologies necessary to become successful. However, many will remain stymied, because they keep trying to fit a necessarily fluid process of asking questions of one's data into outmoded, rigid data infrastructure.
Or as Amazon Web Services (AWS) data science chief Matt Wood tells it, they need the cloud.
While the cloud isn't a panacea, its elasticity may well prove to be the essential ingredient to big data success.

How much cloud do I need?

The problem with trying to run big data projects within a data center revolves around rigidity. As Matt Wood told me in a recent interview, this problem "is not so much about absolute scale of data but rather relative scale of data."
In other words, as a company's data volume takes a step-function jump up or down, enterprise infrastructure can't keep up. In his words, "Customers will tool for the scale they're currently experiencing," which is great... until it's not.
In a separate conversation, he elaborates:
"Those that go out and buy expensive infrastructure find that the problem scope and domain shift really quickly. By the time they get around to answering the original question, the business has moved on. You need an environment that is flexible and allows you to quickly respond to changing big data requirements. Your resource mix is continually evolving--if you buy infrastructure, it's almost immediately irrelevant to your business because it's frozen in time. It's solving a problem you may not have or care about any more."
Success in big data depends upon iteration, upon experimentation as you try to figure out the right questions to ask and the best way to answer them. This is hard when dealing with a calcified infrastructure.
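As a concrete (and hypothetical) illustration of that elasticity, the sketch below sizes a transient Spark cluster to the current data volume using the AWS EMR API via boto3, lets it run to completion, and then lets it disappear. The sizing heuristic, names and instance types are assumptions for illustration, not recommendations.

```python
# Hypothetical sketch: provision a transient Spark cluster sized to the current
# data volume with the AWS EMR API (boto3), instead of buying fixed hardware.
# Assumes AWS credentials and the default EMR service roles already exist.
import boto3

def launch_transient_cluster(input_gb, region="us-east-1"):
    core_nodes = max(2, input_gb // 250)      # illustrative heuristic: ~250 GB per core node
    emr = boto3.client("emr", region_name=region)
    response = emr.run_job_flow(
        Name="weekly-analytics",
        ReleaseLabel="emr-6.10.0",
        Applications=[{"Name": "Spark"}],
        Instances={
            "MasterInstanceType": "m5.xlarge",
            "SlaveInstanceType": "m5.xlarge",
            "InstanceCount": 1 + core_nodes,
            "KeepJobFlowAliveWhenNoSteps": False,   # tear down when the work is done
        },
        JobFlowRole="EMR_EC2_DefaultRole",
        ServiceRole="EMR_DefaultRole",
    )
    return response["JobFlowId"]

# Next month the data may be 10x bigger or smaller; only the argument changes:
# cluster_id = launch_transient_cluster(input_gb=2000)
```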

A eulogy for the data center?

Of course, it's not quite so simple as "all cloud, all the time."
Data, it would seem, has to obey fundamental laws of gravity, as Basho CTO Dave McCrory told TechRepublic in an interview:
"Big data workloads will live in large data centers where they are most advantaged. Why will they live in specific places? Because data attracts data.
"If I already have a large quantity of data in a specific cloud, I'm going to be inclined to store additional quantities of large data in the same place. As I do this and add workloads that interact with this data, more data will be created."
Over time, enterprises will look to the public cloud for all the reasons Wood describes, but legacy data is unlikely to make the migration. There's simply no reason to try to house old data in new infrastructure. Not most of the time.
But some companies will find that they're more comfortable with existing data centers and will eschew the cloud. I'm not talking about hidebound enterprise curmudgeons that shout "Phooey!" every time AWS is mentioned, either. No, sometimes the most data center-centric of companies will be innovators like Etsy.
As Etsy CTO Kellan Elliott-McCrea informed TechRepublic, once Etsy had "gained confidence" in its ability to manage its Hadoop clusters (and other technology), it brought them in-house, netting a 10X increase in utilization and "very real cost savings."
Nor is Etsy alone. Other new-school web companies like Twitter have opted to run their own data centers, finding that this gives them greater control over their data.

You're no Twitter

As highly as you may estimate your abilities, the reality is that you're probably not an Etsy, Twitter, or Google. As painful as it is to say it, most of us are average. By definition.
This was Microsoft's great genius: rather than cater to the Übermensch of IT, Microsoft lowered the bar to becoming productive as a system administrator, developer, and so on. In the process, Microsoft banked billions in profits while helping make a good sysadmin better and a decent developer good.
Regardless, all enterprises need to establish infrastructure that helps them to iterate. Some, like Etsy, may have figured out how to do this in their data centers--but for most of us, most of the time, Wood's advice rings true: "You need an environment that is flexible and allows you to quickly respond to changing big data requirements."
In other words, odds are that you're going to need the cloud.

The Science of Resilience 03-29



WHY SOME CHILDREN CAN THRIVE DESPITE ADVERSITY


When confronted with the fallout of childhood trauma, why do some children adapt and overcome, while others bear lifelong scars that flatten their potential? A growing body of evidence points to one common answer: Every child who winds up doing well has had at least one stable and committed relationship with a supportive adult.


The power of that one strong adult relationship is a key ingredient in resilience — a positive, adaptive response in the face of significant adversity — according to a new report from the National Scientific Council on the Developing Child, a multidisciplinary collaboration chaired by Harvard’s Jack Shonkoff. Understanding the centrality of that relationship, as well as other emerging findings about the science of resilience, gives policymakers a key lever to assess whether current programs designed to help disadvantaged kids are working.


“Resilience depends on supportive, responsive relationships and mastering a set of capabilities that can help us respond and adapt to adversity in healthy ways,” says Shonkoff, director of the Center on the Developing Child at Harvard. “It’s those capacities and relationships that can turn toxic stress into tolerable stress.”


As a growing body of research is showing, the developing brain relies upon the consistent “serve and return” interactions that happen between a young child and a primary caregiver, the report says. When these interactions occur regularly, they provide the scaffolding that helps build “key capacities — such as the ability to plan, monitor, and regulate behavior, and adapt to changing circumstances — that enable children to respond to adversity and to thrive,” the report continues. The developing brain is buffered by this feedback loop between biology and environment.


But in the absence of these responsive relationships, the brain’s architecture doesn’t develop optimally. The body perceives the absence as a threat and activates a stress response that — when prolonged — leads to physiological changes that affect the brain and overall systems of physical and mental health. The stress becomes toxic, making it more difficult for children to adapt or rebound.


The experiences of the subset of children who overcome adversity and end up with unexpectedly positive life outcomes are helping to fuel a new understanding of the nature of resilience — and what can be done to build it.


Here’s what the science of resilience is telling us, according to the council’s report:


  • Resilience is born from the interplay between internal disposition and external experience. It derives from supportive relationships, adaptive capacities, and positive experiences.
  • We can see and measure resilience in terms of how kids’ brains, immune systems, and genes all respond to stressful experiences.
  • There is a common set of characteristics that predispose children to positive outcomes in the face of adversity:
    • The availability of at least one stable, caring, and supportive relationship between a child and an adult caregiver.
    • A sense of mastery over life circumstances.
    • Strong executive function and self-regulation skills.
    • The supportive context of affirming faith or cultural traditions.
  • Learning to cope with manageable threats to our physical and social well-being is critical for the development of resilience.
  • Some children demonstrate greater sensitivity to both negative and positive experiences.
  • Resilience can be situation-specific.
  • Positive and negative experiences over time continue to influence a child’s mental and physical development. Resilience can be built; it’s not an innate trait or a resource that can be used up.
  • People’s response to stressful experiences varies dramatically, but extreme adversity nearly always generates serious problems that require treatment.