How does play affect the cognitive development of children around the world? Doctoral candidate Lynneth Solis is looking to find out.

Play is an integral part of a child’s development according to traditional research mostly conducted in Western societies, but what role does play serve, and what does it look like, for children in indigenous communities? Ed.D. candidate Lynneth Solis, Ed.M.’10, is determined to better answer that question. Solis’ research focuses on children’s cognitive development: specifically, how young children play with each other and with objects to understand and build theories about the world around them, and how this is shaped by their cultural context.

After completing her master's in the Mind, Brain, and Education Program in 2010, Solis spent the following year as a research assistant with Project Zero, working alongside Principal Research Scientist in Education Tina Grotzer on children’s understanding of complex causal patterns in science. It was through her work at PZ that Solis expanded her own research into children’s cognitive development and how children use objects to understand complex scientific phenomena such as friction, balance, and other physical concepts. This in turn led her to study play in cross-cultural settings.

In 2015, Solis — the recipient of a Frederick Sheldon Traveling Fellowship through the Committee on General Scholarships at Harvard — spent a year working with and observing children in the Sierra Nevada de Santa Marta in Colombia. (The Kogi settlement in which she worked is pictured below.) What she found, contrary to prior research, was that indigenous children were engaged in complex forms of play that hadn’t been documented before. “In the past, researchers reported that indigenous children didn’t play in complex ways, but I found them pretending and involved in object play and construction similar to the way children in the West play,” Solis says.
“But I also found ways children in these communities would spend time together socializing, enjoying each other’s company without outwardly appearing very playful at first glance, but involved in unstructured and positive experiences that we could call play.”

After interviewing parents, Solis discovered even more variation from the narrative older research told. While some parents felt play was not part of their culture, others believed play helped prepare children for the future inside and outside their community. “They saw play as a way for children to feel more confident speaking, expressing themselves, and interacting with adults and other children within and outside their culture,” says Solis. “They also expressed that they believed play could help their children to explore their environment, learn, and develop creativity.”

Grotzer says that Solis’ important ethnographic work is helping the field of children’s cognitive development better understand the diversity of play and its role in cultures outside the West, and that understanding what parents and community members believe about play is crucial for designing educational interventions. “Lynneth is able to set aside her own cultural assumptions to really ‘see’ what is going on and to interpret it through the eyes of the cultures that she is studying,” Grotzer says. “The stories of her work convey the extraordinary ability that Lynneth has to ‘become one’ with the children such that they have included her in their hidden worlds and have given us a rare window into their childhoods.”

It was critical to her research in Colombia, Solis says, to be trusted by children so that they would invite her into their play spaces. “When they said, ‘Let’s go to the river to swim,’ I went to the river, and when they went on crab hunts at the beach at night, I ran along the ocean with them,” she says.
“This meant refraining from acting like an authority figure, so I became a curious companion who happened to always be writing notes and documenting their activities.”

Solis has also been working closely with the Lego Foundation, in partnership with the Center on the Developing Child at Harvard University, to better understand the science of play to support learning, which she says is an important next step for the field. “There’s a call for systematic research on play,” Solis says. “Up to this point there’s been a lot of correlational research, but now methodological advances get us closer to understanding the developmental mechanisms involved in play from a biological and neurological perspective. I feel like I’m on the beginning journey of that and it feels exciting.”

Solis’ work has not gone unrecognized. During her time at HGSE as a master's candidate, she was named an Intellectual Contribution Award winner, and last year she was a Julius B. Richmond Fellow at the Center on the Developing Child and an AERA Minority Dissertation Fellow.

While she is set to graduate next year, Solis plans to continue her research to expand understanding of child development. “I’m very interested in indigenous communities here in the States and abroad and expanding the stories we tell in child development,” Solis says. “It’s important to tell stories of diverse populations to really understand child development, and the more varied stories we tell, the more we understand.”
Researchers at the University of Pittsburgh have uncovered the mechanism by which neurons keep up with the demands of repeatedly sending signals to other neurons. The new findings, made in fruit flies and mice, challenge the existing dogma about how neurons that release the chemical signal dopamine communicate, and may have important implications for many dopamine-related diseases, including schizophrenia, Parkinson’s disease and addiction.
The research conducted at Pitt and Columbia University was published online today in the journal Neuron.
Neurons communicate with one another by releasing chemicals called neurotransmitters, such as dopamine and glutamate, into the small space between two neurons that is known as a synapse. Inside neurons, neurotransmitters awaiting release are housed in small sacs called synaptic vesicles.
“Our findings demonstrate, for the first time, that neurons can change how much dopamine they release as a function of their overall activity. When this mechanism doesn’t work properly, it could lead to profound effects on health,” explained the study’s senior author Zachary Freyberg, M.D., Ph.D., who recently joined Pitt as an assistant professor of psychiatry and cell biology. Freyberg initiated the research while at Columbia University.
When the researchers triggered the dopamine neurons to fire, the neurons’ vesicles began to release dopamine as expected. But then the team noticed something surprising: additional content was loaded into the vesicles before they had the opportunity to empty. Subsequent experiments showed that this activity-induced vesicle loading was due to an increase in acidity levels inside the vesicles.
“Our findings were completely unexpected,” said Freyberg. “They contradict the existing dogma that a finite amount of chemical signal is loaded into a vesicle at any given time, and that vesicle acidity is fixed.”
The team then demonstrated that the increase in acidity was driven by a transport channel on the vesicle surface, which allowed an influx of negatively charged glutamate ions, thus increasing vesicle acidity. Genetically removing the transporter in fruit flies and mice made the animals less responsive to amphetamine, a drug that exerts its effect by stimulating dopamine release from neurons.
“In this case, glutamate is not acting as a neurotransmitter. Instead it is functioning primarily as a source of negative charge, which is being used by these vesicles in a really clever way to manipulate vesicle acidity and therefore change their dopamine content,” Freyberg said. “This calls into question the whole textbook model of vesicles as having fixed amounts of single neurotransmitters. It appears that these vesicles contain both dopamine and glutamate, and dynamically modify their content to match the conditions of the cell as needed.”
In the future, the team plans to look more closely at how increases in vesicle acidification affect health. A number of brain diseases are characterized by abnormal dopamine neuron signaling and altered levels of the neurotransmitter.
“Since we have demonstrated that the balance between glutamate and dopamine is important for controlling the amount of dopamine that a neuron releases, it stands to reason that an imbalance between the two neurotransmitters could be contributing to symptoms in these diseases,” said Freyberg.
This article has been republished from materials provided by UPMC. Note: material may have been edited for length and content. For further information, please contact the cited source.
As he did after the birth of his first child in 2015, Facebook CEO Mark Zuckerberg is taking two months’ paternity leave from the company when his second daughter arrives. This time, Zuckerberg will break up the leave, spending one month at home with his children and wife Priscilla Chan right after the baby’s birth and taking the rest of the leave in December.
Zuckerberg is using only half of the four months of paid parental leave that Facebook allots male and female employees. It’s still far more time than the typical father takes off work for the birth of a child in the US, where only 15% of companies in a national survey last year offered paid paternity leave. The lack of paid leave for men hurts parents who want to share the experience of caring for their babies, and contributes to the persistent lag in women’s wages and workforce participation. Fully paid paternity leave is key to breaking a vicious cycle in which employers pay women less and bypass them for promotions in anticipation that they’ll take time off to raise children, making the lower-earning female partner the natural choice to take unpaid or partially paid leave that’s ostensibly offered to both parents.
As Quartz’s Gwynn Guilford pointed out in a 2014 analysis of parental leave policies in Sweden and Japan, the more parental leave men take, the sooner women go back to work. A 2010 study in Sweden found that a woman’s future earnings rose 7% for every month her partner took under the country’s paid parental leave system, which incentivizes both parents to take time off. Sweden has one of the world’s highest rates of working women, and a nearly non-existent wage gap.
But it’s not enough for companies to offer paternity leave. Men have to actually take it, and this is where Zuckerberg’s decision to make his family plans public is significant. In a 2014 survey by the Working Mother Research Institute, men reported a significant gap between the availability of family-friendly, flexible working policies and the degree to which they were encouraged to take them. Those who did feel supported by their employers reported more satisfaction with the company, their careers, and their home lives.
“At Facebook, we offer four months of maternity and paternity leave because studies show that when working parents take time to be with their newborns, it’s good for the entire family,” Zuckerberg wrote in a Facebook post. “And I’m pretty sure the office will still be standing when I get back.” Meanwhile, there’s no better way to encourage employee behavior than to lead by example.
What would happen if you were to arrive to your classroom, unplug the devices, turn off the projector, and step away from the PowerPoint slides … just for the day?
What would you and your students do in class?
This was the challenge I presented to 100 faculty members who attended my session at the Teaching Professor Conference in St. Louis this past June. The title of the session was, “Using ‘Unplugged’ Flipped Learning Activities to Engage Students.” Our mission was to get “back to the basics” and share strategies to engage students without using technology.

Why Use “Unplugged” Strategies in the Flipped Classroom?

Most of the conversations about the flipped classroom include discussions about technological tools. What video recording tool should I use? What tools are best for producing a podcast? What quizzing tool should I use to assess the pre-class work? What types of clickers should I use in class to assess learning? With all of this focus on technology, why would we want to consider flipping a class without it? Here are three reasons:
1. To focus on the process. For many faculty, the “flip” means something more than how technology is used in and out of the classroom. In my work, for example, the FLIP is when you “Focus on your Learners by Involving them in the Process.” When you FLIP, you intentionally invert the design of a learning environment so students engage in activities, apply concepts, and focus on higher-level learning outcomes during class time.
This definition encourages us to think strategically about the learning experiences we are designing with our students so they can achieve the learning outcomes. The focus is not the technology. It’s the process. It’s the process of involving our students in applying and analyzing course content, making decisions, critiquing a topic, or evaluating a data set. It’s the process of creating something together to demonstrate understanding or to express ideas. Sometimes technology can help with this process, but sometimes it can become a distraction which could hinder the process.
2. To improve learning and retention. Scholars continue to analyze the pros and cons of technology in the classroom and its impact on student learning, retention, and engagement. For almost every study published on the advantages of technology, you can find another study on the disadvantages. Ultimately, the learning outcomes should inform how (or if) technology is used in the classroom.
It is worth considering the findings of a recent study showing that taking notes by hand, rather than on a laptop, increases conceptual thinking (Mueller & Oppenheimer, 2014). And, for those of us who use slides and videos in our flipped classrooms, it’s important to note that the combination of images, text, videos, and our voice can be too overwhelming for some students, especially when they are introduced to new information. Using “unplugged” strategies in some of your lessons can reduce the cognitive load and help students remember what they’ve learned (Madda, 2015).
3. To enhance creativity and encourage real connections.
When I use unplugged strategies in my teaching, my students often say it’s “refreshing” to do something different. They often comment on how “tired” they are of slides and online discussion forums. When they disconnect from the devices, they tell me it helps them think of new ideas, and they appreciate the opportunity to connect with other students who are not distracted by a screen. Likewise, when I use unplugged strategies in faculty development workshops, faculty often say that they appreciate the opportunity to put away their phones and laptops so they can make real connections with their colleagues (and get away from feeling obligated to answer emails!).

Flipping Faculty Development: Unplugged!

Speaking of faculty development, in my conference session, I wanted to engage faculty in the process of using “unplugged” tools in class to engage and involve students. To demonstrate how it can be done, and with the goal of “practicing what we preach” when it comes to faculty development, I flipped the session and placed faculty in the center of the learning experience.
The faculty participated in activities using five different “unplugged” tools: sticky notes, index cards, dice, a deck of cards, and poster paper. Each group was asked to analyze a case study using each of the tools and then brainstorm different ways these tech-free tools can be used in the classroom to increase student engagement and improve learning. I told workshop participants I would share their work in a Faculty Focus article and on my blog. As promised, here are some of their ideas:
Goal: To encourage students to ask more questions during class time.
Unplugged Tool: Dice
Strategy: Faculty member rolls the dice and the number rolled is the number of questions the students in the class must ask before class is dismissed.
Goal: To encourage students to analyze and prioritize information.
Unplugged Tool: Index cards
Strategy: Give students a case study. Ask them to individually decide which piece of information in the case is most important and write that information on an index card. Put students in groups and ask them to discuss and prioritize the cards from most to least important. Integrate their ideas into a class discussion.
Goal: To help students put information in the correct order.
Unplugged Tool: Sticky notes
Strategy: Give students a stack of sticky notes. Ask them to write each step of a procedure or process on a separate note and place the notes in the correct order from first step to last. Other groups can critique and change the order if needed. Use for class discussion.
For more unplugged teaching strategies created by the participants and to see the case studies, view my post on 25 Unplugged Strategies.
An important part of faculty development is to share ideas so we can all learn from each other. So, let’s keep the conversation going! What “unplugged” tools or strategies have you used in class to engage students and improve learning?

Restoring pedagogical sanity ... and I would encourage others to keep it going.
Shyam's take on The Process vs The Product... The ultimate aim of any product or service should be to create that MOJO effect for the customer, a feeling that says, "Why didn't someone come out with this earlier?" But you can't do this without a process, and I am not referring to the processes required to create a product; those are, of course, inevitable. The confidence to create a MOJO effect stems from the quality of the product. You still need a process to create the MOJO effect, however, and there is no standard procedure for it. Each time, you need to craft a fresh process that sets new standards in customer engagement. You need a promotional process that creates a benchmark in customers' minds for the quality, utility, and cost-effectiveness of the product, and that also elevates the customers' esteem for the organization and its other products. The process should make the product strong enough in customer perception that other equally good, but poorly promoted, products of the organization can piggyback on the new esteem acquired through this product.
Now the article...
We work in an industry that glorifies the process of everything we build.
We are obsessed with roadmaps, methodologies, acronyms, and productivity tools. This shared language gives us a sense of belonging and makes us feel like we are in fact product people. Process is important in life. And we should seek to build high quality products with focus and efficiency. I would never argue against that.
Product Managers and Product Designers glorify our process for building products on Twitter, podcasts, and at conferences — but we should remember something important. Customers don’t actually care about how we build our products, or our process. Customers do not care about our Slack debates, our Jira tickets, or the compromises we make to our roadmap along the way.
Customers only care about how our products feel in their hands, and nothing else. That moment when customers first open our apps and escape their reality for a few minutes as they discover something new — that is the moment that matters if you build products. We need to focus less on glorifying our process for everything we ‘product people’ do and focus more on what actually matters.
What matters is that we build products that make customers smile and realize something new about themselves.
What matters is that we build products that make life more fulfilling for billions of people by giving them access to education and new ideas.
What matters is that we build products which make customers say: “I honestly can’t imagine my life before I downloaded that app.”
We Product Managers spend our days writing specs, filing Jira tickets, and editing roadmaps. Process. Process. Process.
What else could we have created instead of that perfect Gantt chart?
Are there easy wins in our production apps that we could have discovered instead of seeking the perfect process?
Can we think less about our perfect Product Manager workflows and think more about simplifying our products?
Our job is to delight customers through the products we build. Process is important, but do not forget that customers judge us on one thing: how the product feels in their hands.
Initial results reveal more than 760 genetic dependencies across multiple cancers
In one of the largest efforts to build a comprehensive catalog of genetic vulnerabilities in cancer, researchers from the Broad Institute of MIT and Harvard and Dana-Farber Cancer Institute have identified more than 760 genes upon which multiple types of cancer cells are strongly dependent for their growth and survival.
Many of these “dependencies,” the researchers report today in the journal Cell, are specific to certain cancer types. However, about 10 percent of them are common across multiple cancers, suggesting that a relatively small number of therapies targeting these core dependencies might each hold promise for combating several tumors.
To generate these findings, the research team conducted genome-wide RNA interference (RNAi) screens on 501 cell lines representing more than 20 types of cancer, silencing more than 17,000 genes individually in each line to identify genetic dependencies unique to cancerous cells.
Cancer cells can harbor a broad variety of genetic errors, from small mutations to wholesale swaps of DNA between chromosomes. If an error shuts down a critical gene, a cancerous cell will compensate by adjusting other genes’ activity, frequently developing a dependence on such adaptations in order to persist.
Identifying these dependencies provides opportunities for scientists to gain deeper insight into cancer biology and determine new therapeutic targets.
“Much of what has been and continues to be done to characterize cancer has been based on genetics and sequencing. That’s given us the parts list,” said study co-senior author William Hahn, an institute member in the Broad Cancer Program, chief of the Division of Molecular and Cellular Oncology at Dana-Farber, and a leader in the Cancer Dependency Map initiative, a joint effort spanning the Broad Institute and Dana-Farber. “Mapping dependencies ascribes function to the parts and shows you how to reverse-engineer the processes that underlie cancer.”
RNAi silences genes using small pieces of RNA called small interfering RNAs (siRNAs). To run a genome-wide RNAi screen, researchers expose cells to pools of siRNAs and track the cells’ behavior. “The simplest thing one can do with perturbed cells is allow them to keep growing over time and see which ones thrive,” explained study co-senior author David Root, an institute scientist and director of the Genetic Perturbation Platform at the Broad. “If cells with a certain gene silenced disappear, for example, it means that gene is essential for proliferation.”
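The dropout readout Root describes can be sketched as a toy calculation: compare each siRNA's abundance in the pool before and after the growth period, and flag genes whose silenced cells disappear. The gene names, siRNA labels, counts, and cutoff below are all invented for illustration; a real screen sequences thousands of siRNAs across hundreds of cell lines.

```python
import math

# Toy pooled-screen data: read counts per siRNA at the start and end of
# the growth period (all values invented for illustration).
initial = {"GENE_A_si1": 1000, "GENE_A_si2": 900,
           "GENE_B_si1": 1000, "GENE_B_si2": 1100}
final = {"GENE_A_si1": 60, "GENE_A_si2": 90,       # GENE_A cells dropped out
         "GENE_B_si1": 1050, "GENE_B_si2": 1000}   # GENE_B cells kept growing

def log2_fold_change(sirna):
    """Log2 ratio of final to initial read counts for one siRNA."""
    return math.log2(final[sirna] / initial[sirna])

# Average the siRNAs targeting each gene; a strongly negative score marks
# a gene the cells depend on for proliferation.
targets = {"GENE_A": ["GENE_A_si1", "GENE_A_si2"],
           "GENE_B": ["GENE_B_si1", "GENE_B_si2"]}
scores = {gene: sum(log2_fold_change(s) for s in sirnas) / len(sirnas)
          for gene, sirnas in targets.items()}

essential = [g for g, score in scores.items() if score < -2]  # arbitrary cutoff
print(scores, essential)
```

In this toy example only GENE_A is flagged: its siRNA-carrying cells are depleted more than fourfold, while GENE_B's abundance barely changes.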
The data revealed striking patterns in cancer cells’ dependencies. Many dependencies were cancer-specific, in that silencing each gene affected only a subset of the cell lines. However, more than 90 percent of the cell lines had a strong dependency on at least one of a set of 76 genes, suggesting that many cancers rely on relatively few genes and pathways.
Using a set of molecular features (e.g., mutations, gene copy numbers, expression patterns) from each cell line, the team also generated biomarker-based models that helped explain the biology behind 426 of the 769 dependencies. Most of those biomarkers fell into four broad categories:
Mutation(s) of a gene;
Loss of a copy or reduced expression of a gene;
Increased expression of a gene;
Reliance on a gene functionally or structurally related to another, lost gene (a.k.a., a paralog dependence).
Surprisingly, more than 80 percent of the dependencies with biomarkers were associated with changes (up or down) in a gene’s expression. Mutations, often used as the grounds for pursuing a gene as a drug target, accounted for merely 16 percent of biomarker-associated dependencies. Twenty percent of the dependencies the team discovered were associated with genes previously identified as potential drug targets.
“We can’t say we’ve found everything, but we can say that the genes we’re seeing fall into a relatively small number of bins, some of which are familiar, some less so,” Hahn said. “That initial taxonomy is a great starting point for building a full map.”
“Our results provide a starting point for therapeutic projects to decide where to focus their efforts,” said study co-first author Francisca Vazquez, a Cancer Dependency Map project leader. She added that while there was still much to do to validate the list, “It’s becoming increasingly easier to triangulate data and generate hypotheses as more genome-scale systematic data sets, like those from the Cancer Cell Line Encyclopedia, Genotype-Tissue Expression, and the Cancer Genome Atlas projects, become available.
“Bringing all of the data together will help us generate a truly comprehensive cancer dependency map.”
To eliminate false-positive results caused by seed effects — a phenomenon by which siRNAs inadvertently silence irrelevant genes — study co-first author Aviad Tsherniak led the development of a novel computational tool dubbed DEMETER.
“People sometimes take a dim view of RNAi because seed effects make the data so noisy,” said Tsherniak, leader of the Broad Cancer Program’s Data Science group. “DEMETER models gene knockdown and seed effects within the data, and computationally subtracts the seed effects. It cleans up the data and helps you find true dependencies.”
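As a rough illustration of the idea behind DEMETER, not its actual algorithm, one can model each siRNA's observed depletion score as the sum of an on-target gene effect and a shared seed effect, then estimate the two by alternating averages. All genes, seed sequences, and numbers below are invented; DEMETER itself fits a richer model to real screening data.

```python
from collections import defaultdict

# Assumed additive model: observed score = gene effect + seed effect.
# Synthetic ground truth (invented for illustration).
gene_true = {"GENE_A": -2.0, "GENE_C": 0.0}            # GENE_A is essential
seed_true = {"AACCGGU": -1.0, "UUGGCCA": 0.0, "GGAAUUC": -0.5}

# Each siRNA targets one gene and carries one seed sequence; the AACCGGU
# seed is shared, so its off-target effect contaminates both genes.
sirnas = [("GENE_A", "AACCGGU"), ("GENE_A", "UUGGCCA"),
          ("GENE_C", "AACCGGU"), ("GENE_C", "GGAAUUC")]
observed = {(g, s): gene_true[g] + seed_true[s] for g, s in sirnas}

# Alternately re-estimate gene and seed effects (coordinate descent on the
# additive model) to subtract the seed contribution from each score.
gene_est, seed_est = defaultdict(float), defaultdict(float)
for _ in range(100):
    for g in gene_true:
        vals = [observed[(gg, s)] - seed_est[s] for gg, s in sirnas if gg == g]
        gene_est[g] = sum(vals) / len(vals)
    for s in seed_true:
        vals = [observed[(g, ss)] - gene_est[g] for g, ss in sirnas if ss == s]
        seed_est[s] = sum(vals) / len(vals)

# The fit is identifiable only up to a shared constant, but differences
# between gene effects are recovered after the seed effects are removed.
print(gene_est["GENE_A"] - gene_est["GENE_C"])
```

Here the estimated gap between GENE_A and GENE_C converges to the true two-unit difference, even though the raw scores of their shared-seed siRNAs are skewed by the same off-target effect.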
According to Hahn, the data argue that the time is ripe to pay more attention to the broader landscape of functional aspects of cancer, in addition to focusing on protein-coding gene mutations and variations.
“I think we’re close to the end of finding genes that are mutated or focally amplified in cancer,” he said. “To me, that’s a huge opportunity, because it means we have many heretofore untapped avenues for understanding cancer.”
If you think process improvement only works on the factory floor, think again
If you’ve attempted to apply process improvement techniques in your workplace, only to see it fail, you’re not alone. Attempts to drive more order and productivity in the office often use process-focused improvement techniques such as Lean or the Toyota Production System that were originally developed to improve the physical, highly repetitive work found in factories. Unfortunately, such interventions are often resisted and rarely produce significant gains.
“When we work with doctors, lawyers, finance professionals, engineers, and designers, we often hear statements like, ‘my work is different from what happens in a factory, you can’t standardize it,’” says Don Kieffer, Senior Lecturer in Operations Management at MIT and Managing Partner of ShiftGear Work Design. “Intellectual work is different. But contrary to the argument that process improvement ‘only works in the factory,’ my experience is that, when properly applied, the concepts and principles underpinning Toyota and Lean methods produce more powerful results and far more quickly in the office.”
While engineering, finance, or IT can’t be standardized in the same way as a production operation, Kieffer believes it can be understood and dramatically improved through Dynamic Work Design. “ShiftGear’s approach is to understand the work using principles of work design, propose tools and methods that fit that work, and engage the people doing the work to quickly discover the next best way, not impose it upon them from a different kind of work or organization.”
In their MIT Sloan Executive Education program, Implementing Improvement Strategies: Dynamic Work Design, Kieffer and ShiftGear partners Nelson Repenning (an MIT Sloan professor) and Sheila Dodge present practical tools and methods for sustainable improvement efforts of any scale, in any industry, and in any function. Their method has proven to work in businesses as diverse as oil and gas, DNA sequencing, and engineering, at the scale of discrete problems or enterprise-wide strategic efforts.
The four principles of Dynamic Work Design
More than 20 years of research, consulting, and collaboration have led Kieffer and his colleagues to four principles that, in their view, “raise the game” on how to design and improve work, no matter what type of work it may be:
Reconcile activity and intent: Get crystal clear on the vision, mission, and targets, and then optimize the set of activities needed to accomplish those targets. Work should be designed so every activity is tied together to meet the targets.
Connect the human chain through triggers and checks: Without pre-specified rules, people usually wait too long to ask for help, and leaders don’t check in frequently enough. Good work design incorporates triggers and checks that signal teams and managers to meet to resolve ambiguity and fix problems, often in real time.
Structured problem-solving and creativity: Structured problem-solving methods mitigate our inherent desire to jump to solutions and encourage us to do a proper analysis using both data and investigation.
Optimal Challenge: Systems don’t work if they have too little or too much work in them. Putting the right amount of stress into the work system helps surface problems while allowing enough time to resolve them.
Whether you have 50, 500, or 5,000 employees, these principles can help you connect all of them to the flow of work that meets your organization’s targets.
Reproduced from MIT Management Executive Education
The human brain has a region of cells responsible for linking sensory cues to actions and behaviors and cataloging the link as a memory. Cells that form these links have been deemed highly stable and fixed.
Now, the findings of a Harvard Medical School (HMS) study conducted in mice challenge that model, revealing that the neurons responsible for such tasks may be less stable, yet more flexible than previously believed.
The results, published Aug. 17 in the journal Cell, cast doubt on the traditional notion that memory formation involves hardwiring information into the brain in a fixed and highly stable pattern. Instead, the researchers say, their results point to a critical plasticity in neuronal networks: such plasticity lets networks incorporate new learning more easily, without forming new links to separate neurons every time, and, once a memory is no longer needed, lets neurons be reassigned to other important tasks.

“Our experiments point to far less stability in neurons that link sensory cues to action than we would have expected and suggest the presence of much more flexibility, and indeed a sort of neuronal efficiency,” said study senior author Chris Harvey, an assistant professor of neurobiology at HMS. “We believe this trade-off ensures the delicate balance between the ability to incorporate new information while preserving old memories.”
The Harvard Medical School study involved experiments with mice repeatedly running through a virtual maze over the course of a month. Analyzing images of brain activity in a brain region involved in navigational decision-making, the researchers noted that neurons did not stabilize into a pattern. Instead, the set of neurons forming the mice’s maze-running memories kept changing for the duration of the study. In fact, neurons kept switching roles in the memory pattern or left it altogether, only to be replaced by other neurons.
“Individual neurons tended to have streaks where they’d do the same thing for a few days, then switch,” Harvey said. “Over the course of weeks, we began to see shifts in the overall pattern of neurons.”
The experiments are part of the research team’s ongoing efforts to unravel the mysteries of memory formation and, specifically, how the brain captures external cues and behaviors to perform recurring tasks such as navigating a space using landmarks. Imagine a person driving a familiar route to the grocery store who sees the bank and turns right at that corner without even having to think about it consciously.
To mimic that process, mice in the study were trained to run down a virtual passage — a computer-generated maze displayed on large screens in front of a treadmill — and turn right if they were given a black cue or left if they were given a white cue. Researchers imaged hundreds of neurons in the part of the brain responsible for spatial decision-making as the mice were galloping down the virtual maze.
Once the navigational links were firmly established in the mice’s brains over the course of a few weeks, the researchers expected the activity of the neurons to look the same from day to day. During maze runs that occurred within 24 hours of each other that was, indeed, the case. Neurons that activated in response to the white cue could be distinguished from neurons that activated in response to the black cue. However, over the course of several weeks the line between cues in individual neurons blurred, and the memory pattern began to drift across neurons, the researchers observed. A neuron that had been associated with the black cue would lose its specialization and be replaced by another, or it might even become associated with the white cue. This came as a surprise to the researchers.
“We were so sure that the neurons would be doing the same thing every day that we designed the study expecting to use the stable pattern as a baseline,” said study first author Laura Driscoll, a graduate student in the Neurobiology Department. “After we realized the neurons were changing roles, we had to rethink parts of the study.”
The researchers tested how the pattern changed when they added shapes as a third cue while the mice were navigating the maze. After some reassignment of individual neurons as the mice learned the new cue, the researchers found very little change to the overall activity pattern. This finding supports the idea that neuronal networks that store memories stay flexible in order to incorporate new learning, the researchers say.
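One way to picture what the researchers describe, individual neurons switching roles while the overall pattern remains usable, is a toy simulation. This is an invented illustration, not the study's analysis; the population size, switching rate, and roles are made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

N_NEURONS = 100          # toy population size (invented)
DAYS = 28                # roughly the month-long experiment
SWITCHES_PER_DAY = 5     # neurons changing role per day (invented rate)
ROLES = ["black", "white", "none"]  # cue preference, or no tuning

# Day 0: every neuron gets a role at random.
roles = rng.choice(ROLES, size=N_NEURONS)
history = [roles.copy()]

for _ in range(DAYS):
    # A few neurons switch roles or drop out each day:
    # "streaks where they'd do the same thing for a few days, then switch."
    switchers = rng.choice(N_NEURONS, size=SWITCHES_PER_DAY, replace=False)
    roles[switchers] = rng.choice(ROLES, size=SWITCHES_PER_DAY)
    history.append(roles.copy())

def overlap(a, b):
    """Fraction of neurons holding the same role on both days."""
    return float(np.mean(a == b))

day_to_day = float(np.mean([overlap(history[d], history[d + 1])
                            for d in range(DAYS)]))
month_apart = overlap(history[0], history[DAYS])

print(f"mean day-to-day overlap: {day_to_day:.2f}")   # high: looks stable
print(f"day 0 vs day 28 overlap: {month_apart:.2f}")  # low: pattern drifted
```

Within any 24-hour window the assignment looks nearly fixed; across the month it drifts, even though the population as a whole still contains plenty of black- and white-tuned neurons, which is the sense in which the memory survives the turnover.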
The researchers hypothesize that neuronal stability may differ across various brain regions, likely depending on how often the skill or memory they encode needs to be modified. For a task like navigation, which frequently requires the brain to incorporate new information, it would make sense that the neurons remain flexible, Harvey said. However, more instinctual physical responses, such as blinking, may be hardwired with little neuronal drift over time.
The results provide a fascinating early glimpse into the complexities of memory formation, Driscoll said. To elucidate the big picture of memory formation and storage across brain regions, researchers say they hope to study other areas of the brain involved with different types of decision-making and memories.
“I hope this research inspires people to think of memory as something that is not static,” Harvey said. “Memories are active and integrally connected to the process of learning.”
The writer is a former prime minister of Australia and president of the Asia Society Policy Institute in New York.
It would be wrong to assume China would stand by if the peninsula fell into conflict.

In his acclaimed book The Sleepwalkers, Christopher Clark wrote about how the great powers of 1914 stumbled into a pan-European war that not only destroyed much of the continent, but unleashed destructive forces that defined the global order for much of the following century.
Some of us fear that we are sleepwalking again, blindly unaware of the abyss that lies ahead. As a Chinese friend reminded me recently, war has its own logic. So too do crises. History teaches us they are both hard to stop once they start.
The greatest global flash point today is the Korean peninsula. Most analysts regard crisis and conflict over the North Korean nuclear programme as improbable. They are right. But the uncomfortable truth is that it is now becoming more possible.
There are three basic scenarios. First, China either talks the North Koreans out of their nuclear programme through politics and diplomacy. Or it forces them out of it through financial and economic sanctions, which deliver real policy change in Pyongyang. Second, the US launches a unilateral military attack on North Korean nuclear facilities to destroy or at least degrade the programme. Or third, the US adjusts to the reality of a North Korea armed with nuclear-tipped intercontinental ballistic missiles and seeks to impose some sort of regime to manage them.
From the perspective of the Trump White House, scenario one is no longer seen as viable. While some see this as US posturing, there is a calculation in Washington that the Chinese are playing for time with the Americans on North Korea, and do not have any intention of doing what is necessary to bring about a fundamental change in Pyongyang’s behaviour. There is also an emerging US view that Beijing will continue to give the impression of taking action against Pyongyang, in order to forestall any risk of unilateral US action until such time as the Americans simply have to accept North Korea as a bona fide nuclear weapons state.
On the second scenario, it would be wrong to assume the US has ruled out a unilateral strike. While Japan and South Korea would oppose such action, their opposition will not be decisive in determining any final US decision. The problem is, China believes the US is bluffing. Beijing cannot comprehend the US position, because it believes America could not afford to ignore South Korean opposition to a unilateral strike, given the probable retaliation against Seoul. China also sees it as inconceivable that the US would risk shattering its security alliances with Seoul and Tokyo by acting without their consent.
As for the US accepting North Korea as just another member of the nuclear club, this does not sit well at all in Washington. North Korea is not regarded as a normal state, nor has it exhibited any interest in developing a transparent nuclear doctrine. Furthermore, it has taken to issuing repeated bellicose threats against the US. The domestic backlash against allowing North Korea to acquire its long-sought ICBM nuclear capability would be considerable, undermining Mr Trump’s concept of a “muscular” presidency.
It would also be wrong to assume that China would simply stand idly by if the peninsula degenerated into conflict. For deeply held strategic reasons, China chose not to sit out the Korean war in 1950, less than a year after the founding of Mao’s People’s Republic. Indeed it entered that war to prevent an American victory. Deep anxiety about the possibility of a US military presence on its north-eastern land border has been an abiding concern for Chinese security policy for over half a century. So we must also think about the risks of the North Korean crisis triggering a wider conflict between China and the US.
We have entered a new and dangerous period with a deeply unsettling trajectory. What then is to be done? First, Beijing needs to accept that the threat of a unilateral US strike is credible enough to warrant a change in Chinese diplomacy towards North Korea. Second, the US should be clear with Beijing about what is at stake here for China. If China succeeds in bringing about a cessation of North Korea’s nuclear programme and the destruction of its existing arsenal, the US would then accept the much discussed “grand bargain” for the peninsula, including a formal peace treaty with Pyongyang, diplomatic recognition by the US, guarantees for the regime’s future, the possible withdrawal of US forces from South Korea and the removal of sanctions.
Whether the US and China can find a creative diplomatic solution to this crisis is an open question — but one that must be answered now.
Over and over again, organizations are unable to appoint the right leaders. According to academic estimates, the baseline for effective corporate leadership is merely 30%, while in politics, approval ratings oscillate between 25% and 40%. In America, 75% of employees report that their direct line manager is the worst part of their job, and 65% would happily take a pay cut if they could replace their boss with someone better. A recent McKinsey report suggests that fewer than 30% of organizations are able to find the right C-suite leaders, and that newly appointed executives take too long to adapt.
Although there are many reasons for this bleak state of affairs – including over-reliance on intuition at the expense of scientifically valid selection tools – a common problem is organizations’ inability to predict whether leaders will fit in with their culture. Even when organizations are good at assessing leaders’ talents (e.g., their skills, expertise, and generic leadership capabilities), they forget that an essential element of effective leadership is the congruence between leaders’ values and those of the organization, including the leaders’ team. As a result, too many leaders are (correctly) hired on talent but subsequently fired due to poor culture fit.
In our view, organizations must do three things to fix these errors and upgrade their selection efforts:
Decode leaders’ motives and values: While expertise and experience are central to leaders’ potential, they are insufficient to predict leadership performance. In fact, even generic personality characteristics, such as integrity, people skills, curiosity, and self-awareness will fail to predict a leader’s fit to the role or organization. A proper understanding of fit must take into account the leader’s motives and values, also known as the “inside” of personality. Motives and values operate as an inner compass, dictating what the leader will like and reward, the type of culture and climate they will strive to create in their teams, and the activities they will see as meaningful and fulfilling.
For example, leaders who value tradition will have a strong sense of what is right and wrong, will prefer hierarchical organizations, and will have little tolerance of disruption and innovation – put them in a creative environment and they will struggle. On the other hand, leaders who value affiliation will have a strong desire to get along with others, will focus on building and maintaining strong interpersonal relations, and on working collaboratively. This means they will not be engaged if their role is too isolated and the company culture is overly individualistic. Finally, altruistic leaders will strive to improve other people’s lives and drive progress in the world, so they will suffer if their organizations are purely driven by profits and disinterested in having a positive social impact.
Understand their own organizational culture: Knowing a leader’s motives and values is pointless unless organizations are also able to decode their own culture. Sadly, most organizations underestimate the importance of accurately profiling their culture so they end up relying on intuitive and unrealistic ideas that say more about what they would like to be than what they actually are. This is why a large number of companies today describe themselves as “entrepreneurial,” “innovative,” “results-oriented,” or “diverse,” even when their own employees perceive a very different type of culture. Well-designed climate surveys, which crowdsource people’s views and experiences of the organizational culture, are a much better indicator of a company’s true values than the aspirational competencies curated by senior executives.
Be realistic about the new leader’s ability to actually change the culture: Although senior leaders are the main shapers of organizational culture, it is hard for newly appointed leaders to reshape the existing culture. That is not to say that organizations should give up and only hire leaders who are a good fit. In fact, moderate misfits who are charismatic and visionary are a company’s best bet for driving top-down change – but the process will be slow and tedious, and these leaders will need to have a great deal of support in order to persist and prevail. The odds of success will be slim, and some leaders may be so disruptive in their intentions that they may harm morale and productivity, or end up disrupting themselves. As Sartre noted, “only the guy who isn’t rowing has time to rock the boat.”
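If an organization does have value profiles for both a candidate and its own culture, for instance from the climate surveys mentioned above, the fit comparison can be made concrete. A minimal sketch, in which the value dimensions and ratings are entirely hypothetical:

```python
from math import sqrt

def cosine_fit(leader, org):
    """Cosine similarity between two value profiles (1.0 = identical priorities)."""
    dims = sorted(set(leader) | set(org))
    a = [leader.get(d, 0.0) for d in dims]
    b = [org.get(d, 0.0) for d in dims]
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical 1-5 ratings on a few value dimensions.
leader = {"tradition": 5, "affiliation": 2, "altruism": 1, "innovation": 1}
creative_org = {"tradition": 1, "affiliation": 3, "altruism": 3, "innovation": 5}
hierarchical_org = {"tradition": 5, "affiliation": 2, "altruism": 1, "innovation": 2}

print(f"fit with creative org:     {cosine_fit(leader, creative_org):.2f}")
print(f"fit with hierarchical org: {cosine_fit(leader, hierarchical_org):.2f}")
```

In this toy example the tradition-minded leader scores far better against the hierarchical culture than against the creative one, the same intuition as the point above that such a leader will struggle in a creative environment.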
Of course, some leaders manage to perform well in virtually any context. They are able to flex or span between a range of competing competencies, which makes them more adaptive and versatile, as Rob Kaiser’s compelling research shows – but they are an exception rather than the norm. In contrast, for most people, leadership potential will be somewhat context-dependent, so there is no guarantee that a person will lead effectively just because they have been effective in a previous role or organization. Past performance is a good predictor of future performance only when the context remains the same. When it doesn’t, the focus should be on potential and the future rather than performance and the past.
Doctors at Memorial Sloan Kettering Cancer Center borrowed negotiating tactics from Harvard Business School.

Surgeons at Memorial Sloan Kettering Cancer Center in New York are enlisting techniques taught at Harvard Business School to advise men facing tough decisions about prostate cancer.

Behfar Ehdaie liked giving his prostate-cancer patients hopeful news: While they had a low-grade version of the illness, they wouldn’t need immediate treatment, let alone major surgery. Instead, they could be monitored through a process known as active surveillance. But Dr. Ehdaie, a surgeon at Sloan Kettering, found that many men insisted on having radical surgery or radiation—treatments that sometimes had devastating side effects. “It was very frustrating,” Dr. Ehdaie said. “They didn’t see active surveillance as a viable option.”

In recent years, a growing body of evidence indicates that men with low-grade early-stage prostate cancer don’t need radical treatment, such as removing or radiating the prostate. The medical consensus is that active surveillance often is the appropriate treatment for small early tumors. Yet despite the data showing that this approach is safe, about 50% of eligible men don’t get it, either because they turn it down or their physicians don’t embrace it. Medical experts say many men have been overtreated, as their cancers probably posed little immediate danger.

Dr. Ehdaie worried that too many patients were making the wrong decision. While surgery or radiation can be effective for early prostate cancer, potential side effects include sexual dysfunction, urinary incontinence and bowel problems. Dr. Ehdaie confided his frustration to his research mentor, and the two men decided to search outside the institution, and even outside medicine, for experts in behavioral economics and psychology.
“It was a very left-field idea to say let’s use behavioral economics to help a doctor explain to a patient what is important,” said Andrew Vickers, a biostatistician who advises Dr. Ehdaie on his research. “But we knew that this was a problem and that surgeons weren’t dealing with it. Doctors often use the completely wrong words.” Dr. Ehdaie’s wife, who has an M.B.A., thought an expert in negotiation theory might help. After hearing about a Harvard Business School professor named Deepak Malhotra who specializes in tough negotiations, Dr. Ehdaie emailed him in December 2013.
Professor Malhotra says he was intrigued. He also believed that many doctor-patient conversations were in fact negotiations—and that doctors had no idea how to negotiate. He had co-authored an article with his brother, an emergency physician, in the Harvard Business Review about the need for doctors and hospitals to negotiate with patients to help them make better care decisions. The piece, published in 2013, said doctors and hospitals had “a dearth of negotiation skills and acumen.”
The professor traveled to Manhattan in 2014 to observe Dr. Ehdaie with his patients. The two hammered out pointers adapted from negotiation theory that doctors could use. For example, Dr. Ehdaie, like many surgeons, would first tell newly diagnosed prostate-cancer patients about surgery as a treatment option, and then discuss radiation; he left active surveillance for last.
Professor Malhotra advised flipping the order. “Instead of going on and on about surgery, and then going on and on about radiation, you give the prominence and salience to active surveillance,” he said. The rejiggering was critical to making surveillance—not surgery—the “default option.”
It also was important to explain what active surveillance entailed. While the cancer is left untreated, patients follow a rigorous program of MRIs, tests and biopsies. Dr. Ehdaie told his patients he would see them every six months. But far from being reassured, patients worried “the cancer could spread in six months.”
Professor Malhotra advised reframing the time period. The doctor should emphasize that a patient’s cancer was growing very slowly, if at all, and it would be safe for him to see them in about five years. But under active surveillance, he would examine them every six months—making patients feel they were being closely monitored. Finally, Professor Malhotra advised giving patients a concise message to keep in mind when talking with family and friends who might “start questioning” the decision.
As Dr. Ehdaie changed his approach, he saw striking results. Nearly all his patients began accepting active surveillance and rejecting aggressive treatments. That set the stage for a study on the business-school techniques. The study had two goals: to determine whether the approach could be taught to other doctors, and to discover if it improved active-surveillance rates at Sloan Kettering.
The initial challenge was getting half-a-dozen busy cancer surgeons to participate. “If you started going to doctors saying, ‘You have to study up on negotiation theory,’ they would say, ‘Oh, come on,’ ” Dr. Vickers said.
That is why the study was built around a one-hour lecture. Five surgeons attended a talk by Dr. Ehdaie with Professor Malhotra on hand. The number of patients who chose active surveillance afterward was tracked.
Among the participants was Peter Scardino, who was Sloan Kettering’s chairman of surgery for more than a decade. Dr. Scardino was an early believer in active surveillance—back when many physicians didn’t embrace it. Five to 10 years ago, he said, it was hard to get patients on board. Even now, with active surveillance more common, men still agonize. “The majority take an hour discussion and some an hour-long meeting and an hour phone call,” Dr. Scardino said.
Another hurdle: Active surveillance isn’t risk-free. Dr. Scardino tells patients “it is possible the cancer could get out of control before we realize it.” But he notes that surgery and radiation also have risks, including “urinary incontinence and impotence, so it isn’t a question” of an alternative without risk.
The Harvard Business School pointers have brought “more clarity and definition and concise thinking” to how doctors discuss these risks with patients, Dr. Scardino said.
In a report published in June in the journal European Urology, the Sloan Kettering team, along with Professor Malhotra, analyzed the decisions of 1,003 prostate-cancer patients eligible for active surveillance. When they compared 761 patients in a two-year period before the doctors were taught the Harvard methods, with 242 patients who were counseled with the business-school pointers, they found the percentage that chose active surveillance rose to 81% from 69%. In other words, there was a decrease of 30% in “the risk of unnecessary curative treatment.” Even a “minimal intervention can decrease overtreatment,” the paper concluded.
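As a back-of-the-envelope check on those figures (the quoted 30% is the paper's own risk estimate; the unadjusted calculation below only approximates it):

```python
before_as = 0.69   # share choosing active surveillance before the training
after_as = 0.81    # share choosing it after the negotiation-based counseling

treated_before = 1 - before_as   # 31% received curative treatment
treated_after = 1 - after_as     # 19% did

print(f"active surveillance: {before_as:.0%} -> {after_as:.0%}")
print(f"percentage-point gain: {after_as - before_as:.0%}")
print(f"unadjusted relative drop in treatment: "
      f"{(treated_before - treated_after) / treated_before:.0%}")
```

The raw percentages imply roughly a 12-point rise in active surveillance and a drop in curative treatment from about 31% to 19%; the paper's 30% relative-risk figure presumably reflects its adjusted analysis rather than this raw ratio.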
Richard Saler, a patient of Dr. Ehdaie who was diagnosed with low-grade prostate cancer in 2014, admits he is “a worrier” and was inclined to have surgery. “My mindset was ‘Get it out, cut it out,’” he recalled. He spoke with Dr. Ehdaie and recalls when the doctor said, “I would be happy to do your surgery”—but he wouldn’t advise it. After carefully reviewing the data on treatments, Mr. Saler chose active surveillance.
Mr. Saler, who has an M.B.A., didn’t realize his doctor was using classic negotiating tactics and said the conversations with Dr. Ehdaie “never feel like a negotiation.”
“It is not supposed to feel like a negotiation,” Dr. Ehdaie said. “You want to empower patients to make the best decisions for themselves.” Professor Malhotra wrote about the experience with Sloan Kettering in a chapter in his book “Negotiating the Impossible.”
James Eastham, chief of the urology service at Sloan Kettering, said his department incorporates Professor Malhotra’s techniques. “Acceptance rates have increased significantly,” he said. About 90% of eligible Sloan Kettering prostate-cancer patients now select active surveillance.
David Miller, a professor of urology at the University of Michigan in Ann Arbor, said Dr. Ehdaie is changing how doctors can talk to prostate-cancer patients. He wonders if the approach can work beyond Sloan Kettering. “The challenge is how do you bring Deepak Malhotra to care settings in rural parts of the United States,” he said. “What happens at Memorial isn’t necessarily what happens” in clinics across the country.
What’s your reason for getting up in the morning? Just trying to answer such a big question might make you want to crawl back into bed. If it does, the Japanese concept of ikigai could help.
Originating from a country with one of the world's oldest populations, the idea is becoming popular outside of Japan as a way to live longer and better.
While there is no direct English translation, ikigai is thought to combine the Japanese words ikiru, meaning “to live”, and kai, meaning “the realization of what one hopes for”. Together these definitions create the concept of “a reason to live” or the idea of having a purpose in life.
Ikigai also has historical roots: gai derives from kai, meaning “shell”, which was considered very valuable during the Heian period (794 to 1185), according to Akihiro Hasegawa, a clinical psychologist and associate professor at Toyo Eiwa University. This gives ikigai a sense of "value in living".
To find this reason or purpose, experts recommend starting with four questions:
What do you love?
What are you good at?
What does the world need from you?
What can you get paid for?
Finding the answers and a balance between these four areas could be a route to ikigai for Westerners looking for a quick interpretation of this philosophy. But in Japan, ikigai is a slower process and often has nothing to do with work or income.
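That quick Western reading of the four questions can even be sketched as a simple intersection exercise (the answers below are placeholders, not anything from the article):

```python
# Each set holds one person's candidate answers to one of the four questions.
love = {"writing", "teaching", "gardening"}
good_at = {"writing", "public speaking", "teaching"}
world_needs = {"teaching", "mentoring", "writing"}
paid_for = {"teaching", "consulting", "writing"}

# The Western shorthand: ikigai sits where all four answers overlap.
ikigai = love & good_at & world_needs & paid_for
print(sorted(ikigai))  # ['teaching', 'writing']
```

As the article notes, though, this diagram-style balance is the quick interpretation; in Japan the concept is slower and often unrelated to work or income.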
In a 2010 survey of 2,000 Japanese men and women, just 31% of participants cited work as their ikigai.
Gordon Matthews, professor of anthropology at the Chinese University of Hong Kong and author of What Makes Life Worth Living?: How Japanese and Americans Make Sense of Their Worlds, told the Telegraph that how people understand ikigai can, in fact, often be mapped to two other Japanese ideas – ittaikan and jiko jitsugen. Ittaikan refers to “a sense of oneness with, or commitment to, a group or role”, while jiko jitsugen relates more to self-realization.
Matthews says that ikigai will likely lead to a better life “because you will have something to live for”, but warns against viewing ikigai as a lifestyle choice: “Ikigai is not something grand or extraordinary. It’s something pretty matter-of-fact.”
Raghuram Rajan makes the disclosure in his latest book, I Do What I Do, a compilation of speeches he delivered on a wide range of issues as the RBI governor.
Former RBI governor Raghuram Rajan has revealed that he did not favour demonetisation as he felt the short term economic costs associated with such a disruptive decision would outweigh any longer term benefits from it.
Although he maintains the book is not a tell-all, the short introductions and postscripts accompanying the pieces offer fascinating insights into his uneasy relationship and differences with the present government.
“At no point during my term was the RBI asked to make a decision on demonetisation,” Rajan has said, putting to rest speculation that preparations for scrapping high-value banknotes got underway many months before Prime Minister Narendra Modi made the surprise announcement on November 8.
This is the first time the former RBI governor has spoken on demonetisation since demitting office on September 3 last year. Rajan, who now teaches economics at University of Chicago, said he chose not to speak on India for a year because he didn’t want to “intrude on his successor’s initial engagement with the public”.
“I was asked by the government in February 2016 for my view on demonetisation, which I gave orally. Although there might be long-term benefits, I felt the likely short-term economic costs would outweigh them,” Rajan wrote.
“I made these views known in no uncertain terms.”
He didn’t elaborate on the short-term costs or the possible long-term benefits, but as the RBI governor he “felt there were alternatives to achieve the main goals.”
The latest government data showed the November 8 decision to scrap Rs 1,000 and Rs 500 notes, sucking out 86% of the cash circulating in the system, has had a lingering impact on the economy. GDP growth slowed sharply from 7% in the October-December quarter to 6.1% in January-March and 5.7% in April-June, primarily because of the cash squeeze that weakened consumer spending and discouraged businesses from making new investments.
The government, however, maintains that the economic slowdown has not been entirely because of demonetisation. In an interview with the Times of India, published Sunday, Rajan described the deceleration in GDP as “the costs of demonetisation upfront.”
“Let us not mince words about it – GDP suffered. The estimates I have seen range from 1 to 2 percentage points, and that’s a lot of money – over Rs 2 lakh crore and may be approaching Rs 2.5 lakh crore,” he said in the interview.
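Rajan's percentage range and his rupee range line up; assuming India's nominal GDP in 2016-17 was roughly Rs 150 lakh crore (an outside approximation, not a figure from the article), the arithmetic is:

```python
GDP_LAKH_CRORE = 150  # approx. nominal GDP, 2016-17 (assumption)

for hit_points in (1.0, 1.5, 2.0):  # Rajan's 1-2 percentage point range
    cost = GDP_LAKH_CRORE * hit_points / 100
    print(f"{hit_points} point hit -> about Rs {cost:.2f} lakh crore")
```

On that assumption, a hit of around 1.5 percentage points lands near the Rs 2 to 2.5 lakh crore he cites.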
“I think the people who mooted this must have thought some of it would be compensated if money didn’t come back into the system,” he said, referring to illegal wealth held in cash.
The government’s expectation was that at least Rs 3 lakh crore worth of black money held in cash would not return, significantly reducing the liability of the central bank and boosting its profits, which could be used for new investments and developmental work.
But RBI data, available now, shows 99% of the high-value notes have returned to the banking system, meaning hoarders of black money found a way to legitimise most of their dodgy cash. “The fact that 99% has been deposited certainly does suggest that aim (of curbing black money) has not been met,” Rajan said in the interview.
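The scale of the miss can be seen using the commonly cited RBI figure of roughly Rs 15.4 lakh crore of Rs 500 and Rs 1,000 notes in circulation at the time (an assumption; the article itself gives only the 99% return rate):

```python
IN_CIRCULATION = 15.4  # Rs lakh crore in high-value notes (assumption)
RETURNED_SHARE = 0.99  # per the RBI data cited above

expected_windfall = 3.0  # Rs lakh crore the government hoped would not return
actual_unreturned = IN_CIRCULATION * (1 - RETURNED_SHARE)

print(f"hoped would stay out: Rs {expected_windfall:.2f} lakh crore")
print(f"actually stayed out:  Rs {actual_unreturned:.2f} lakh crore")
```

On these assumptions only around Rs 15,000 crore stayed out of the system, a small fraction of the hoped-for Rs 3 lakh crore.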
Despite his reservations, Rajan wrote in his book, the RBI was asked to prepare a note, which it did and handed to the government.
The RBI note, he said, “outlined potential costs and benefits of demonetisation, as well as alternatives that could achieve similar aims. If the government, on weighing the pros and cons, still decided to go ahead with demonetisation, the note outlined the preparation that would be needed, and the time that preparation would take.”
“The RBI flagged what would happen if preparation was inadequate,” he wrote.
The government subsequently set up a committee to consider the issue. The central bank was represented on the committee by its deputy governor in charge of currency, Rajan wrote, possibly implying he did not attend these meetings.
The current leadership of the central bank could not be reached for comments on Rajan’s account. Phone calls to the RBI spokesperson went unanswered.
Rajan did not detail the contents of the note RBI had submitted to the government. Modi’s radical move was slammed by the opposition as ill-conceived and poorly executed. It took banks much longer than the government had expected to tide over the cash crisis. Frequent changes in cash withdrawal rules added to chaos and inconvenience that lasted far longer than the 50 days the PM had sought to restore normalcy.
Still, Modi won popular support for his move, winning a landslide victory in crucial elections in Uttar Pradesh. Most people, especially the poor, backed his decision as a frontal attack on black money.
Image credit: Shyam's Imagination Library.

Shyam's take on this: Is the human brain a better delegator of work than the human being?

Has human cybernetics not yet reached a stage that can match the system employed by the human brain so expertly and so enviably?

The answer is: no, not yet at least.

Whenever human beings get egocentric and boast of something new and extraordinary in cybernetics, the human brain springs a surprise and comes up with examples of a much higher order of cybernetics.

I have always strived to draw a connection between the God-created management system, in which the brain constantly and expertly manages the intelligent dissemination and storage of new information and the archiving of less relevant information, and our own man-made systems.

This article is fodder, and food for thought, for that line of thinking.

Harvard Medical School has new research findings that debunk the earlier theory of the rigidity of the brain in task delegation. The plasticity of neurons and the delegating activity of the brain are the new facts that have come to light.

Information is clustered in order of relevance, importance, and age. There is also re-tasking of neurons: when the relevance of a particular piece of information fades, the neuron keeping that memory is relieved of the task and reassigned to new information.

Tasking, monitoring, evaluation, and re-tasking are things we human beings often boast of, but we are still far from perfecting them.

These are just my thoughts, thinking out loud.
“We were so sure that the neurons would be doing the same thing every day that we designed the study expecting to use the stable pattern as a baseline,” said study first author Laura Driscoll, a graduate student in the neurobiology department. “After we realized the neurons were changing roles, we had to rethink parts of the study.”
The researchers tested how the pattern changed when they added shapes as a third cue while the mice were navigating the maze. After some reassignment of individual neurons as the mice learned the new cue, the researchers found very little change to the overall activity pattern. This finding supports the idea that networks of neurons storing memories stay flexible in order to incorporate new learning, the researchers say.
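The kind of day-to-day drift described above can be illustrated with a toy simulation (my own sketch, not the authors' model): give each of 200 hypothetical neurons a cue preference, let a small random fraction change roles each simulated day, and see how much of the original pattern survives after a month.

```python
import random

random.seed(0)

N = 200  # hypothetical neurons; half prefer the black cue, half the white
prefs = ["black"] * (N // 2) + ["white"] * (N // 2)

def drift(prefs, p_switch=0.05):
    """One simulated day: each neuron has a small chance of being re-tasked."""
    return [random.choice(["black", "white"]) if random.random() < p_switch else p
            for p in prefs]

day0 = list(prefs)
current = list(prefs)
for _ in range(30):  # a month of daily drift
    current = drift(current)

unchanged = sum(a == b for a, b in zip(day0, current)) / N
print(f"fraction of neurons with their original role after 30 days: {unchanged:.2f}")
```

On any given pair of consecutive days the pattern looks nearly identical, yet over a month a large minority of neurons have swapped or shuffled roles, which is the qualitative picture the study reports.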
The researchers hypothesize that neuronal stability may differ across various brain regions, likely depending on how often the skill or memory they encode needs to be modified. For a task like navigation, which frequently requires the brain to incorporate new information, it would make sense that the neurons remain flexible, Harvey said. However, more instinctual physical responses, such as blinking, may be hardwired with little neuronal drift over time.
The results provide a fascinating early glimpse into the complexities of memory formation, Driscoll said. To elucidate the big picture of memory formation and storage across brain regions, researchers say they hope to study other areas of the brain involved with different types of decision-making and memories.
“I hope this research inspires people to think of memory as something that is not static,” Harvey said. “Memories are active and integrally connected to the process of learning.” This work was supported by a Burroughs-Wellcome Fund Career Award at the Scientific Interface, the Searle Scholars Program, the New York Stem Cell Foundation, the Alfred P. Sloan Research Foundation, a NARSAD Brain and Behavior Research Foundation Young Investigator Award, NIH grants from the NIMH BRAINS program (R01MH107620) and NINDS (R01NS089521), an Armenise-Harvard Foundation Junior Faculty Grant, an Edward R. and Anne G. Lefler Center Predoctoral Fellowship and Junior Faculty Award, the Albert J. Ryan Fellowship, and the Stuart H.Q. & Victoria Quan Fellowship.
It seems like a major part of keeping kids healthy these days is managing their microbial exposure. On the one hand, we’re told that letting our kids get dirty and tempering our use of hand sanitizer can help cultivate a healthy population of good microbes in and on the body, which is associated with lower rates of chronic maladies like asthma and allergies. On the other hand, we know that among all the benign and beneficial bacteria in the world lurk some that are deadly, causing diseases such as whooping cough, pneumonia and meningitis.
To treat these diseases, we need antibiotics, but the downside is that antibiotics indiscriminately kill bacteria in the body, including the ones that contribute to our health. Meanwhile, every course of antibiotics gives bacteria that are resistant to the drugs a chance to grow and thrive. That makes for more antibiotic-resistant infections, all of which are harder to treat and some of which can’t be treated at all.
Ideally, we want to protect our kids from deadly bacteria without disturbing the good ones or worsening the trend of antibiotic resistance. And this is exactly what vaccines do. They give us exposure to the pathogen — be it bacterial or viral — in a weakened, killed or partial form so that we can develop immunity to it without getting the full-blown illness. If we’re exposed to the real thing later, our bodies have antibodies specific to that pathogen ready to fight back. No antibiotics needed, and our friendly microbes can continue to live in peace. But when parents choose not to vaccinate their kids, they’re increasing the kids’ chances of not only becoming seriously ill, but also of needing antibiotic treatment and other medical interventions down the road.
Dr. Joel Amundson, a pediatrician in Portland, Oregon, finds himself frequently talking about vaccines and antibiotics in the same breath. Oregon has one of the lowest immunization rates in the nation, and Amundson said many of the parents he counsels want to keep their kids “all-natural” and see vaccines as an unnecessary medical intervention. But when he explains that vaccines are a tool for decreasing medical interventions, including antibiotic use, that often changes their perspective. “That’s a huge benefit to my families,” he said. “It definitely has them more interested in doing vaccines when they understand that.”
Some parents who are reluctant to vaccinate worry about side effects, and though some kids will experience short-lived, minor reactions such as swelling at the injection site, serious side effects are extremely rare. Side effects from antibiotics, including diarrhea, rashes and allergic reactions, are generally more common and severe, Amundson said. “I see far more harm from antibiotics than I do from vaccines, by a huge margin. It’s not subtle,” he said.
Of course, when a person has a serious bacterial infection, the benefits of antibiotics far outweigh those risks, because these diseases can be deadly. “When we need them, we really need them,” said Janet Gilsdorf, professor emerita of pediatric infectious diseases at the University of Michigan. But in a world where antibiotic-resistant infections are thought to kill 50,000 people each year in the U.S. and Europe alone, a problem that the United Nations has called “the greatest and most urgent global risk,” reducing our use of antibiotics helps preserve their value. “The fewer infections we have, the fewer antibiotics we need to use, and we know that the use of antibiotics is what drives antibiotic resistance,” Gilsdorf said.
Vaccines have prevented millions of illnesses

Estimated number of infections prevented by vaccines over the lifespan of children born in the U.S. in 2009:

Infectious disease               Caused by   Cases prevented
Varicella                        Virus         3,942,546
Measles                          Virus         3,835,825
Pertussis                        Bacteria      2,950,836
Pneumococcus-related diseases    Bacteria      2,323,952
Mumps                            Virus         2,312,275
Rubella                          Virus         1,981,066
Rotavirus                        Virus         1,582,940
Diphtheria                       Bacteria        275,028
HepB                             Virus           239,993
HepA                             Virus           153,164
Polio                            Virus            67,463
Hib                              Bacteria         19,606
Congenital rubella syndrome      Virus               632
Tetanus                          Bacteria            169
We don’t yet have research on whether emphasizing this benefit of vaccines might encourage parents to immunize their kids. While the vast majority of parents vaccinate their kids on schedule, the number of parents who are reluctant to do so does seem to be increasing in the U.S., despite a mountain of evidence supporting the efficacy and safety of vaccines. Reasons for parents’ concerns about vaccines are varied, and each type of concern will likely need to be addressed differently to improve vaccination rates. But there’s some evidence that parents are becoming more aware of the problem of antibiotic resistance, and a study of Austrian adults found that those with more knowledge about antibiotics were more likely to get the flu vaccine.
There’s no question that vaccines have dramatically reduced the burden of disease. A study published in 2014 estimated that among U.S. children born in 2009, following the recommended childhood vaccine schedule (not including the flu vaccine) would prevent 20 million cases of disease across their lifespans, and about 30 percent of these are bacterial diseases that would likely require antibiotic treatment. These are diseases like diphtheria and pertussis, both of which were major causes of childhood illness and death before their vaccines were developed in the first half of the 20th century.
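As a sanity check, the per-disease figures in the table above can be summed to roughly reproduce both headline numbers from the 2014 estimate; the bacterial/viral labels are taken directly from the table:

```python
# Cases prevented per disease (from the table above); True = caused by bacteria
prevented = {
    "Varicella": (3_942_546, False),
    "Measles": (3_835_825, False),
    "Pertussis": (2_950_836, True),
    "Pneumococcus-related diseases": (2_323_952, True),
    "Mumps": (2_312_275, False),
    "Rubella": (1_981_066, False),
    "Rotavirus": (1_582_940, False),
    "Diphtheria": (275_028, True),
    "HepB": (239_993, False),
    "HepA": (153_164, False),
    "Polio": (67_463, False),
    "Hib": (19_606, True),
    "Congenital rubella syndrome": (632, False),
    "Tetanus": (169, True),
}

total = sum(n for n, _ in prevented.values())
bacterial = sum(n for n, is_bact in prevented.values() if is_bact)
print(f"total prevented: {total:,}")               # ~19.7 million, i.e. "20 million cases"
print(f"bacterial share: {bacterial / total:.0%}") # ~28%, i.e. "about 30 percent"
```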
More recently, the vaccine for Haemophilus influenzae type b (Hib), which the Food and Drug Administration approved for use in toddlers starting in 1985 and infants in 1990, nearly eliminated the dangerous blood and brain infections caused by this bacterium.
Pneumococcal vaccines have also reduced our dependence on antibiotics. The first was recommended in the U.S. for infants and young children in 2000, followed in 2010 by an updated version covering more strains of the bug. Like Hib, pneumococcus bacteria can cause pneumonia and invasive blood and brain infections, but it’s also a major cause of ear infections, which are one of the biggest reasons that children are prescribed antibiotics. Before the vaccine was added to the infant immunization schedule, up to 40 percent of invasive pneumococcal infections — meaning infections that spread to parts of the body, such as the bloodstream, that are normally germ-free — were resistant to at least one antibiotic, making them more difficult and costly to treat. The first pneumococcus vaccine decreased antibiotic-resistant invasive pneumococcal infections in young children by 81 percent, and the second vaccine caused an additional 61 percent drop. (These studies looked at different age groups, however; the first included only children younger than 2, and the second looked at children up to age 4.) The U.S., Israel and the U.K. have also observed big drops in kids’ ear infections coinciding with the introduction of pneumococcal vaccines. (Other factors, such as increased breastfeeding and tightened diagnostic criteria for ear infections, have likely contributed to these improvements, but researchers believe that the vaccines have played an important role.) In a paper published last year, researchers estimated that making the pneumococcal vaccine universally available to children in the 75 countries they looked at could not only prevent disease but also avert 11.4 million days of antibiotic treatment each year, a 47 percent drop in current antibiotic use for pneumonia.
Less obviously, vaccines that protect against illnesses caused by viruses rather than bacteria can also help cut antibiotic use. For example, influenza is viral, but flu season always brings an uptick in antibiotic prescriptions. In many cases, the antibiotics are being inappropriately prescribed, but some are necessary treatments for secondary bacterial infections, like pneumonia and ear infections, that can move in when a person’s immune system is busy fighting the virus. When Ontario, Canada, started offering free flu vaccines, the province’s rate of antibiotic prescriptions associated with the flu dropped by 64 percent.
The vaccine against measles, another viral infection, also probably decreases antibiotic use. A 2015 paper showed that a measles infection weakens a person’s immune system for two to three years, which helps explain why the measles vaccine reduces childhood mortality in poor countries by 30 percent to 50 percent, a drop too large to be accounted for by measles prevention alone. “Not having measles is a really good thing for your immune system in terms of preventing other infections,” said Marc Lipsitch, professor of epidemiology at Harvard T.H. Chan School of Public Health.
In a paper published last year, Lipsitch argued that development of new vaccines should be considered an important strategy in the fight against antibiotic-resistant bacteria. He believes that it would be most useful to have vaccines against certain bacterial strains that patients tend to pick up in hospitals — those strains are often resistant to multiple antibiotics. A more effective flu vaccine and a vaccine for respiratory syncytial virus, known as RSV, which sends more than 57,000 young children and 177,000 elderly people in the U.S. to the hospital each year, could also reduce antibiotic use. Potential vaccines for a number of these diseases are in various stages of clinical trials.
Lipsitch envisions vaccines that go even further. “I actually think one of the most interesting ideas I’ve had is the idea of using vaccines directly to target [antibiotic] resistant bacteria, not just all bacteria, but directly aiming at the targets that are the resistant genes.” This type of vaccine would be especially helpful for bacteria like pneumococcus and Staphylococcus aureus, which are so ubiquitous that they’re unlikely to be eliminated; keeping drug resistance at bay would help us coexist with them more peacefully. “The idea of these resistance-targeted vaccines is to try to make life extra hard for the resistant organisms,” Lipsitch said.
But would it be tough to sell people on more vaccines for both kids and adults when some people are refusing to get the vaccines we already have? “I think it ought to be a pretty easy sell, actually,” said David Salisbury, associate fellow at the Chatham House Centre on Global Health Security in London and former director of immunization at the U.K. Department of Health. “Imagine if an ear infection, which happens so commonly in children, became untreatable. You can fantasize about false risks of the vaccines, but they turn to nothing when you compare them with untreatable infections. Would you seriously prefer your child not to have a vaccine and risk an infection to which there was no treatment?”
A global challenge as big as antibiotic resistance will require multiple solutions, including reducing the use of antibiotics in agriculture and developing new antibiotics, but Salisbury says that vaccines deserve more attention and investment. Gilsdorf is on board with that. “What we need is more good science, which means we need more funding for the National Institutes of Health, the National Science Foundation, and these federal agencies that support scientists to learn the nitty-gritty of these bacteria,” she said.
The Reserve Bank of India (RBI) on Wednesday said it estimated that people had returned almost 99 per cent of the scrapped Rs 1,000 and Rs 500 notes after demonetisation, effectively putting a question mark over the government's hopes of gaining handsomely from unreturned money being turned into a special dividend by the central bank.
In its annual report, the RBI also said the face value of fake high-value notes was minuscule at Rs 41 crore.
The central bank said people had returned Rs 15.28 lakh crore of the Rs 15.44 lakh crore banned currency, or 98.96 per cent of the scrapped Rs 500 and Rs 1,000 notes, to the banking system.
“Subject to future corrections based on the verification process when completed, the estimated value of Specified Bank Notes received as on June 30, 2017, is Rs 15.28 lakh crore,” the annual report said.
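The 98.96 per cent figure is simply the ratio of the two totals quoted above:

```python
# Figures from the RBI annual report (in lakh crore rupees)
returned = 15.28   # estimated value of Specified Bank Notes received as of June 30, 2017
scrapped = 15.44   # total value of the banned Rs 500 and Rs 1,000 notes

share = returned / scrapped
print(f"{share:.2%}")  # 98.96%
```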
The old notes came to the RBI either directly or from bank branches and post offices through the currency chest mechanism.
Some of these notes were still lying in currency chests, the RBI said, adding it could only estimate the value of the notes and could not provide an accurate figure.
The RBI data showed the unreturned Rs 1,000 notes in March 2017 amounted to Rs 8,900 crore. The segregation of old and new Rs 500 notes was not as clear. The RBI incurred a cost of Rs 7,965 crore in printing notes in 2016-17, against Rs 3,421 crore in the previous year. The central bank also increased its provisions by over Rs 13,000 crore in order to boost its contingency reserves, resuming a practice it had suspended for three financial years.
The net effect was that the dividend paid to the government was halved to Rs 30,659 crore in the July-June financial year 2016-17. Prime Minister Narendra Modi had on November 8, 2016, announced demonetisation in a televised address, rendering 86 per cent of the currency in circulation invalid. The nation subsequently queued up at bank branches and automated teller machines as the central bank struggled to supply new notes. About Rs 15.3 lakh crore of notes were in circulation in June, against the pre-demonetisation level of Rs 17.9 lakh crore. Economists said the government may have overestimated the extent of black money in the system, but increased tax collection should be counted as a long-term gain.
“Data analytics of deposits have thrown up unusual patterns. Previously we did not know who held black money. Now we do, and this is a clear gain,” said the chief economist with a private bank. “The number of suspicious transaction reports by the banking system increased by 345 per cent, which could possibly lead to an increase in future tax revenues. Coupled with the goods and services tax, this will help in improving tax realisation,” said Soumya Kanti Ghosh, group chief economist, State Bank of India.
The total number of suspicious transactions detected in 2016-17 was 473,003, up from 106,273 in 2015-16 across banks, other financial institutions and intermediaries. In banks alone, the number of suspicious transactions detected was 361,214, against 61,361 a year ago.
This is the first time since 1952-53 that reserve money contracted over the full year, by 13 per cent. The RBI incurred a loss in seigniorage, the profit a central bank makes on account of currency issuance. A committee headed by RBI board member Y H Malegam had suggested the central bank did not need to build additional reserves for three years starting 2012-13.
This being the fourth financial year, the RBI increased its provisions to Rs 13,190 crore and allocated them in various reserves. “In value terms, the share of Rs 500 and above banknotes, which had together accounted for 86.4 per cent of the total value of banknotes in circulation at end-March 2016, stood at 73.4 per cent at end-March 2017. The share of newly introduced Rs 2,000 banknotes in the total value of banknotes in circulation was 50.2 per cent at end-March 2017,” the RBI said.
In volume terms, Rs 10 and Rs 100 banknotes constituted 62 per cent of the total banknotes in circulation at end-March 2017, against 53 per cent at end-March 2016.
The RBI said processing and destruction of old Rs 500 and Rs 1,000 notes kept in various currency chests and regional offices of the RBI “pose a challenge.”
“In this regard, the agenda for 2017-18 includes the procurement of Currency Verification and Processing System/Shredding and Briquetting Systems.” The RBI’s agenda also includes the introduction of new series banknotes in other denominations; procurement of security features; and “introduction of varnished banknotes.”
“Prime Minister, what would you like to tell us about today?” This opening gambit from a UK television presenter in the 1950s, part of a documentary I watched recently about the history of the political interview, reminded me how far we’ve come since that innocent age.
Trust was more prevalent then. It allowed politicians to go unchallenged and the media to remain deferential. Business, meanwhile, was often faceless, existing at a distance from the people it served. Of course, all three institutions have evolved dramatically since then. But not far enough, as far as the public is concerned. Edelman’s latest Trust Barometer showed trust in all three – plus a fourth, NGOs – declining for the first time in the study’s history.
The breakdown in trust is widespread and corrosive, but there is a chink of hope for business. Of the four institutions, it is viewed as the only one that can make a difference.
Three out of four respondents agree a company can take actions to both increase profits and improve economic and social conditions in the community where it operates.
This suggests brands have the power to rebuild trust through true engagement. I believe that needs to be done in two ways: by demonstrating a social purpose, and by taking part in genuinely two-way dialogue.
Show you have a stake in the future
For millennials, in particular, brand value is increasingly about trust and respect rather than simply price. Edelman found that customers’ chief expectation of financial brands – rated above even keeping their families and data safe – is that they make a positive contribution to society.
“It’s not necessarily check book philanthropy, but doing a good job and investing for the future,” explains Deidre H. Campbell, Edelman’s Global Chair of Financial Services.
Chase’s Mission Main Street program is a prime example, celebrating small US businesses and offering grant support to the most promising. Similarly, American Express’s Small Business Saturday initiative harnesses digital connectivity to link small firms and communities.
Citi set out to support a different demographic with its professional women’s community, launched in partnership with LinkedIn. It provides access, networks and best practice to support women in their careers.
Join the conversation
Being part of your community also demands a willingness to interact with customers, rather than talk at them. Perhaps it’s for this reason that trust in the mainstream media, which lacks two-way channels, is in decline.
LinkedIn, for example, is now trusted as highly as venerable media brands such as the Wall Street Journal, according to our research.
World renowned physicist Dr. Michio Kaku made a shocking confession on live TV when he admitted that HAARP is responsible for the recent spate of hurricanes. In an interview aired by CBS, Dr. Kaku admitted that recent ‘man-made’ hurricanes have been the result of a government weather modification program in which the skies were sprayed with nano particles and storms then “activated” through the use of “lasers”.
In the interview (below), Michio Kaku discusses the history of weather modification, before the CBS crew stop him in his tracks.
The High-Frequency Active Auroral Research Program (HAARP) was created in the early 1990s as part of an ionospheric research program jointly funded by the U.S. Air Force, the U.S. Navy, the University of Alaska Fairbanks, and the Defense Advanced Research Projects Agency (DARPA).
According to government officials, HAARP allows the military to modify and weaponize the weather, by triggering earthquakes, floods, and hurricanes.
Anongroup.org reports: Among a plethora of academic papers and patents about altering the weather with electromagnetic energy and conductive particles in the stratosphere, research published in the Proceedings of the National Academy of Sciences said that “laser beams” can create plasma channels in air, causing ice to form. According to Professor Wolf Kasparian:
“Under the conditions of a typical storm cloud, in which ice and supercooled water coexist, no direct influence of the plasma channels on ice formation or precipitation processes could be detected. Under conditions typical for thin cirrus ice clouds, however, the plasma channels induced a surprisingly strong effect of ice multiplication.
Within a few minutes, the laser action led to a strong enhancement of the total ice particle number density in the chamber by up to a factor of 100, even though only a 10−9 fraction of the chamber volume was exposed to the plasma channels.
The newly formed ice particles quickly reduced the water vapor pressure to ice saturation, thereby increasing the cloud optical thickness by up to three orders of magnitude.”
Researchers seeking to understand geoengineering have identified defense contractors Raytheon and BAE Systems, and corporations such as General Electric, as being heavily involved with it. According to Peter A. Kirby, Massachusetts has historically been a center of geoengineering research.
With the anomalous hurricanes currently ravaging the Americas, floods destroying India, and wildfires destroying the Pacific Northwest, weather warfare is a topic on the public consciousness right now. Please share this with as many people as possible.
The next billion internet users are ditching computers for pocket-friendly phones. Globally, half of all internet users got online in February 2017 using mobile devices, and over 45% visited the web on desktops during the same time period. In countries like the UK and US, where more than eight in 10 have access to the internet, people got online using phones over a third of the time. In India, the split was leaning heavily toward mobile use: Indians accessed the internet through their mobiles nearly 80% of the time.
“Our research confirms that Indians adore their mobiles for surfing the internet,” Tarak Desai of StatCounter, Mumbai, said. “Internet usage by mobile in India is striking compared to that in most other countries.” Desai attributed part of the success to the latest entrant to India’s $50-billion telecom sector: Reliance Jio. The Mukesh Ambani-led venture lured over 100 million subscribers by offering one gigabyte (GB) a day of free 4G. It also ignited price wars that drove data prices in the country down by nearly 20%.
Besides data, smartphones, too, have become more affordable amid competition. Recently, Chinese brands have won over Indian audiences by manufacturing locally to drive down costs, creating smartphones with bigger screens and an improved user interface, spending heavily on marketing, setting up retail stores, and even adding local language support. At the end of last year, four out of the top five brands of smartphone shipments in the country—Vivo, Xiaomi, Lenovo, and Oppo—were Chinese.
For those reluctant to switch to smartphones, 4G feature phones with long battery lives and simple, easy-to-use designs serve as the online connection. Close to 200 million 4G feature phones are projected to sell in India over the next five years, according to Counterpoint Research. The data shows that India has clearly leapfrogged the desktop generation: the country has the highest share of mobile internet usage among G20 nations. Others like Indonesia and South Africa, where desktops are significantly more expensive than mobile phones and power issues are widespread, are close behind.