    Shyam's Insights:

    Behavioral economics studies the effects of psychological, social, cognitive, and emotional factors on the economic decisions of individuals and institutions and, more generally, the impact of different kinds of behavior across different environments and experimental conditions.

    Behavioral economics recognises the bounds of rationality, and gives credibility to models of economic behavior that depart from strict rational-choice assumptions. These models typically integrate insights from psychology, neuroscience and microeconomic theory; in so doing, they cover a range of concepts, methods, and fields.

    Now, the article:

    Richard H. Thaler, the “father of behavioral economics,” has this week won the 2017 Nobel Prize in Economics for his work in that field. Thaler has long been known for challenging a foundational concept in mainstream economics — namely, that people by-and-large behave rationally when making purchasing and financial decisions. Thaler’s research upended the conventional wisdom and showed that human decisions are sometimes less rational than assumed, and that psychology in general — and concepts such as impulsiveness — influence many consumer choices in often-predictable ways.

    Once considered an outlier, behavioral economics today has become part of generally accepted economic thinking, in large part thanks to Thaler’s ideas. His research also has immediate practical implications. One of Thaler’s big ideas, his “nudge theory,” suggests that governments and corporations can greatly influence levels of retirement savings with unobtrusive paperwork changes that make higher levels of saving an opt-out rather than an opt-in choice. In fact, he co-authored a book, Nudge: Improving Decisions About Health, Wealth and Happiness, which became a best-seller.

    In this Knowledge@Wharton interview, Katherine Milkman, a Wharton professor of operations, information and decisions — and a behavioral economist herself — discusses Thaler’s influence in economics and the practical applications of his ideas already underway. She attributes part of his success to his great clarity in thinking and in writing. She had interviewed professor Thaler for Knowledge@Wharton in 2016 regarding his then-new book, Misbehaving: The Making of Behavioral Economics.



    An edited transcript of the conversation follows.




    Milkman: Standard economics makes assumptions about the rationality of all of us, and essentially assumes that we all make decisions like perfect decision-making machines, like Captain Spock from Star Trek, who can process information at the speed of light, crunch numbers, and come up with exactly the right solution.
    “Humans are not perfectly rational…. We have impulse-control problems, we have social preferences. We care about what happens to other people instead of being entirely selfish.”
    In reality, that’s not the way humans make decisions. We often make mistakes. And Richard Thaler’s major contribution to economics was to introduce a series of predictable ways that people make errors, and to make it acceptable to begin modeling those kinds of deviations to make for a richer and more accurate description of human behavior in the field of economics.

    Knowledge@Wharton: What would be a classic example of a decision that an economist would expect someone to make rationally, but in fact they don’t?

    Milkman: Well, a great example from Richard’s own work relates to self-control challenges. And he has talked about the cashew problem, or the challenge, if you’re at a dinner party, of resisting the bowl of cashews that you know will spoil your dinner.

    A traditional economist would expect that’s not a challenge. No one should have any difficulty withstanding that temptation: they know the cashews will spoil their dinner, so they don’t need them. And Thaler noted that, in fact, everyone struggles with this, and everyone breathes a sigh of relief when a host puts away that bowl of cashews so they’re not reachable and they’re not in front of everyone anymore.

    It seems small, but it actually highlights a major challenge for humans with self-control, which can perhaps explain the obesity epidemic, under-saving for retirement, and under-education among many groups. The range of things that this simple observation can begin to shed light on is just extraordinary. And that’s only one of his contributions.

    Knowledge@Wharton: It’s this idea that human beings happen to be impulsive a lot of times, and that should be taken into account. They aren’t sitting there with calculators all the time figuring out an economic decision or a financial decision.

    Milkman: That’s exactly right. That’s the contribution that Richard Thaler made to economics in a nutshell: that humans are not perfectly rational, sitting there with calculators. We have impulse-control problems, we have social preferences. We care about what happens to other people instead of being entirely selfish. We are limited in our rationality in a number of ways, and he has pointed that out over the last 50 years, and highlighted opportunities for policy makers to improve the lives of billions of people by taking these insights into account.

    Knowledge@Wharton: It appears a little odd that these ideas were consigned to the corner for so long. Now people are talking about them more.

    Milkman: I think that’s right. At some level it took a personality like Richard Thaler; he’s someone who likes to break the mold and misbehave, which is the title of his autobiography. It took someone like that to point out the absurdity of the assumptions in a standard economic model, and help change the assumptions so that we could start doing the science better.

    Knowledge@Wharton: And those standard models, they worked really well a lot of the time, maybe even most of the time — it’s just that when they didn’t work, it could be a major failing. Is that right?

    Milkman: I think that’s right. And it also meant there was an opportunity for improvement. So even if they were working fairly well much of the time, they weren’t actually fully accurate. And so the more accurate we can make them, the more opportunities we have to make better policy and so on.

    Knowledge@Wharton: Let’s talk about some of the practical applications of his ideas. Thaler was a government advisor not long ago. Perhaps you could tell us about his contributions and about how he has a lot of practical ideas for how his concepts can be put to use.


    “It took a personality like Richard Thaler … to point out the absurdity of the assumptions in a standard economic model.”
    Milkman: [Her answer opens with Thaler’s cafeteria example: the items a diner encounters first in the line are the most likely to end up on their plate.] What this means is that whoever laid out the cafeteria was actually, whether or not they meant to, influencing our choices dramatically depending on where they placed certain foods. The first thing we encounter is much more likely to end up on our plate, as I just said, and therefore whatever they placed first, whether it was broccoli or chocolate cake, was more likely to end up on our tray.

    There’s no such thing as neutral choice architecture. Thaler pointed out that we should try to architect environments where people are making decisions in a way that, in his words, nudges us towards better choices. So why not put the broccoli first and the chocolate cake last in order to help people be healthier in a cafeteria?

    Thaler also talks a lot about how to improve retirement savings outcomes using similar understandings of psychology. For instance, why not assume that people want to save for retirement and let them opt out, rather than doing what was historically typical when you started working for a new employer: assuming people didn’t want to enroll unless they said, please sign me up for the retirement savings program. With small changes [in] the environments where we make choices, changes that don’t restrict choice in any way, we can have a huge impact on human life for the better.

    Knowledge@Wharton: Another interesting idea — along the same lines — is that you agree in advance that when you get a raise in the future, a bigger chunk of that would go into your retirement than just the standard percentage based on what you had chosen in advance. It turns out through the “miracle” of compounding interest that these things can make a huge difference at retirement.

    Milkman: That’s right. And you had specifically asked about how governments were using this. I also want to note that many folks in governments read the book Nudge, and there are now literally hundreds of offices in governments around the world that have developed what they lovingly refer to as Nudge Units, where they’re applying insights from this field to try to improve outcomes for citizens.

    We have one in the U.S. government that was founded, I believe, in 2015, if I’m getting my dates right. And before that, the very first Nudge Unit came in the U.K. under David Cameron, and it was literally referred to as the Nudge Unit. Now it’s called the Behavioral Insights Team, and they have operations in the U.S. and in the U.K. They’re helping many cities in the U.S. improve their outcomes for citizens. And so he’s just had an enormous impact, not only here but abroad.
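    [To make the compounding arithmetic behind this kind of contribution escalation concrete, here is a minimal sketch in Python. The salary, raise, return, and savings-rate figures are hypothetical illustrations, not numbers from Thaler’s research. It compares a saver who keeps a flat 6% contribution rate with one who adds one percentage point at each annual raise.]

def retirement_balance(years=30, salary=50_000.0, raise_rate=0.03,
                       annual_return=0.05, save_rate=0.06, escalation=0.0):
    """Accumulate a retirement balance, optionally raising the savings rate
    by `escalation` percentage points at each annual raise (capped at 15%)."""
    balance = 0.0
    for _ in range(years):
        balance = balance * (1 + annual_return) + salary * save_rate
        salary *= 1 + raise_rate
        save_rate = min(save_rate + escalation, 0.15)
    return balance

flat = retirement_balance()                        # fixed 6% contribution
escalating = retirement_balance(escalation=0.01)   # +1 point with each raise
print(f"flat: ${flat:,.0f}   escalating: ${escalating:,.0f}")

    [Under these assumptions the escalating saver ends the 30 years with roughly twice the balance, which is the “huge difference” the question refers to.]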

    Knowledge@Wharton: Thaler won the Nobel Prize in Economics for his work in behavioral economics, but as we were talking earlier you noted he considers himself a behavioral scientist. Can you talk about the distinctions there?

    Milkman: One of the things that is important about Richard Thaler’s work is that it bridges disciplines, and so while many economic Nobel Prizes are awarded to people who are truly only economists and only recognized in economics, some go to people who have impacted a far wider range of fields, and this is one of those.

    So Richard Thaler often refers to himself not only as a behavioral economist but as a behavioral scientist, because there’s a community that includes many who aren’t economists who are doing this work that is spurred by his ideas, his thinking about peculiarities of human behavior that aren’t captured by economic science.

    So behavioral science is a broader term. It includes psychologists, and many folks in business schools who don’t have an identity as a psychologist or an economist. You can find the stray neuroscientists and sociologists who think of themselves as behavioral scientists as well.

    Knowledge@Wharton: It’s interesting that there’s the word “behavioral” in here, and “psychology.” I don’t hear the word “emotion,” when it would appear that that is part of it all. We talk about emotional intelligence — is that somehow connected to this idea? That also seems to be an area that is slightly outside of the strictly rational, and it applies to behavior, and it is talked about oftentimes in the work setting.

    Milkman: That’s a great question. I think that emotions specifically haven’t been exactly the center of Richard’s work, but at some level they are an underpinning of all behavioral science, and all of behavioral economics, because if you fundamentally ask where do these deviations from optimal decision making come from, many are driven by emotions.

    So a lot of Richard’s work looking at social preferences — for instance, the fact that we intrinsically seem to care about other people’s outcomes and not only our own — is fundamentally the result of emotion. We emotionally care about other people; we have an emotional reaction when we see something happening that we think is unfair to someone else.
    “The very first Nudge Unit came in the U.K. under David Cameron, and it was literally referred to as the Nudge Unit.”
    You can also think about an emotional reaction, or a visceral reaction leading to impulse control problems in many situations, and his work on self-control then is all about emotions.  So while he doesn’t typically get recognized for being a scholar of emotions, at some level everything we have learned about limited rationality is somehow connected to emotions it seems.

    Knowledge@Wharton: So tell me some of the ways that he has influenced many other researchers, including yourself.

    Milkman: Well, he opened up new fields of inquiry that really weren’t in existence before he began doing this work. I personally study self-control and nudging, and those are two things that were not really being studied by the community of behavioral scientists in nearly the same way, not with the same lens, before he came along and made them central to behavioral economics and created this field, along with his predecessor, Daniel Kahneman, who became a Nobel laureate roughly 15 years ago. Thaler has been instrumental in opening up doors for young scientists to think about things that previously weren’t talked about by rigorous academics.

    Knowledge@Wharton: What are some of the things you are looking at that you might not have looked at if you hadn’t had that influence in your life?

    Milkman: Well, one of my areas is looking at something I call the Fresh Start Effect. We’ve done research showing that at the beginning of new cycles in our lives, such as the start of a new year (the most obvious one to think about), the start of a new week, or the period following a birthday, we have renewed self-control and extra motivation to pursue our goals.

    And we find that people visit the gym at a higher rate at the beginning of these new cycles, for instance, and they’re more likely to search the term “diet” on Google at the start of these new cycles, and they’re more likely to create goal contracts on goal-setting websites. And that draws directly on Richard Thaler’s work, pointing out that we don’t treat time and money as if they are simply all the same and fungible; we actually use what he calls “mental accounts.”

    So we think of time as having these categories, or money as having these categories, and we don’t move money around between the categories — or move time around. So a new year is a new account, it’s a new category, and we treat it differently. When we have that new year, in my work we show that it feels like a fresh start — we feel like all our failings from last year, that’s a separate category, it’s behind us.

    And Richard has used this mental accounting theory to explain lots of anomalies in the way people engage with their personal finances among other things. So that’s an example of something that influenced my work.

    Knowledge@Wharton: Regarding Thaler’s work, I read that, for example, if you create something called a heating account in your personal budget, you end up spending more on heating. How does one influence the other?

    Milkman: The idea is that we treat money as if it is labeled. So say you get a gift certificate — this is the study I actually did in graduate school — to use at the grocery store where you shop for groceries every week. Say it’s for $10. Well you’re just $10 richer overall in all of life, right, because you were going to spend at least $10 at the grocery store next week anyway, since you go there every week.
    But because you label money, instead of feeling like, “Oh, I have $10 for whatever I want this week; I can go to the movies or out for lunch an extra time,” we feel like that money is labeled for groceries and we act richer in our grocery account. We actually go splurge and buy things like seafood that we wouldn’t normally buy, instead of just buying whatever extra thing would make us happier in life.

    So it’s a labeling phenomenon: when money comes in under one label, we think of it as only usable in that one place, in spite of the fact that traditional economics would say we should recognize all money as totally fungible. It’s just another $10 in your pocket.

    Knowledge@Wharton: What haven’t I asked you about Richard Thaler that would be important for people to understand?

    Milkman: I think one of the most amazing things about Richard is how well he writes, how simple his insights about human behavior are, and how easy they are for anyone to appreciate. He’s the first scholar of behavioral economics whom I read, back when I was a graduate student studying computer science and business. I picked up a wonderful collection of his essays in a book called The Winner’s Curse, about anomalies in the way that economic agents behave.

    I was immediately captivated because it was so incredibly simple and elegant, and funny and true, and I think many of the scholars who have been influenced by him wouldn’t have been as influenced if it weren’t for his incredible ability to communicate in that way. So for anyone listening and anyone thinking about being either a scholar or a communicator in other ways, it just emphasizes the importance of clear, simple writing, and clear, simple examples to have a huge impact on the world.

    Knowledge@Wharton: Is there any other kind of theory, or set of theories or ideas, out there that is emerging — that people are thinking about — that could be parallel to behavioral economics and that probably will turn out to be important, but people just don’t get it yet?

    Milkman: Well, one of Richard Thaler’s disciples — and his disciples are all incredibly impressive in their own right — is Sendhil Mullainathan, an economist at Harvard who thinks the next big thing is how machine learning will change social science. And I think he’s on to something; I think that could be the next revolution in the social sciences — using machine learning to better predict everything.

    Knowledge@Wharton: So we’re heading to a future of algorithms, I guess.

    Milkman: Well, certainly a future where algorithms do more to help social science.

    View at the original source

    A zero-index waveguide (10/13/17)



  • A zero-index waveguide stretches a wave of light infinitely long, creating a constant phase throughout the wire. (Image courtesy of Second Bay Studios/Harvard SEAS)


    In 2015, researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) developed the first on-chip metamaterial with a refractive index of zero, meaning that the phase of light could be stretched infinitely long. The metamaterial represented a new method to manipulate light and was an important step forward for integrated photonic circuits, which use light rather than electrons to perform a wide variety of functions.

    Now, SEAS researchers have pushed that technology further – developing a zero-index waveguide compatible with current silicon photonic technologies. In doing so, the team observed a physical phenomenon that is usually unobservable — a standing wave of light.

    The research is published in ACS Photonics. The Harvard Office of Technology Development has filed a patent application and is exploring commercialization opportunities.
    “We were able to observe a breath-taking demonstration of an index of zero."
    When a wavelength of light moves through a material, its crests and troughs get condensed or stretched, depending on the properties of the material. How much the crests of a light wave are condensed is expressed as a ratio called the refractive index — the higher the index, the more squished the wavelength.

    When the refractive index is reduced to zero, the light no longer behaves as a moving wave, traveling through space in a series of crests and troughs, otherwise known as phases. Instead, the wave is stretched infinitely long, creating a constant phase. The phase oscillates only as a variable of time, not space.
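    In symbols (a standard textbook relation, not one taken from the paper itself), a plane wave in a medium of refractive index n has the form

    \[ E(x,t) = E_0 \cos(kx - \omega t), \qquad k = \frac{n\omega}{c} = \frac{2\pi n}{\lambda_0}, \]

    so the wavelength inside the medium is \( \lambda = \lambda_0 / n \). As \( n \to 0 \), \( k \to 0 \) and \( \lambda \to \infty \): the phase \( kx - \omega t \) loses its dependence on position and reduces to \( -\omega t \), oscillating in time but constant everywhere in space.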

    This is exciting for integrated photonics because most optical devices use interactions between two or more waves, which need to propagate in sync as they move through the circuit. If the wavelength is infinitely long, matching the phase of the wavelengths of light isn’t an issue, since the optical fields are the same everywhere.

    But after the initial 2015 breakthrough, the research team ran into a catch-22. Because the team used prisms to test whether light on the chip was indeed infinitely stretched, all of the devices were built in the shape of a prism. But prisms aren’t particularly useful shapes for integrated circuits. The team wanted to develop a device that could plug directly into existing photonic circuits and for that, the most useful shape is a straight wire or waveguide.

    The researchers — led by Eric Mazur, the Balkanski Professor of Physics — built a waveguide but, without the help of a prism, had no easy way to prove if it had a refractive index of zero.
    Then, postdoctoral fellows Orad Reshef and Philip Camayd-Muñoz had an idea.

    Usually, a wavelength of light is too small and oscillates too quickly to measure anything but an average. The only way to actually see a wavelength is to combine two waves to create interference.

    Imagine strings on a guitar, pinned on either side. When a string is plucked, the wave travels through the string, hits the pin on the other side and gets reflected back — creating two waves moving in opposite directions with the same frequency. This kind of interference is called a standing wave.

    Reshef and Camayd-Muñoz applied the same idea to the light in the waveguide. They “pinned-down” the light by shining beams in opposite directions through the device to create a standing wave.

    The individual waves were still oscillating quickly, but they were oscillating at the same frequency in opposite directions, meaning that at certain points they canceled each other out and at other points they added together, creating an all-light or all-dark pattern. And, because of the zero-index material, the team was able to stretch the wavelength large enough to see.
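    The underlying arithmetic is the standard superposition of two counter-propagating waves of equal frequency:

    \[ \cos(kx - \omega t) + \cos(kx + \omega t) = 2\cos(kx)\cos(\omega t), \]

    a stationary spatial envelope \( \cos(kx) \) (the all-light and all-dark fringes) multiplied by a purely temporal oscillation. The fringes repeat every half wavelength, \( \lambda/2 = \pi/k \), so as a zero-index material drives \( k \) toward zero, the fringe spacing stretches from the sub-micron scale up to something an ordinary microscope or infrared camera can resolve.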

    This may be the first time a standing wave with infinitely-long wavelengths has ever been seen.


    Real-time, unprocessed video of standing waves of light in a 15-micrometer-long, zero-index waveguide taken with an infrared camera. The perceived motion is caused by atmospheric disturbances to the free-standing fibers that couple light onto the chip, changing the relative phase between the two incoming beams. Credit: Harvard SEAS



    “We were able to observe a breath-taking demonstration of an index of zero,” said Reshef, who recently accepted a position at the University of Ottawa. “By propagating through a medium with such a low index, these wave features, which in light are typically too small to detect directly, are expanded so you can see them with an ordinary microscope.”

    “This adds an important tool to the silicon photonics toolbox,” said Camayd-Muñoz. “There's exotic physics in the zero-index regime, and now we're bringing that to integrated photonics. That's an important step, because it means we can plug directly into conventional optical devices, and find real uses for zero-index phenomena. In the future, quantum computers may be based on networks of excited atoms that communicate via photons. The interaction range of the atoms is roughly equal to the wavelength of light. By making the wavelength large, we can enable long-range interactions to scale up quantum devices.”

    The paper was co-authored by Daryl I. Vulis, Yang Li and Marko Loncar, the Tiantsai Lin Professor of Electrical Engineering at SEAS. The research was supported by the National Science Foundation and was performed in part at the Center for Nanoscale Systems (CNS).

    View at the original source




    Image credit: Shyam's Imagination Library


    Crying is a natural response humans have to a range of emotions, including sadness, grief, joy, and frustration. But does crying have any health benefits?

    It is not unusual to cry, and both sexes cry more than people may assume. In the United States, women cry an average of 3.5 times per month and men cry an average of 1.9 times a month.
    This article explores why people cry and what health benefits crying may have.
    Crying is a natural response to emotions or irritants like dust in the eyes.
    Humans produce three types of tears:
    • Basal: The tear ducts constantly secrete basal tears, a protein-rich antibacterial liquid that helps to keep the eyes moist every time a person blinks.
    • Reflex: These are tears triggered by irritants such as wind, smoke, or onions. They are released to flush out these irritants and protect the eye.
    • Emotional: Humans shed tears in response to a range of emotions. These tears contain a higher level of stress hormones than other types of tears.
    When people talk about crying, they are usually referring to emotional tears.

    Benefits of crying

    People may try to suppress tears if they see them as a sign of weakness, but science suggests that doing so could mean missing out on a range of benefits. Researchers have found that crying:

    1. Has a soothing effect

    Self-soothing is when people:
    • regulate their own emotions
    • calm themselves
    • reduce their own distress
    A 2014 study found that crying may have a direct, self-soothing effect on people. The study explained how crying activates the parasympathetic nervous system (PNS), which helps people relax.

    2. Gets support from others

    As well as helping people self-soothe, crying can help people get support from others around them.
    As this 2016 study explains, crying is primarily an attachment behavior, as it rallies support from the people around us. This is known as an interpersonal or social benefit.

    3. Helps to relieve pain

    Research has found that in addition to being self-soothing, shedding emotional tears releases oxytocin and endorphins.
    These chemicals make people feel good and may also ease both physical and emotional pain. In this way, crying can help reduce pain and promote a sense of well-being.

    4. Enhances mood

    Crying may help lift people's spirits and make them feel better. As well as relieving pain, oxytocin and endorphins can help improve mood. This is why they are often known as "feel good" chemicals.

    5. Releases toxins and relieves stress

    When humans cry in response to stress, their tears contain a number of stress hormones and other chemicals.
    Researchers believe that crying could reduce the levels of these chemicals in the body, which could, in turn, reduce stress. More research is needed into this area, however, to confirm this.

    6. Aids sleep

    A small study in 2015 found that crying can help babies sleep better. Whether crying has the same sleep-enhancing effect on adults is yet to be researched.
    However, it follows that the calming, mood-enhancing, and pain-relieving effects of crying above may help a person fall asleep more easily.

    7. Fights bacteria

    Crying helps to kill bacteria and keep the eyes clean, as tears contain an enzyme called lysozyme.
    A 2011 study found that lysozyme had such powerful antimicrobial properties that it could even help to reduce risks presented by bioterror agents, such as anthrax.

    8. Improves vision

    Basal tears, which are released every time a person blinks, help to keep the eyes moist and prevent mucous membranes from drying out.

    As the National Eye Institute explains, the lubricating effect of basal tears helps people to see more clearly. When the membranes dry out, vision can become blurry.

    When to see a doctor

    Crying has a number of health benefits, but frequent crying may be a sign of depression.

    Crying in response to emotions such as sadness, joy, or frustration is normal and has a number of health benefits.

    However, sometimes frequent crying can be a sign of depression. People may be depressed if their crying:
    • happens very frequently
    • happens for no apparent reason
    • starts to affect daily activities
    • becomes uncontrollable
    Other signs of depression include:
    • having trouble concentrating, remembering things, or making decisions
    • feeling fatigued or without energy
    • feeling guilty, worthless, or helpless
    • feeling pessimistic or hopeless
    • having trouble sleeping or sleeping too much
    • feeling irritable or restless
    • not enjoying things that were once pleasurable
    • overeating or undereating
    • unexplained aches, pains, or cramps
    • digestive problems that do not improve with treatment
    • persistent anxiety
    • suicidal thoughts or thoughts of self-harm
    If a person is experiencing symptoms of depression, or someone they know is, then they should talk to a doctor.
    Should a person feel suicidal, or know someone who is feeling that way, they should call:
    • emergency services 

    Takeaway

    Crying is a normal human response to a whole range of emotions that has a number of health and social benefits, including pain relief and self-soothing effects.

    However, if crying happens frequently, uncontrollably, or for no reason, it could be a sign of depression. If this is the case, it is a good idea to speak to a doctor. 


    View at the original source 





    A Fatigue Cost Calculator reveals that a U.S. employer with 1,000 workers can lose about $1.4 million annually due to costs associated with exhausted workers.


    Sleep disorders and sleep deficiency are hidden costs that affect employers across the U.S. Seventy percent of Americans admit that they routinely get insufficient sleep, and 30 percent of U.S. workers and 44 percent of night-shift workers report sleeping less than six hours a night. In addition, an estimated 50 million–70 million people have a sleep disorder, often undiagnosed. In total, the costs attributable to sleep deficiency in the U.S. were estimated to exceed $410 billion in 2015, equivalent to 2.28 percent of the gross domestic product.

    Analysis of existing data, using a new Fatigue Cost Calculator developed through the Sleep Matters Initiative at Brigham Health for the National Safety Council (NSC), reveals that a U.S. employer with 1,000 workers can lose about $1.4 million each year in absenteeism, diminished productivity, health care costs, accidents, and other occupational costs associated with exhausted employees, many of whom have undiagnosed and untreated sleep disorders.

    Introduced at the NSC Congress and Expo, the Fatigue Cost Calculator is free online. Employers can use it to determine how much money a tired workforce costs their business by entering specific data — including workforce size, industry, and location — to predict the prevalence of sleep deficiency and common sleep disorders among their employees. Using an algorithm generated by integrating information from sleep science literature and publicly available government data, the calculator can estimate both the prevalence of employee sleep deficiency and the resulting financial loss.

    It also estimates the savings that might be expected from implementation of a sleep health education program that includes screening for untreated sleep disorders, such as obstructive sleep apnea and insomnia.
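    The NSC has not published the calculator’s internal coefficients, but its basic structure can be sketched: estimate how many employees are affected by each condition from prevalence rates, then multiply by a per-employee annual cost. Here is a minimal illustration in Python, using the per-employee figures quoted later in this article; the prevalence rates are assumed, and the sketch omits the accident, absenteeism, and other cost categories the real calculator includes.

# Illustrative sketch only: prevalence rates below are assumed, and the real
# NSC/Brigham calculator uses industry- and location-specific inputs plus
# accident and absenteeism costs that this toy version omits.
PER_EMPLOYEE_COST = {
    "obstructive sleep apnea": 3_000,      # excess health care (article figure)
    "insomnia": 2_000 + 10 * 300,          # excess care + ~10 unproductive days
}                                          # valued at an assumed $300/day
PREVALENCE = {"obstructive sleep apnea": 0.09, "insomnia": 0.14}   # assumed

def estimated_annual_loss(workforce_size: int) -> float:
    """Expected yearly cost: sum over disorders of (workers affected x cost)."""
    return sum(workforce_size * PREVALENCE[d] * PER_EMPLOYEE_COST[d]
               for d in PER_EMPLOYEE_COST)

print(f"1,000 workers: ${estimated_annual_loss(1_000):,.0f} per year")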

    “We estimate that the costs of fatigue in an average-sized Fortune 500 company consisting of approximately 52,000 employees is about $80 million annually,” said Matthew Weaver, a scientist with the Brigham Health Sleep Matters Initiative who helped develop the calculator.

    The mission of the Sleep Matters Initiative, led by investigators from Brigham Health and Harvard Medical School, is to improve treatment of sleep and circadian disorders in order to improve health, safety, and performance, and to promote change in social norms around sleep health.

    “Promotion of healthy sleep is a win-win for both employers and employees, enhancing quality of life and longevity for workers while improving productivity and reducing health care costs for employers,” said Charles A. Czeisler, director of the Division of Sleep and Circadian Disorders at Brigham and Women’s and Baldino Professor of Sleep Medicine at Harvard Medical School.

    “Additionally, occupational fatigue-management programs can increase knowledge of sleep disorders, educate participants on the impact of reduced alertness due to sleep deficiency, and teach fatigue countermeasures, as well as screen for untreated sleep disorders.”
    Other findings revealed by the Fatigue Cost Calculator include:
    • A national transportation company with 1,000 employees likely loses more than $600,000 a year because of tired employees. Motor vehicle crashes are the leading cause of workplace deaths, underscoring the need for alert, attentive employees.
    • More than 250 employees at a 1,000-worker national construction company likely have sleep disorders, which increase the risk of being injured or killed on the job. The construction industry has the highest number of on-the-job deaths each year.
    • A single employee with obstructive sleep apnea can cost an employer more than $3,000 a year in excess health care costs.
    • An employee with untreated insomnia is present but not productive for more than 10 full days of work annually, and accounts for at least $2,000 in excess health care costs.
    • An average Fortune 500 company could save nearly $40 million a year if half of its workforce engaged in a sleep-health program.
    “This research reinforces that sleepless nights hurt everyone,” said Deborah A.P. Hersman, president and CEO of the National Safety Council. “Many of us have been conditioned to just power through our fatigue, but worker health and safety on the job are compromised when we don’t get the sleep we need. The calculator demonstrates that doing nothing to address fatigue costs employers a lot more than they think.”

    Development of the Fatigue Cost Calculator was supported by a contract from the National Safety Council to the Brigham and Women’s Physicians Organization. 


    Smokestacks belching hazardous gases, rivers so polluted they catch fire, workers in identical overalls turning bolts with wrenches: For many Americans, the word “manufacturing” conjures up negative, old-fashioned images. Or, we think of it as something that takes place in less-developed nations, as has increasingly been the case. Many have said that factories will continue to locate wherever the work can be done most cheaply, despite political messaging about bringing back manufacturing jobs.

    Manufacturing accounts for about 13% of the U.S. economy. Should we even focus on trying to “bring it back,” now that information and services — the “knowledge economy” — seem a more promising path? Andrew Liveris firmly believes we should. In fact, he said in a recent talk at the University of Pennsylvania that manufacturing is essential to our knowledge economy, and to America’s competitiveness on the global stage.

    Liveris is the executive chairman of DowDuPont, a $73 billion holding company (the two giant chemical companies merged in September), and Chairman and CEO of The Dow Chemical Company. He has advised both the Obama and Trump administrations on manufacturing issues. (Liveris was head of Trump’s now-defunct American Manufacturing Council.)

    The author of Make It in America: The Case for Reinventing the Economy, in which he writes that America’s economic growth and prosperity depend upon a strong manufacturing sector, Liveris was interviewed at Penn’s Perry World House during Penn Global Week by Wharton School Dean Geoffrey Garrett, who referred to Liveris as “the cheerleader of advanced manufacturing.”

    A Key Difference

    Garrett stated that President Trump has been talking about bringing U.S. economic growth back up to the level it was before the 2008 Great Recession. Since World War II the economy has typically grown about 4% a year, but in 2016, the economy grew just 1.6%. What, he asked, would it take to see those higher numbers again?

    Liveris commented that the very nature of growth has changed dramatically because human civilization is going through “one of its every-few-hundred-years massive tipping points,” due to digitization. He said this phenomenon was as disruptive as Ford’s introduction of mass-produced cars in the horse-and-carriage era. This tipping point is causing enormous dislocation, including the elimination of jobs and the loss of meaningful work. Moreover, he said, “the job of 20 or 30 years ago is paying less — wage rates are down and all of that — so there are a lot of unhappy and angry people out there.”

    And America is under-prepared, including from a policy point of view, he said. Liveris talked about the profound implications for business leaders as the forces of globalization collide with the forces of digitization. He said most corporations are not yet nimble enough to re-design themselves to accommodate these trends.

    Yet, he said, substantial economic expansion is possible. “In the immediate term, can we get 3.5% growth in this country? You bet we can,” said Liveris. He noted that instituting policies to spur foreign direct investment would help as they did in the Clinton and Reagan eras. He also cited tax reform, infrastructure spending and business deregulation as important factors. He added, though, that the U.S. currently has “a massive, massive issue in how our government is functioning,” so change is not likely to happen overnight.

    According to Liveris, there is a widespread lack of understanding among the public of what today’s manufacturing — which he referred to as advanced manufacturing — actually consists of. (Definitions vary, but the OECD defines advanced manufacturing technology as computer-controlled or micro-electronics-based equipment used to make products.) Liveris stated, “We are generating a new wave of technology to generate a knowledge economy. And a knowledge economy will need things made. They’ll just be made differently.”

    Advanced manufacturing might include making smartphones, solar cells for roofs, batteries for hybrid cars, or innovative wind turbines. Liveris said he had visited a DowDuPont factory the previous week that is working on advanced composites to enable wind turbines with blades the size of football fields. The goal is to produce blades light and efficient enough to make wind power a viable reality. “That’s technology. That’s advanced manufacturing,” he said.
    “In the immediate term, can we get 3.5% growth in this country? You bet we can.”
    He asked the audience to envision “a knowledge economy based on the collision and intersection of the sciences.” Those who think the tech revolution is only about “the Facebooks and the Googles, connectivity and all that,” are dead wrong, he said.

    Not Enough Work, or Not Enough Workers?

    Doesn’t the use of more robotics and automation lead to job loss? Garrett asked. Or, is the problem that workers aren’t appropriately skilled to fill new kinds of jobs? Liveris said he was firmly in the second camp. “I have job openings now at Dow and at DuPont that I can’t get the skills for. And engineering jobs open.”

    He elaborated that the way machines provide insights is changing, and noted, “We humans will have to read those insights. I can’t [find] enough of those humans. That’s the issue we’re dealing with in this country.”

    Liveris said that 7.5 million technology jobs left America between 2008 and 2016 because the country wasn’t supplying appropriate candidates. The reaction of many businesses was to re-locate to “the Chinas, the Indias, the places that were supplying that sort of skill.” In the United States right now, he said, there are half a million technology jobs open, but American educational institutions are only graduating roughly between 50,000 and 70,000 candidates per year, so there’s a “massive under-supply.” In the next three years, there will be 3.5 million jobs created, and Liveris said the U.S. might only be able to fill about 1.5 million of them through a combination of graduation and immigration. “Unless immigration is fooled with, which is a whole other issue.”

    According to Liveris, a critical reason for America to revive its manufacturing sector is to promote innovation. “Something that we at Dow and many of us in manufacturing know: If you have the shop floor, if you make things, you have the prototype for the next thing, so you can innovate.” Conversely, if you stop making those things, your R&D diminishes dramatically, he said.
    Liveris said that 7.5 million technology jobs left America between 2008 and 2016 because the country wasn’t supplying appropriate candidates.
    The U.S. should be incentivizing the technologies that America is good at, said Liveris. Everybody knows about Silicon Valley, Liveris noted, but fewer know that the U.S. is prominent in advanced sensors, which are critical to the progress of the Internet of Things (IoT) sector. Other areas in which America stands out are lightweight composites and 3D printing. He noted that technologies like these have been developed at various institutions “in a somewhat haphazard way, which is very American. That’s great. That’s creativity.” But, he said, shouldn’t we as a country double down on the things we do best and become the world leader?

    Liveris called advanced manufacturing “the best path for the United States” and said, “We’re so naturally suited for it if we’d just get the policies to help us.” He believes that the U.S. should already be at the most advanced layer of economic development based on technology. “We have cheap money, we’ve got skills, we’ve got low-cost energy. We should be having an investment boom in this country,” he said.

    He noted, though, that we have created barriers to investment that are preventing this from happening. Borrowing an expression from Indian Prime Minister Narendra Modi, Liveris said that there were two kinds of countries in the world: red tape countries (hampered by bureaucracy and over-regulation) and red carpet countries (welcoming to investors). The U.S. has unfortunately become a red tape country, he said.

    He called investment “the biggest job creator out there,” and stated that Germany for example has figured out how to do this. “It’s the poster child of investment in Europe.” China, too, has mastered it, and “other countries who want to trade with the United States are mastering it because they incentivize it.”
    “If you have the shop floor, if you make things, you have the prototype for the next thing, so you can innovate.”
    Closing America’s Education Gap

    A big proponent of STEM education, Liveris said that American schools are not graduating the workers we need. “We have convinced ourselves that a four-year college degree [teaching] the skills we used to have in the last century is what we should still keep producing.” He said that re-tooling American education needs to happen immediately, with STEM education incorporated at every level including elementary school.

    Liveris said DowDuPont is funding a STEM-dedicated school, in conjunction with Michigan State University, in Dow Chemical’s home base of Midland, Michigan. The school will offer curricula for kindergarten through 12th grade, with MSU course offerings for college students, according to the Michigan news site MLive.

    The pilot school will also provide teacher enrichment programs. Liveris said that American teachers need to be better trained and rewarded. “We do something very bad in this country, which is we don’t celebrate teachers at the elementary, middle and high school level. We should be putting them on pedestals. And giving them the skills to teach STEM.”

    View at the original source









    Image: Shyam's Imagination Library

    Happiness is in short supply at work these days. Deadlines, staff shortages, productivity pressures and crazy stress push even the most talented and temperate people to want to quit their jobs. But that’s not a realistic option, even for folks in the C-suite. Annie McKee, director of the Penn CLO and Medical Education programs at the University of Pennsylvania, where she teaches leadership and emotional intelligence, has a better idea. In her book, How To Be Happy At Work, she outlines three requirements that workers need to feel more fulfilled on the job. McKee spoke about the concepts in her book on the Knowledge@Wharton show on SiriusXM channel 111.

     The following is an edited transcript of the conversation.

    Knowledge@Wharton: How many people do you think are not happy at work?

    Annie McKee: I don’t think we even have to guess. Gallup has been studying people for years, and upwards of two-thirds of us are either neutral, which means we don’t care, or we’re actively disengaged. Disengagement and happiness go hand in hand, so an awful lot of people are not happy at work. Unhappy people don’t perform as well as they could. When we’re negative, cynical, pessimistic, we simply don’t give our all, and our brains don’t work that well just when we need people’s brains to be working beautifully.

    Knowledge@Wharton: Has this problem ramped up in the last two decades or so? As much as digital is phenomenal for us, a lot of people feel under pressure because of what digital does to accelerate change.

    McKee: The world is changing at a rapid pace, obviously. As much as we love our always-connected world, it can mean that we work all of the time. We’re always one minute away from that next email that’s going to bring tragedy or crisis to our working lives. Some of us never turn it off, and that’s not good for us.

    Knowledge@Wharton: Where did your idea for the book come from?

    McKee: I’ve worked in organizations all over the world for decades now. I’ve looked at leadership practices, emotional intelligence, culture and all of those things that impact the bottom line and people’s individual effectiveness. I decided to take another look and see what people were trying to tell us. All of these studies that we did around the world were practical studies. People were telling us, “I want to be happy, I want to be fulfilled, I want to love my job, I’m not as happy or as fulfilled as I could be, and here is what I need.” And then they went on to tell us what they need.

    Knowledge@Wharton: Are executives aware of their employees’ problems? Are they also aware that they may be susceptible to this?
    “Unhappy people don’t perform as well as they could.”
    McKee: It doesn’t matter where you sit in the organization, you are susceptible to disengagement and unhappiness, even at the very top. We think if you’re making all of that money and you’ve got all of that power and that great job, it’s going to be perfect. The best leaders in our organizations, at the very top and all the way down to the shop floor, understand that people matter, feelings matter, and it’s job number one to create a climate where people feel good about what they’re doing, where they’re happy, engaged and ready to share their talents.

    Knowledge@Wharton: What are the key ingredients to finding that happiness?

    McKee: From my work, I’ve discovered three things. Number one, people feel that they need to have impact on something that is important to them, whether it’s people or a cause or the bottom line. They need to feel that their work is purposeful, and it’s tied to values that they care about.

    Number two, we need to feel optimistic that our work is tied to a personal vision of the future. The organization’s vision isn’t enough. As good as it may be, we have to know that what we’re doing ties to a personal vision of our future.

    Number three, we need friends at work. We’ve learned over the course of our lives you shouldn’t be friends with people at work, that it’s dangerous somehow, that it will cloud your judgment. I don’t agree. I think we need to feel that we are with our tribe in the workplace, that we belong, that we’re with people that we respect and who respect us in return. We need warmth, we need caring, and we need to feel supported.

    Knowledge@Wharton: I would think most people looking for a job, whether they are coming out of college or shifting careers mid-life, are looking for that area that would make them happy. When you have that expectation of being in the right sector to begin with, you hope that you have the happiness to go along with it.

    McKee: We do hope that we get into the right organization and there’s a good fit between our values and the organization’s values. We really try hard. But we get in there and the pressures of everyday life, and the crises and the stress can really tamp down our enthusiasm and our happiness.

    Also, a lot of us are susceptible to what I call happiness traps. We end up doing what we think we should do. We take that job with that fancy consulting firm or that wonderful organization not because we love it and not because it’s a fit, but because we think we should. Frankly, some of us have ambition that goes into overdrive. Ambition is a great thing, until it’s not.

    Knowledge@Wharton: Is that part of the reason why we see more people who have been with a company for 20 years, 25 years and suddenly pivot? They may be going to work for a nonprofit. You see these stories popping up, especially with people in the C-suite.

    McKee: You do see that. You see senior leaders all of a sudden saying, “Enough is enough, I [want to do] something different.” But I really want to be clear, you don’t always have to run away. In fact, you want to run towards something. If you feel you’re not happy in the workplace, quitting your job is probably not the first answer, and some of us can’t. What we need to do is figure out what we need, what we want, how to have impact, what will make us feel hopeful about our future, what kind of people we want to work with and for, and then go find that either in our organization or elsewhere.

    Happiness starts inside each of us. It’s tempting to blame that toxic boss or that horrible organizational culture, and those things may be true. But if you want to be happy at work, you first have to look inside and ask what is it that you want? What will make you feel fulfilled? Which happiness traps have you fallen prey to? And get yourself out.

    Knowledge@Wharton: What are the happiness traps?

    McKee: There’s what I call the “should” trap. We do what we think we should do. We show up to work acting like someone we’re not. That is soul-destroying, and it’s fairly common. [There’s also] the “ambition” trap. When our ambition drives us from goal to goal and we don’t even stop to celebrate the accomplishment of those goals, something is wrong.

    Some of us feel helpless, stuck. The “helplessness” trap may be the most serious of all. It’s really hard to get out of because we don’t feel we have any power. My message is we have a lot more power and control over not only our attitude but what we do and how we approach our work on a daily basis and in the long term than maybe we think we do.
    “Ambition is a great thing, until it’s not.”
    Knowledge@Wharton: Earlier in your life, you found yourself fitting into these patterns as well.

    McKee: I did. Early in my life I wasn’t teaching in a wonderful institution like Penn. I didn’t even have what you would call a professional career. I had jobs like waiting tables and cleaning houses and taking care of elderly people. I was making ends meet. And it wasn’t easy.

    I had two choices, I could either say to myself this is miserable and I hate it, or I could look for something that was fulfilling in what I did. I tried to do that. I did find aspects of my job, whether it was cleaning houses and feeling like I was doing a good job or finding a mentor in some of these workplaces, that really made it worthwhile to me.

    Knowledge@Wharton: Do you have to be 100% happy all of the time? I think if you can find areas of happiness, it can make your job or your life so much easier to go through.

    McKee: Happiness isn’t just about feeling good every moment of the day, and it’s not just about pleasure. That’s hedonism, and we’re not seeking that. Frankly, a little bit of stress is a good thing. It pushes us to be innovative and to do things differently and to push harder. So, it’s not about just feeling good. But we do need a foundation of purpose, hope and friendships. We do need to know that what we do matters at work, that we are doing something that is tied to our future, and that the people we work with are great.

    Knowledge@Wharton: You mentioned taking the time to recognize your accomplishments, but there are companies that want you to push on to the next project. They don’t give you the opportunity to slow down even for an hour to enjoy it.

    McKee: Most of our organizations are really hard-driving, especially publicly traded organizations. I’m not even sure they’re that different than other institutions these days. The pressure is on everywhere, and the reality is we do move from project to project, goal to goal. What choices can we make in the middle of that culture? We don’t have to be victims of our organizational culture, and we don’t have to be victims of that bad boss you might have or maybe you’ve had in the past. We can make choices about what we do with our time, our energy and our emotional stance.

    Knowledge@Wharton: Going back to the friends component in the workplace, does it matter where those friends come from within the structure of the company? A lot of people say you have to be careful if you want to try to be friends with the boss.

    McKee: It doesn’t matter where your friends are, but it does matter whether or not you have your eyes open and recognize what people are thinking about how you are behaving and who you are friends with. You’ve got to be aware of your organization’s culture and the rules of the road.
    If you’re violating some of those rules — for example, going up the hierarchy and building friendships with people who are a couple levels above you or maybe in another division — you need to understand what the implications of that are. And you need to be maybe a little bit careful.

    Knowledge@Wharton: How does the middle manager deal with this?

    McKee: Middle managers get it from all sides. They are pulled in every direction, and it is probably the hardest job in any organization. They, more than anybody, need to hear this message. Life is too short to be unhappy at work. Middle managers have a tremendous impact on the people who work for them, and recognizing that you more than anybody are the creator and the curator of the culture in the organization is an important place to start.

    Knowledge@Wharton: Sometimes managers forget about the life people have outside of work.

    McKee: We’re here at the Wharton School, and we’ve been studying management now for over 100 years. Some of the early approaches to managing organizations were really destructive, and one of the legacies of that early research has been the attitude that people don’t matter and that private lives ought to be left at the door of the office. It’s impossible to leave our private lives at the door of the office. It doesn’t mean that we talk about it all of the time, but we bring our experiences with us and we bring our feelings with us. Managers need to recognize that.

    It’s also hard to find what is commonly called work-life balance. By the way, I don’t like that phrase. I think it’s a myth. I don’t think there is any magic formula that says if we get it just right we’re going to be happy at work and happy at home. It’s more about understanding that the lines are blurred between work and home now, and we need to learn how to manage our choices and our attention.

    Knowledge@Wharton: What about those who work remotely and can feel very isolated and disconnected?

    McKee: I understand the isolation and feeling kind of left out. The reality is that it takes a lot more effort to build relationships when we work remotely. We need to take time. When we’re working remotely, we get on the phone, we do the work that needs to be done, we talk about the project, and we get off the phone. That leaves us feeling kind of empty. We need to take that extra five minutes to have a chat, have a laugh, feel like we are in a relationship with somebody. It takes effort and self-management because the temptation is to just do the work. You talk about the gig economy, right? We’re all sort of working in a portfolio manner these days. We take on this bit of work and that bit of work, and much of it is virtual.
    “Life is too short to be unhappy at work.”
    I think we need to figure this out because the bottom line is that we have not changed as human beings. We still need to feel like we belong, we need to feel that we’re cared for, and we need to be able to care for others in return. If we’re working far away, we’ve got to take extra time and make a concerted effort to build those relationships in a different kind of way than if we’re in person.
    I’m a big proponent of working from home or working remotely. I think it’s really helpful to individuals and companies. People who are able to work at home feel trusted, and when you feel trusted you are more committed to your organization. A lot of people report being able to get more done away from the office because you don’t have the interruptions. The downside is that you have to find a way to keep the relationships fresh and alive because that’s as important as getting that project done.

    Knowledge@Wharton: Companies seem to be more aware of employee happiness than they used to be, which is a good thing. Do you think we’re going to continue down that path?

    McKee: Companies are more aware, so are enlightened CEOs and enlightened leaders. I think we will continue down the path for the following reasons. It’s not just nice-to-have, and it’s not just about feeling good. We’ve got solid research coming out of positive psychology, neuroscience and management that tells us that feelings matter. When we feel good, we’re smarter. And we need smart employees now. We need people who are committed, who are engaged. The research is pretty clear. Happiness before success. If we want our employees to be at their best, we need to care about their emotional well-being as well as their physical well-being.

    View it at the original source


    0 0





    Does the human ability to innovate suggest an immunity to total extinction?

    Yes and no. Currently, innovation reduces our chance of extinction in some ways, and increases it in others. But if we innovate cleverly, we could become just about immune to extinction.


    The species that survive mass extinctions tend to share three characteristics.

    They’re widespread. This means local disasters don’t wipe out the entire species, and some small areas, called refugia, tend to be unaffected by global disasters. If you’re widespread, it’s more likely that you have a population that happens to live in a refugium.

    They're ecological generalists. They can cope with widely varying physical conditions, and they're not fussy about food.

    They're r-selected. This means that they breed fast and have short generation times, which allows them to rapidly grow their populations and adapt genetically to new conditions.

    Innovation gives humans the ability to be widespread ecological generalists. With technology, we can live in more diverse conditions and places than any other species. And while we can't (currently) grow our populations rapidly like an r-selected species, innovation does allow us to adapt quickly at the cultural level.

    Technology also increases our connections to one another and connectivity is a two-edged sword. Many species consist of a network of small, local populations, each of which is somewhat isolated from the others. We call this a metapopulation. The local populations often go extinct, but they are later re-seeded by others, so the metapopulation as a whole survives. 
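    The dynamics described above are easy to see in a toy simulation. Below is a minimal sketch in Python; the patch count and the extinction and recolonization probabilities are invented for illustration, not drawn from any real species data.

        import random

        def simulate_metapopulation(n_patches=20, years=1000,
                                    p_local_extinction=0.05, p_recolonize=0.3):
            """Toy metapopulation: each patch is occupied (True) or empty (False).
            Occupied patches suffer local disasters; empty patches can be
            re-seeded as long as at least one occupied patch remains."""
            patches = [True] * n_patches
            for _ in range(years):
                any_occupied = any(patches)
                for i in range(n_patches):
                    if patches[i] and random.random() < p_local_extinction:
                        patches[i] = False   # local extinction
                    elif not patches[i] and any_occupied and random.random() < p_recolonize:
                        patches[i] = True    # re-seeded from another patch
                if not any(patches):
                    return False             # global extinction
            return True                      # metapopulation survived

        survived = sum(simulate_metapopulation() for _ in range(100))
        print(f"{survived}/100 runs survived with 20 loosely connected patches")

    Setting n_patches=1 (a single, fully connected global population) makes one bad draw fatal, which is exactly the worry raised in the next paragraph.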

    Humans used to be a metapopulation, but thanks to innovation, we're now globally connected. Archaeologists believe that many past civilizations, such as the Easter Islanders, fell because of unsustainable ecological and cultural innovations. The impact of these disasters was limited because these civilizations were small and disconnected from other such civilizations.

    These days, a useful innovation can spread around the world in weeks. So can a lethal one. With many of the technologies and chemicals we're currently inventing, we can't be certain about their long-term effects; human biology is complex enough that we often can't be absolutely certain something won't kill us in a decade until we've waited a decade to see. We try to be careful and test things before they're released, and the probability that any particular invention could kill us all is tiny, but since we're constantly innovating, it's a real possibility.

    Pandemics pose the same problem for a well-connected species. There are certain scenarios in which species extinction is really hard to avoid; fortunately, they’re also very unlikely. But we are definitely not immune to this.

    The most likely cause of our extinction, in my opinion, is innovation in machine learning/AI. This could destroy the planet, but even if it doesn’t, humans will ultimately be redundant to the dominant systems. They might keep us alive in a zoo somewhere, but I doubt it. A happier scenario (to me at least) is transhumanism, where humans become extinct in a sense because we’ve managed to liberate ourselves from biology.

    So how could innovation prevent our extinction? We seed the galaxy with independently evolving human populations to create a new metapopulation. These local populations would hopefully be sufficiently isolated that some would survive an innovation or disaster that wipes out the rest. They would, of course, evolve in response to local conditions, perhaps creating several new species. So you could say this is still extinction, but it's as close as we'll come to persistence in our ever-changing universe. 


    0 0



    Indian Oil Corporation Limited (IOCL) invites applications for 45 posts of Junior Engineering Assistant on a contract basis at Mathura Refinery, Uttar Pradesh. Apply online before 31 October 2017. The official website is iocl.com. Qualification/eligibility conditions, how to apply, and other rules are given below…

    Advt. No. : MR/HR/RECT/JEA(ALL INDIA)/2017

    IOCL Job Details :
    • Post Name : Junior Engineering Assistant
    • No of Vacancy : 45 Posts
    • Pay Scale : Rs. 11900-32000/-
    Discipline wise Vacancy : 
    1. Chemical : 15 Posts
    2. Electrical : 07 Posts
    3. Mechanical : 13 Posts
    4. Instrumentation : 09 Posts
    5. Fire & Safety : 01 Post
    Eligibility Criteria for IOCL Recruitment :
    • Educational Qualification : 3 years Diploma in Electrical/Mechanical/Instrumentation/Instrumentation & Electronics/Instrumentation and Control Engineering from a recognized Institute/University, OR 3 years Diploma in Chemical/Refinery & Petrochemical Engg., OR BSc (Maths, Physics, Chemistry or Industrial Chemistry) from a recognized Institute/University.
    • Age Limit : 18 to 26 years as on 31.10.2017
    Job Location : Mathura (Uttar Pradesh)

    IOCL Selection Process : Selection will be based on a Written Test and a Skill/Proficiency/Physical Test (SPPT).

    Application Fee : General and OBC candidates have to pay Rs. 150/- through Online mode using either Debit/Credit Card or Net-Banking only. SC/ST/PwD/ExSM candidates are exempted from payment of the application fee.

    How to Apply for IOCL Vacancy : Interested candidates may apply online through the website https://www.iocl.com from 09.10.2017 to 31.10.2017. Candidates may also send a hard copy of the online application along with self-attested copies of all supporting documents by ordinary post to DGM(HR), HR Dept, Administration Building, Mathura Refinery, Mathura, Uttar Pradesh-281005 on or before 07.11.2017.
    Important Dates to Remember :
    • Starting Date for Submission of Online Application : 09.10.2017
    • Last Date for Submission of Online Application : 31.10.2017
    • Last Date for Submission of Hard Copy of Online Application : 07.11.2017

    0 0




    US officials have been given a stark warning about the potential dangers of a nuclear electromagnetic pulse (EMP) bomb triggered by reclusive North Korea.

    According to experts, such a blast could end up killing 90% of Americans indirectly by knocking out the power grid and all electrical devices within the blast radius.

    Dr. William R. Graham and Dr. Peter Vincent Pry of the EMP Commission outlined to the US House of Representatives the dangers posed by such a detonation, in which a hydrogen bomb is exploded at an altitude of between 30 and 400 km above a target. Such a weapon would knock out refrigeration for food storage, electric lights, communications, and water processing.

    "With the development of small nuclear arsenals and long-range missiles by new, radical U.S. adversaries, beginning with North Korea, the threat of a nuclear EMP attack against the U.S. becomes one of the few ways that such a country could inflict devastating damage to the United States," the pair warned in a written statement .

    "It is critical, therefore, that the U.S. national leadership address the EMP threat as a critical and existential issue, and give a high priority to assuring the leadership is engaged and the necessary steps are taken to protect the country from EMP."

    Dr. Graham, a former science advisor to President Reagan, and Dr. Pry, a former CIA officer, urged President Trump to prepare for a possible EMP strike.

    They also warned that North Korea's weaponry is becoming more of a threat as the reclusive nation continues to schedule ICBM tests.

    "The EMP Commission finds that even primitive, low-yield nuclear weapons are such a significant EMP threat that rogue states, like North Korea, or terrorists may well prefer using a nuclear weapon for EMP attack, instead of destroying a city."

    The higher an EMP bomb is detonated, the wider the range of destruction.

    At 400 km (250 miles), an EMP bomb would detonate just under the orbit of the International Space Station, and the resulting pulse would be enough to affect the majority of the US mainland.
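    As a rough check on that claim, the ground area within line of sight of a high-altitude burst can be estimated with standard horizon geometry; this is a generic physics sketch, not a calculation from the article or the EMP Commission.

        import math

        R_EARTH_KM = 6371.0

        def line_of_sight_radius_km(burst_altitude_km):
            """Ground radius from which a burst at the given altitude is above
            the horizon: r = R * arccos(R / (R + h))."""
            R, h = R_EARTH_KM, burst_altitude_km
            return R * math.acos(R / (R + h))

        for h in (30, 400):
            print(f"burst at {h:3d} km -> line-of-sight radius ~{line_of_sight_radius_km(h):,.0f} km")
        # ~618 km at 30 km altitude, ~2,200 km at 400 km altitude

    A radius of roughly 2,200 km centered over the middle of the country is consistent with the article's claim that a 400 km burst could affect the majority of the US mainland.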

    View at the original source 

    Please also read, on North Korea:


    North Korea 'could kill almost four million people in Seoul and Tokyo with retaliatory nuclear attack' 

    “Creative diplomacy is vital to defuse Korean crisis”




    Time to Accept Reality and Manage a Nuclear-Armed North Korea  




    0 0





    Featured excerpt from WTF? What’s the Future and Why It’s Up to Us by Tim O’Reilly


    If you’re an entrepreneur or aspiring to become one, Tim O’Reilly is the kind of mentor you should try to enlist. He’s been there and done that in the New Economy since, well, pretty much since there’s been a New Economy.

    O’Reilly started writing technical manuals in the late 1970s, and by the early 1980s, he was publishing them, too. His company, O’Reilly Media Inc. (formerly O’Reilly & Associates), based in Sebastopol, California, helped pioneer online publishing, and in the early 1990s, it launched the first web portal, Global Network Navigator, which AOL acquired in 1995.

    Since then, O’Reilly has been an active participant in a host of developments from open source to Gov 2.0 to the maker movement. He is founding partner of San Francisco-based O’Reilly AlphaTech Ventures LLC, an early stage venture investor, and he sits on a number of boards, including Code for America Labs Inc., PeerJ, Civis Analytics Inc., and Popvox Inc. He has also garnered a huge Twitter following @timoreilly.

    In his new book, WTF?, O’Reilly takes issue with the vogue for disruption. “The point of a disruptive technology is not the market or competitors that it destroys. It is the new markets and the new possibilities that it creates,” he writes. “I spend a lot of time urging Silicon Valley entrepreneurs to forget about disruption, and instead to work on stuff that matters.” In the following excerpt, edited for space, O’Reilly shares “four litmus tests” for figuring out what that means to you.

    1. Work on something that matters to you more than money.

    Remember that financial success is not the only goal or the only measure of achievement. It’s easy to get caught up in the heady buzz of making money. You should regard money as fuel for what you really want to do, not as a goal in and of itself.

    Whatever you do, think about what you really value. If you’re an entrepreneur, the time you spend thinking about your values will help you build a better company. If you’re going to work for someone else, the time you spend understanding your values will help you find the right kind of company or institution to work for, and when you find it, to do a better job.

    Don’t be afraid to think big. Business author Jim Collins said that great companies have “big hairy audacious goals.” Google’s motto, “access to all the world’s information,” is an example of such a goal.

    There’s a wonderful poem by Rainer Maria Rilke that retells the biblical story of Jacob wrestling with an angel, being defeated, but coming away stronger from the fight. It ends with an exhortation that goes something like this: “What we fight with is so small, and when we win, it makes us small. What we want is to be defeated, decisively, by successively greater beings.”

    The most successful companies treat success as a by-product of achieving their real goal, which is always something bigger and more important than they are. Former Google executive Jeff Huber is chasing this kind of bold dream of using technology to make transformative advances in health care. Jeff’s wife died unexpectedly of an aggressive undetected cancer. After doing everything possible to save her and failing, he committed himself to making sure that no one else has that same experience. He has raised more than $100 million from investors in the quest to develop an early-detection blood test for cancer. That is the right way to use capital markets. Enriching investors, if it happens, will be a by-product of what he does, not his goal. He is harnessing all the power of money and technology to do something that today is impossible. The name of his company — Grail — is a conscious testament to the difficulty of the task. Jeff is wrestling with the angel.

    2. Create more value than you capture.

    It’s pretty easy to see that a financial fraud like Bernie Madoff wasn’t following this rule, and neither were the titans of Wall Street who ended up giving out billions of dollars in bonuses to themselves while wrecking the world economy. But most businesses that prosper do create value for their community and their customers as well as themselves, and the most successful businesses do so in part by creating a self-reinforcing value loop with and for others. They build or are part of a platform on which people who don’t work directly for them can build their own dreams.

    Investors as well as entrepreneurs must be focused on creating more value than they capture. A bank that loans money to a small business sees that business grow, perhaps borrow more money, hire employees who make deposits and take out loans, and so on. An investor who bets on the future of an unproven technology can do the same. The power of this cycle to lift people out of poverty has been demonstrated for centuries.

    If you’re succeeding at the goal of creating more value than you capture, you may sometimes find that others have made more of your ideas than you have yourself. It’s OK. I’ve had more than one billionaire (and an awful lot of start-ups who hope to follow in their footsteps) tell me how they got their start with a couple of O’Reilly books. I’ve had entrepreneurs tell me that they got the idea for their company from something I’ve said or written. That’s a good thing.

    Look around you: How many people do you employ in fulfilling jobs? How many customers use your products to make their own living? How many competitors have you enabled? How many people have you touched who gave you nothing back?

    3. Take the long view.

    The musician Brian Eno tells a story about the experience that led him to conceive of the ideas that led to the Long Now Foundation, a group that works to encourage long-term thinking. In 1978, Brian was invited to a rich acquaintance’s housewarming party, and as the neighborhood his cab drove through became dingier and dingier, he began to wonder if he was in the right place. “Finally [the driver] stopped at the doorway of a gloomy, unwelcoming industrial building,” he wrote. “Two winos were crumpled on the steps, oblivious. There was no other sign of life in the whole street.”
    But he was at the right address, and when he stepped out on the top floor, he discovered a multimillion-dollar palace.

    “I just didn’t understand,” he said. “Why would anyone spend so much money building a place like that in a neighborhood like this? Later I got into conversation with the hostess. ‘Do you like it here?’ I asked. ‘It’s the best place I’ve ever lived,’ she replied. ‘But I mean, you know, is it an interesting neighborhood?’ ‘Oh — the neighborhood? Well ... that’s outside!’ she laughed.”

    In the talk many years ago where I first heard him tell this story, Brian went on to describe the friend’s apartment, the space she controlled, as “the small here,” and the space outside, full of winos and derelicts, as “the big here.” He went on from there, along with others, to come up with the analogous concept of the Long Now. We need to think about the long now and the big here, or one day our society will enjoy neither.

    It’s very easy to make local optimizations, but they eventually catch up with you. Our economy has many elements of a Ponzi scheme. We borrow from other countries to finance our consumption, and we borrow from our children by saddling them with debt, using up nonrenewable resources, and failing to confront great challenges in income inequality, climate change, and global health.

    Every new company trying to invent the future has to think long-term. What happens to the suppliers whose profit margins are squeezed by Walmart or Amazon? Are the lower margins offset by higher sales or do the suppliers faced with lower margins eventually go out of business or lack the resources to come up with innovative new products? What happens to driver income when Uber or Lyft cuts prices for consumers in an attempt to displace competitors? Who will buy the products of companies that no longer pay workers to create them?

    It’s essential to get beyond the idea that the only goal of business is to make money for its shareholders. I’m a strong believer in the social value of business done right. We should aim to build an economy in which the important things are a natural outcome of the way we do business, paid for in self-sustaining ways rather than as charities to be funded out of the goodness of our hearts.
    Whether we work explicitly on causes and the public good, or work to improve our society by building a business, it’s important to think about the big picture, and what matters not just to us, but to building a sustainable economy in a sustainable world.

    4. Aspire to be better tomorrow than you are today.

    I’ve always loved the judgment of Kurt Vonnegut’s novel Mother Night: “We are what we pretend to be, so we must be careful about what we pretend to be.” This novel about the postwar trial of a Nazi propaganda minister who was secretly a double agent for the Allies should serve as a warning to those (politicians, pundits, and business leaders alike) who appeal to people’s worst instincts but console themselves with the thought that the manipulation is for a good cause.

    But I’ve always thought that the converse of Vonnegut’s admonition is also true: Pretending to be better than we are can be a way of setting the bar higher, not just for ourselves but for those around us.

    People have a deep hunger for idealism. The best entrepreneurs have the courage that comes from aspiration, and everyone around them responds to it. Idealism doesn’t mean following unrealistic dreams. It means appealing to what Abraham Lincoln so famously called “the better angels of our nature.”

    That has always been a key component of the American dream: We are living up to an ideal. The world has looked to us for leadership not just because of our material wealth and technological prowess, but because we have painted a picture of what we are striving to become.
    If we are to lead the world into a better future, we must first dream of it.

    View at the original source

    0 0






    With the boom in digital technologies, the world is producing over 2.5 exabytes of data every day. To put that into perspective, it is equivalent to the combined storage of 5 million laptops or 150 million phones. The deluge of data is forecast to keep growing by the day, and with it grows the need for powerful hardware that can support it.
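    As a quick sanity check on that comparison (the per-device figures below are back-calculated assumptions, not numbers from the article):

        # 2.5 exabytes per day, split across the article's device counts.
        daily_bytes = 2.5e18
        print(daily_bytes / 5e6 / 1e9)     # -> 500.0, i.e. ~500 GB per laptop
        print(daily_bytes / 150e6 / 1e9)   # -> ~16.7, i.e. ~17 GB per phone

    Both are plausible storage capacities for the devices of the day, so the comparison holds up.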

    This hardware advancement means faster computing (processing speed) and larger storage systems. Companies worldwide are investing in powerful computing, with R&D teams in a constant race to build improved processors. The current stream of data needs computers that can perform complex calculations within seconds.

    Big data and machine learning have pushed the limits of current IT infrastructure for processing large datasets effectively. This has led to the development of a new and exciting paradigm, quantum computing, which has the power to dramatically increase processing speed. But before that, let us understand the current technology and the need for quantum technology.

    Current Computing Technology and Its Limitations
    The technology of processing has come a long way in the past few decades with the development of fingernail-sized microprocessors (single-chip computers packed with millions of transistors) built as integrated circuits. Staying true to Moore’s law, the number of transistors packed into a single chip has doubled roughly every 18 months for the past 50 years. Today, it has reached 2 billion transistors on a single chip.
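    A back-of-the-envelope check of that cadence, assuming the Intel 4004 (about 2,300 transistors in 1971) as the baseline (an assumption of this sketch, not a figure from the article):

        import math

        # Assumed baseline: Intel 4004, ~2,300 transistors in 1971.
        t0, year0 = 2_300, 1971
        t1, year1 = 2_000_000_000, 2017   # "2 billion transistors" per the text

        doublings = math.log2(t1 / t0)
        print(f"{doublings:.1f} doublings, one every {(year1 - year0) / doublings:.1f} years")
        # -> 19.7 doublings, one every 2.3 years

    The arithmetic lands closer to the two-year doubling of Moore’s 1975 formulation than to 18 months, but the exponential trend itself is unmistakable.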

    Semiconductor technology is now making its smallest chips with 5-nanometer gates, below which, it is said, the transistor will not work. In response, the industry has simply started increasing the number of processor “cores” so that performance continues along Moore’s law predictions. However, many software-level constraints limit how long this can stay relevant.

    In 2016, two researchers at Lawrence Berkeley National Laboratory created the world’s smallest transistor, with a gate size of one nanometer. This is a phenomenal feat in the computing industry, but making a chip with billions of such transistors will face many challenges. The industry has already prepared for transistors to stop shrinking further, and Moore’s law is likely to grind to a halt.

    As the computations pertaining to current applications like big data processing or intelligent systems get more complex, there is a need for higher and faster computing capabilities than the current processors can supply. This is one of the reasons why people are looking forward to quantum computing.

    What is Quantum Computing
    Quantum computing merges two great scientific revolutions of the past century: computer science and quantum physics. It has all the elements of conventional computing, like bits, registers, and gates, but at the machinery level it does not depend on Boolean logic. Quantum bits are called qubits. A conventional bit stores either 0 or 1, but a qubit can exist in a superposition of 0 and 1, holding both at once. Because it can store these values simultaneously, it can also process them simultaneously, working in parallel in a way that, for certain problems, makes a quantum computer dramatically faster than current computers.
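    A minimal numpy sketch of that idea: an n-qubit register is described by 2**n complex amplitudes at once, so tracking just 3 qubits already means tracking 8 numbers. (Measurement still yields a single outcome, so the practical speedup is subtler than literally "doing everything at once".)

        import numpy as np

        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
        zero = np.array([1, 0], dtype=complex)          # one qubit in state |0>

        plus = H @ zero                # equal superposition of 0 and 1
        state = np.ones(1, dtype=complex)
        for _ in range(3):             # build a 3-qubit register
            state = np.kron(state, plus)

        print(state.shape)             # (8,) -> 8 amplitudes evolve together
        print(np.abs(state) ** 2)      # each of the 8 outcomes has probability 1/8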

    The working of these computers is a little complex, and the entire field of quantum computing is still largely abstract and theoretical. The only thing we really need to know is that qubits are stored in atoms or other particles, such as ions, that exist in different states and can be switched between those states.

    Application in Big Data
    Progress in these fields critically relies on processing power. The computational requirements of big data analytics are currently placing considerable strain on computer systems. Since 2005, the focus has shifted to parallelism, using multiple cores instead of a single fast processor. However, many problems in big data cannot be solved simply by using more and more cores. Splitting the work among multiple processors helps, but its implementation is complex, and some problems must be solved sequentially, with each step depending on the one before.

    At the Large Hadron Collider (LHC) at CERN in Geneva, particles traveling at almost the speed of light collide about 600 million times per second inside a 27 km ring. Only about one collision in a million survives preselection; of those, roughly 1 out of 10,000 events is passed to a grid of processor cores, which in turn keeps about 1 out of 100, leaving data flowing at about 10 GB/s. In all, the LHC captures some 5 trillion bits of data every second and, even after discarding 99% of it, still analyses about 25 petabytes of data a year!

    Workloads like these hint at why quantum computing is so attractive, but current resources make its application to big data a thing of the future. If it were possible, quantum computing would be useful for specific tasks such as factoring the large numbers used in cryptography, weather forecasting, and searching through large unstructured datasets in a fraction of the time to identify patterns and anomalies. Indeed, developments in quantum computing could make today’s encryption obsolete in a jiffy.

    With such computing power, it would one day be possible to build datasets storing complete information, such as the genetics of every single human who has existed, and machine learning algorithms could find patterns in the characteristics of those humans while also protecting their identities. Clustering and classification of data would also become much faster tasks.

    Looking Forward
    The initial results and developments in quantum technologies are encouraging. In the last fifteen years, quantum computers have grown from 4 qubits to 128 qubits. Google’s 5-qubit computer has demonstrated certain basic calculations that, if scaled up, could perform the many complex calculations that will make the quantum computing dream come true one day. However, we are unlikely to see such computers for years or even decades.

    In the future, quantum computers will allow faster analysis and integration of our enormous data sets, which will improve and transform our machine learning and artificial intelligence capabilities.

    View at the original source

    0 0



    Image credit: Shyam's Imagination Library


    New project to use podcasts, video to illuminate bias, improve decision-making. 

    When it comes to some of the most important decisions we make — how much to bid for a house, the right person to hire, or how to plan for the future — there is strong scientific evidence that our brains play tricks on us.




    Luckily, Mahzarin Banaji has a solution: Understand how your mind works so that you can learn to outsmart it.

    The Richard Clarke Cabot Professor of Social Ethics and chair of the Department of Psychology is launching a new project — dubbed Outsmarting Human Minds — aimed at using short videos and podcasts to expose hidden biases and explore ways to combat them.

    “The behavioral sciences give us insights into what gets in the way of reaching our professional goals, of being true to our own deepest values,” Banaji said. “The science is not new, but its message is still one most people have difficulty grasping and understanding.”

    Banaji and research fellow Olivia Kang, with funding from PricewaterhouseCoopers (PwC) and a grant from Harvard’s Faculty of Arts and Sciences, developed Outsmarting Human Minds as a way to deliver up-to-date thinking about hidden biases in an engaging way.

    “Everyone wants to know what’s happening in their minds, and they want to know what they can do to make better decisions,” Kang said. “The science is out there; the challenge is getting it to the public in a way that captures their interest.”




    The impetus for the project came in part from Banaji’s perspective as a senior adviser on faculty development to Edgerley Family Dean of the Faculty of Arts and Sciences Michael D. Smith.

    Speaking of that role, Banaji said, “I try to expose what the mind sciences have taught us about how we make decisions. The hope is that the faculty will put this information to use … in decisions about how to imagine the future of their disciplines.”

    Banaji has taught decision-making to any number of organizations, including corporations, nonprofits, and the military. Questions about how to confront hidden biases are common.



    “I want to put the science in the hands of people — or rather, in the heads of people — and have them ask: How can I outsmart my own mind? How can I be the person I want to be?”

    She emphasized that watching a video or listening to a podcast isn’t enough to address hidden bias.
    “Learning brings awareness and understanding. It cannot itself put an end to the errors we make,” she said. “To achieve corrections that will matter to society, we must learn to behave differently.”

    Said Kang: “We want to deliver this information to people in a way that doesn’t make them feel that they’re a bad person if they have these biases. The fact is, we all do. This is about acknowledging that hidden biases are a product of how we’re wired and the culture we live in. And then agreeing that we want to do something about it — that we can use this knowledge to improve the decisions we make in life and at work.”

    View at the original source

    0 0



    Just as artificial intelligence is helping doctors make better diagnoses and deliver better care, it is also poised to bring valuable insights to corporate leaders — if they’ll let it. 




    Image Credit : Shyam's Imagination Library



    At first blush, the idea of artificial intelligence (AI) in the boardroom may seem far-fetched. After all, board decisions are exactly the opposite of what conventional wisdom says can be automated. Judgment, shrewdness, and acumen acquired over decades of hard-won experience are required for the kinds of complicated matters boards wrestle with. But AI is already filtering into use in some extremely nuanced, complicated, and important decision processes.


    Consider health care. Physicians, like executives and board members, spend years developing their expertise. They evaluate existing conditions and deploy treatments in response, while monitoring the well-being of those under their care.

    Today’s medical professionals are wisely allowing AI to augment their decision-making. Intelligent systems are enabling doctors to make better diagnoses and deliver more individualized treatments. These systems combine mapping of the human genome and vast amounts of clinical data with machine learning and data science. They assess individual profiles, analyze research, find patterns across patient populations, and prioritize courses of action. The early results of intelligent systems in health care are impressive, and they will grow even more so over time. In a recent study, physicians who incorporated machine-learning algorithms in their diagnoses of metastatic breast cancer reduced their error rates by 85%. Indeed, by understanding how AI is transforming health care, we can also imagine the future of how corporate directors and CEOs will use AI to inform their decisions.

    Complex Decisions Demand Intelligent Systems


    Part of what’s driving the use of AI in health care is the fact that the cost of bad decisions is high. That’s the same in business, too: Consider that 50% of the Fortune 500 companies are forecasted to fall off the list within a decade, and that failure rates are high for new product launches, mergers and acquisitions, and even attempts at digital transformation. Responsibility for these failures falls on the shoulders of executives and board members, who concede that they’re struggling: A 2015 McKinsey study found that only 16% of board directors said they fully understood how the dynamics of their industries were changing and how technological advancement would alter the trajectories of their company and industry. The truth is that business has become too complex and is moving too rapidly for boards and CEOs to make good decisions without intelligent systems.

    We believe that the solution to this complexity will be to incorporate AI in the practice of corporate governance and strategy. This is not about automating leadership and governance, but rather augmenting board intelligence using AI. Artificial intelligence for both strategic decision-making (capital allocation) and operating decision-making will come to be an essential competitive advantage, just like electricity was in the industrial revolution or enterprise resource planning software (ERP) was in the information age.

    For example, AI could be used to improve strategic decision-making by tracking capital allocation patterns and highlighting concerns — such as when the company is decreasing spending on research and development while most competitors are increasing investment — and reviewing and processing press releases to identify potential new competitors moving into key product markets and then suggesting investments to protect market share. AI could be used to improve operational decision-making by analyzing internal communication to assess employee morale and predicting churn, and by identifying subtle changes in customer preference or demographics that may have product or strategy implications.
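    To make the strategic example concrete, here is a deliberately simple rule-based sketch of the capital-allocation flag described above. The company names and spending figures are invented, and a real system would apply machine learning over far richer data; this only illustrates the shape of the signal.

        from statistics import median

        # Hypothetical year-over-year change in R&D spend (invented data).
        rnd_change = {"OurCo": -0.08, "PeerA": 0.05, "PeerB": 0.12, "PeerC": 0.03}

        def flag_rnd_divergence(company, changes, threshold=0.10):
            """Flag when the company cuts R&D while the peer median pulls
            ahead of it by more than `threshold`."""
            peers = [v for name, v in changes.items() if name != company]
            gap = median(peers) - changes[company]
            if changes[company] < 0 and gap > threshold:
                return (f"FLAG: {company} R&D {changes[company]:+.0%} "
                        f"vs peer median {median(peers):+.0%}")
            return "no flag"

        print(flag_rnd_divergence("OurCo", rnd_change))
        # -> FLAG: OurCo R&D -8% vs peer median +5%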

    The Medical Model: Advances That Have Enabled AI in Health Care


    What will it take for boards to get on board with AI supplements? If we go back to the health care analogy, there have been three technological advances that have been essential for the application of AI in the medical field:
    • The first advance is an enormous body of data. From the mapping of the human genome to the accumulation and organization of databases of clinical research and diagnoses, the medical world is now awash in vast, valuable new sources of information. 
    • The second advance is the ability to quantify an individual. Improvements in mobile technology, sensors, and connectivity now generate extraordinarily detailed insights into an individual’s health.
    • The third advance is the technology itself. Today’s AI techniques can assimilate massive amounts of data and discern relevant patterns and insights — allowing the application of the world of health care data to an individual’s particular health care situation. These techniques include advanced analytics, machine learning, and natural language processing.
    As a result of the deployment of intelligent systems in health care, doctors can now map a patient’s data, including what they eat, how much they exercise, and what’s in their genetics; cross-reference that material against a large body of research to make a diagnosis; access the latest research on pharmaceuticals and other treatments; consult machine-learning algorithms that assess alternative courses of action; and create treatment recommendations personalized to the patient.

    Three Steps Companies Can Take to Bring AI Into the Boardroom


    A similar course will be required to achieve the same results in business. Although not a direct parallel to health care, companies have their own components — people, assets, history — which could be called the corporate genome. In order to effectively build an AI system to improve corporate decision-making, organizations will need to develop a usable genome model by taking three steps:

    • Create a body of data by mapping the corporate genome of many companies and combining this data with their economic outcomes;
    • Develop a method for quantifying an individual company in order to assess its competitiveness and trajectory through comparison with the larger database; and
    • Use AI to recommend a course of action to improve the organization’s performance, such as changes to capital allocation, as sketched below.
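    A toy illustration of the second and third steps, assuming a tiny invented database: each company is reduced to a feature vector (its "genome"), quantified against the database by cosine similarity, and given its closest peer's outcome as a reference point. Every feature and number here is made up for demonstration.

        import numpy as np

        # Invented features: [R&D intensity, debt ratio, revenue growth, churn].
        genomes = np.array([
            [0.15, 0.30, 0.10, 0.08],
            [0.02, 0.55, -0.03, 0.20],
            [0.12, 0.25, 0.08, 0.10],
        ])
        outcomes = np.array([1.8, 0.6, 1.5])   # hypothetical 5-year value multiples

        def nearest_peer_outcome(company, genomes, outcomes):
            """Quantify a company against the database via cosine similarity
            and return its most similar peer's outcome."""
            sims = genomes @ company / (
                np.linalg.norm(genomes, axis=1) * np.linalg.norm(company))
            return outcomes[np.argmax(sims)], np.round(sims, 3)

        ours = np.array([0.14, 0.28, 0.09, 0.09])
        print(nearest_peer_outcome(ours, genomes, outcomes))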

    Just as physicians use patient data to create individualized medical solutions, emerging intelligent systems will help boards and CEOs know more precisely what strategy and investments will provide exponential growth and value in an increasingly competitive marketplace. Boards and executives with the right competencies and mental models will have a real leg up in figuring out how to best utilize this new information. While technology is growing exponentially, leaders and boards are only changing incrementally, leaving many legacy organizations further and further behind.

    It’s time for leaders to courageously admit that, despite all their years of experience, AI belongs in the boardroom.




    View at the original source

    0 0
    How We’re Smart

    We’re all intelligent in multiple and varying ways, and we can grow those intelligences, too.




    People have a wide range of capacities. What if, instead of asking, “How smart am I?” we encouraged kids to ask, “How am I smart?”

    Here, we provide an overview of research on intelligence — along with ways that educators can bring these ideas into their own classrooms.

    Intelligence is Multiple

    People have a wide range of capacities, and there are many ways to be smart. In his foundational work on multiple intelligences theory, educational psychologist and Project Zero pioneer Howard Gardner has identified eight distinct intelligences:
    • Verbal
    • Logical/mathematical
    • Bodily-kinesthetic
    • Musical
    • Spatial
    • Interpersonal
    • Intrapersonal
    • Naturalistic
    Everyone possesses all of these intelligences, but we also each have unique strengths and weaknesses. Some people have strong verbal and musical intelligence but weak interpersonal intelligence; others may be adept at spatial recognition and math but have difficulty with bodily-kinesthetic intelligence. And everyone is different; strength in one area does not predict strength in any other.

    These intelligences can also work together. Different tasks and roles usually require more than one type of intelligence, even if one is more clearly highlighted.
    Furthermore, we can exhibit our intelligences through our ideas, creations, and performances — but test scores do not necessarily measure any sort of intelligence.

    For educators, the lesson here is that students learn differently and express their strengths differently. “If we all had exactly the same kind of mind and there was only one kind of intelligence, then we could teach everybody the same thing in the same way and assess them in the same way and that would be fair,” Gardner has said. “But once we realize that people have very different kinds of minds, different kinds of strengths … then education, which treats everybody the same way, is actually the most unfair education.”

    Intelligence is Learnable

    These multiple intelligences are not fixed or innate. They’re partially the result of our neural system and biology, but they also develop through our experiences and through our ability to persist, imagine, and reflect. 

    Learning expert Shari Tishman and her Project Zero colleagues have highlighted seven key critical thinking mindsets that can set us up to effectively learn and think in today’s world:
    • Being broad and adventurous
    • Wondering, problem finding, and investigating
    • Building explanations and understandings
    • Making plans and being strategic
    • Being intellectually careful
    • Seeking and evaluating reasons
    • Being metacognitive
    By embracing these mindsets, we can actually shape and cultivate our intelligences. For example, being open-minded and careful in our thinking, as opposed to being closed-minded and careless, can help us flex and grow our intelligences.


    View at the original source




    0 0






    Exactly how asthma begins and progresses remains a mystery, but a team of Harvard Medical School researchers has uncovered a fundamental molecular cue that the nervous system uses to communicate with the immune system, a cue that may trigger allergic lung inflammation leading to asthma.
    Their insights into this neuro-immune cross talk are published Sept. 13 in Nature.

    “Our findings help us understand how the nervous system is communicating with the immune system, and the consequences of it,” said co-senior author Vijay Kuchroo, the HMS Samuel L. Wasserstrom Professor of Neurology and senior scientist at Brigham and Women’s. The team included researchers at Harvard Medical School, Brigham and Women’s Hospital, and the Broad Institute of MIT and Harvard.

    Kuchroo is also an associate member of the Broad and the founding director of the Evergrande Center for Immunologic Diseases of HMS and Brigham and Women’s.

    “What we’re seeing is that neurons in the lungs become activated and produce molecules that convert immune cells from being protective to being inflammatory, promoting allergic reactions,” he said.

    The research team—led by Patrick Burkett, HMS instructor in medicine and a pulmonologist and researcher at Brigham and Women’s; Antonia Wallrapp, an HMS visiting graduate student in neurology at the Evergrande Center; Samantha Riesenfeld, HMS research fellow in neurology in the Klarman Cell Observatory (KCO) at the Broad; Monika Kowalczyk of the KCO; Aviv Regev, Broad core institute member and KCO director; and Kuchroo—closely examined lung-resident innate lymphoid cells (ILCs), a type of immune cell that can play a role in maintaining a stable environment and barrier in the lungs but can also promote the development of allergic inflammation.

    Single-cell RNA sequencing

    Using a technique known as single-cell RNA sequencing, the team explored more than 65,000 individual cells that exist under normal or inflammatory conditions, looking for genes that were more active in one state or subpopulation versus another.
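    In spirit, the comparison works like the toy sketch below: average each gene's expression within a condition and rank genes by the difference. The counts are simulated and the gene names (other than Nmur1, which the study highlights later) are placeholders; real single-cell analyses also normalize the data and apply statistical tests.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy count matrix: rows are cells, columns are genes.
        normal   = rng.poisson(lam=[2, 5, 1], size=(500, 3))
        inflamed = rng.poisson(lam=[2, 5, 8], size=(500, 3))   # third gene induced

        log2_fc = np.log2(inflamed.mean(axis=0) + 1) - np.log2(normal.mean(axis=0) + 1)
        for gene, lfc in zip(["GeneA", "GeneB", "Nmur1"], log2_fc):
            print(f"{gene}: log2 fold change {lfc:+.2f}")
        # Only the third gene shows a large change between conditions.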

    “By surveying thousands of individual cells, we were able to define the transcriptional landscape of lung-resident ILCs, observing changes in discrete subpopulations,” said Kowalczyk.

    “To really understand the puzzle that is allergy and asthma, we need to closely examine each of the pieces individually and understand how they fit together into an ecosystem of cells,” said Regev. “That’s what single-cell analysis lets you do. And when you look this closely, you find that pieces that you thought were quite similar are subtly but profoundly different. Then you start to see where each piece really goes.”

    Among many distinguishing genes they found, one in particular stood out: Nmur1, a receptor for the neuropeptide NMU.

    In laboratory and animal model experiments, the team confirmed that NMU signaling can significantly amplify allergic inflammation when high levels of alarmins—molecules known to trigger immune responses—are present.

    The team also observed that ILCs co-located with nerve fibers in the lung. Neurons in the lung can induce smooth muscle contractions that manifest themselves as coughing and wheezing, two central symptoms of asthma.

    Coughing and inflammation

    “Coughing is something regulated and controlled by the nervous system so it’s intriguing that our findings point to a role for NMU, which can induce both smooth muscle contraction and inflammation,” said Burkett.

    Interestingly, two additional Nature papers released simultaneously with the Regev and Kuchroo team’s study revealed that ILC2 cells in the gut also express Nmur1, take on an inflammatory state when exposed to NMU and live in close proximity to NMU-producing nerve cells.

    “We anticipate that the NMU-NMUR1 pathway will also play a critical role in amplifying allergic reactions in the gut and promote development of food allergies,” said Kuchroo.

    In addition to uncovering a novel neuro-immune pathway that leads to inflammation, the team also hopes their findings will lead to new therapeutic insights for how to potentially prevent or treat allergic asthma.

    “We may have identified a way of blocking allergic lung inflammation by controlling neuropeptide receptors,” said Riesenfeld. “This work represents a mechanistic insight that could lead to the development of a new therapeutic approach for preventing asthma.”

    “All forms of allergy and inflammation involve complex interactions between many cells and tissues,” Regev added. “Working collaboratively to identify and catalog all these various players and listening to what they say to each other can teach us surprising things about how allergies work and show us new opportunities to intervene.”


    Support for this study was provided by the Food Allergy Science Initiative; the Klarman Family Foundation; the National Institute of Allergy and Infectious Diseases; the National Heart, Lung, and Blood Institute; the Howard Hughes Medical Institute; and other sources. 


    View at the original source

    0 0

    Technique has potential to help reverse the most common type of disease-associated mutations.

    Harvard and Broad Institute researchers have developed a DNA base editor that transforms A•T base pairs into G•C base pairs, and could one day be used to treat many common genetic diseases.

    Scientists at Harvard University and the Broad Institute of MIT and Harvard have developed a new class of DNA base editor that can alter genomic structure to help repair the type of mutations that account for half of human disease-associated point mutations. These mutations are associated with disorders ranging from genetic blindness to sickle-cell anemia to metabolic disorders to cystic fibrosis.

    A team of researchers led by David Liu, professor of chemistry and chemical biology at Harvard University and a core institute member of the Broad, developed an adenine base editor (ABE) capable of rearranging the atoms in a target adenine (A), one of the four bases that make up DNA, to resemble guanine (G) instead, and then tricking cells into fixing the other DNA strand to make the change permanent. The result is that what had been an A•T base pair is changed to a G•C base pair. The new system is described in a paper published online in the journal Nature.




    In addition to Liu, the study was led by Nicole Gaudelli, a postdoctoral fellow in Liu’s lab; Alexis Komor, a former postdoctoral fellow in Liu’s lab who is now an assistant professor at the University of California, San Diego; graduate student Holly Rees; and former postdoctoral fellows Ahmed H. Badran and David I. Bryson.

    The new system transforms A•T base pairs into G•C base pairs at a target position in the genome of living cells with surprising efficiency, the researchers said, often exceeding 50 percent, with virtually no detectable byproducts such as random insertions, deletions, translocations, or other base-to-base conversions. The adenine base editor can be programmed by researchers to target a specific base pair in a genome using a guide RNA and a modified form of CRISPR-Cas9 that no longer cuts double-stranded DNA.
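    As a cartoon of that programmability (a toy string-manipulation model, not the authors' software, and ignoring the real biochemistry of deamination and strand nicking): the guide sequence locates the target site, and the As within an assumed editing window are rewritten as Gs.

        def adenine_base_edit(dna, guide, window=(4, 8)):
            """Toy ABE: find the guide-matched site in `dna` and convert
            A -> G at positions `window` (1-based) of that site."""
            site = dna.find(guide)
            if site == -1:
                return dna                      # no match, no edit
            lo, hi = site + window[0] - 1, site + window[1]
            return dna[:lo] + dna[lo:hi].replace("A", "G") + dna[hi:]

        dna = "TTTGCTGAACACCGTAGGCATCAAGGG"
        print(adenine_base_edit(dna, "CTGAACACCGTAGGCATCAA"))
        # -> TTTGCTGGGCGCCGTAGGCATCAAGGG (the As at positions 4-8 become Gs)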

    Being able to make this type of conversion is particularly important because approximately half of the 32,000 disease-associated point mutations already identified by researchers are a change from a G•C base pair to an A•T base pair.

    Liu said that particular change is unusually common in part because about 300 times a day in every human cell, a spontaneous chemical reaction converts a cytosine (C) base into uracil (U), which behaves like thymine (T). While there are natural cellular repair mechanisms to fix that spontaneous change, the machinery is not perfect and occasionally fails to make the repair. The result can be the mutation of the G•C base pair to an A•U or A•T base pair, which can lead to certain genetic diseases.

    “Because of this slight chemical instability of the Cs in our genome, about 50 percent of pathogenic point mutations in humans are of the type G•C to A•T,” Liu said. “What we’ve developed is a base editor, a molecular machine, that in a programmable, irreversible, efficient, and extremely clean way can correct these mutations in the genome of living cells. For some target sites, that conversion reverses the mutation that is associated with a particular disease.”

    A major addition to genome-editing technologies, the adenine base editor joins other base-editing systems recently developed in Liu’s lab, such as BE3 and its improved variant, BE4. Using these base editors, researchers can now correct all the so-called “transition” mutations — C to T, T to C, A to G, or G to A — that together account for almost two-thirds of all disease-causing point mutations, including many that cause serious illnesses that currently have no treatment. Additional research is needed to enable the adenine base editor to target as much of the genome as possible, as Liu and his students previously did through engineering variants of BE3.

    At first glance, Liu said, it might appear as though developing the adenine base editor would be a straightforward process: Simply replace the enzyme in BE3 that performs the “chemical surgery” to transform C into U with one that could convert A into I (inosine), a nucleotide that behaves similarly to G. Unfortunately, he said, there is no such enzyme that works in DNA, so Liu and colleagues made the unusual choice to evolve their own DNA adenine deaminase, a hypothetical enzyme that would convert A to I in DNA.

    “This wasn’t a small decision, because we’ve had a longstanding rule in the lab that if step one of your project is to evolve the starting material that’s needed for the rest of the project to begin, that’s not a very good project, because it’s really two major projects,” Liu said. “And if you have to spend years just to get the starting material for the rest of your project, that’s a tough road.

    “In this case, we felt the potential impact was significant enough to break the rule, and I’m very fortunate that Nicole [Gaudelli] was brave enough to take on the challenge.”

    The stakes were particularly high for Gaudelli, Liu said, “because if we weren’t able to complete step one and evolve a DNA adenine deaminase, then step two wouldn’t go anywhere, and we would have little to show for all the work.”

    “Protein evolution is still largely an art as much as it is a science,” Liu said. “But Nicole has amazing instincts about how to interpret the results from each stage of protein evolution, and after seven generations of evolution, she succeeded in evolving a high-performance A base editor, which we call ABE7.10.”




    The road that led to the adenine base editor required more than just evolving the starting material. After a year of work and several initial attempts that resulted in no detectable DNA editing of A•T base pairs, the team began to see the first glimmers of success, Liu said. Following three rounds of evolution and engineering, the adenine base editors were working deceptively well, until the team discovered that the system would only work on certain DNA sequences.

    “At that point we could have pulled the trigger and reported a base editor that works well only at certain sites, but we thought the sequence requirements would really limit its usefulness and discourage others from moving the project forward, so we went back to the well of evolution. We changed the selections to force a base editor that would process all sites, regardless of their sequence,” Liu said. “That was a tough call, because at that point we had been working well over a year on the project, and it was very exciting that we were seeing any base editing on A•T base pairs in DNA at all.”

    The team restarted its efforts with several additional rounds of evolution and engineering, now testing their adenine base editors against 17 genetic sequences that included all possible combinations of DNA bases surrounding the target A, Liu said. The final ABE7.10 variant edited sites with an average efficiency of 53 percent, and produced virtually no unwanted products.

    To demonstrate the adenine base editor’s potential, Liu and colleagues used ABE7.10 to correct a mutation that causes hereditary hemochromatosis in human cells. They also used ABE7.10 to install a disease-suppressing mutation in human cells, recreating the so-called “British mutation” found in healthy individuals who would otherwise be at risk of blood diseases like sickle cell anemia. The mutation causes fetal hemoglobin genes to remain active after birth, protecting carriers from those blood diseases.

    While the adenine base editor is an exciting advance in base editing, more work remains before base editing can be used to treat patients with genetic diseases, including tests of safety, efficacy, and side effects.

    “Creating a machine that makes the genetic change you need to treat a disease is an important step forward, but it’s only one part of what’s needed to treat a patient,” Liu said. “We still have to deliver that machine, we have to test its safety, we have to assess its beneficial effects in animals and patients and weigh them against any side effects. We need to do many more things. But having the machine is a good start.”


    View at the original source


    0 0





    Scientists at Wesleyan University have used electroencephalography to uncover differences in how the brains of Classical and Jazz musicians react to an unexpected chord progression.

    Their new study, published in the journal Brain and Cognition, sheds new light on the nature of the creative process.

    “I have been a classical musician for many years, and have always been inspired by the great jazz masters who can improvise beautiful performances on the spot,” explained study author Psyche Loui. “Whenever I tried to improvise I always felt inhibited and self-conscious, and this spurred my questions about jazz improvisation as a model for creativity more generally: What makes people creative improvisers, and what can this tell us about how we can all learn to be more creative?”

    The researchers used EEG to compare the electrical brain activity of 12 Jazz musicians (with improvisation training), 12 Classical musicians (without improvisation training), and 12 non-musicians while they listened to a series of chord progressions. Some of the chords followed a progression that was typical of Western music, while others had an unexpected progression.

    Loui and her colleagues found that Jazz musicians had a significantly different electrophysiological response to the unexpected progressions, which indicated an increased perceptual sensitivity to unexpected stimuli along with an increased engagement with unexpected events.

    “Creativity is about how our brains treat the unexpected,” Loui told PsyPost. “Everyone (regardless of how creative) knows when they encounter something unexpected. But people who are more creative are more perceptually sensitive and more cognitively engaged with unexpectedness. They also more readily accept this unexpectedness as being part of the vocabulary.

    “This three-stage process (sensitivity, engagement, and acceptance) occurs very rapidly, within a second of our brains encountering the unexpected event. With our design we can resolve these differences and relate them to creative behavior, and I think that’s very cool.”

    Previous research has found that Jazz improvisers and other creative individuals show higher levels of openness to experience and divergent thinking — meaning the ability to “think outside the box.”

    But without additional research it is unclear if the new findings apply to other creative individuals who are not musicians.

    “We looked at three groups of subjects: jazz musicians, classical musicians, and people with no musical training other than normal schooling, so the results are most closely tied to musical training. It remains to be seen whether other types of creative groups, e.g. slam poets, cartoonists, interpretive dancers, etc. might show the same results,” Loui explained.

    “It would also be important to find out whether these differences emerge as a result of training, or whether they reflect pre-existing differences between people who choose to pursue training in different styles. We are currently conducting a longitudinal study to get at that question.”

    “This is the first paper of a string of research coming from our lab that use different methodologies to understand jazz improvisation,” Loui added. “We are also doing structural and functional MRI, as well as more behavioral testing, including psychophysical listening tests and also production tests, where we have people play music in our lab.”

    The study, “Jazz musicians reveal role of expectancy in human creativity,” was co-authored by Emily Przysinda, Tima Zeng, Kellyn Maves, and Cameron Arkin.


    View at the original source 








    Probiotic bacteria in yogurt influence the balance of gut microbiota, which is associated with behavioral changes. This effect can be explained by the existence of a gut-brain axis.

    Yogurt consumption increases the ingestion of probiotic bacteria, in particular Lactobacilli and Bifidobacteria, and may therefore affect the diversity and balance of human gut microbiota. Previous research found that changes in gut microbiota modulate the peripheral and central nervous systems, resulting in altered brain functioning, and may have an impact on emotional behavior, such as stress and anxiety.

     Gut-brain axis

    The described effect suggests the existence of a gut-brain axis. Because of the bidirectional communication between the nervous system and the immune system, the effects of yogurt bacteria on the nervous system cannot be separated from effects on the immune system.

    Researchers suggest that the communication between gut microbiota and the brain can be influenced by the intake of probiotics, which may reduce levels of anxiety and depression and affect the brain activity that controls emotions and sensations. Patients with autism often suffer from gastrointestinal abnormalities, and viral infections during pregnancy appear to have long-term effects; some of these effects might be reversed through consumption of specific bacteria, also found in yogurt.

    As the composition of gut microbiota is different for each individual, changes in the balance and content of common gut microbes affect the production of the short-chain fatty acids butyrate, propionate, and acetate.

    These fermentation products improve host metabolism by stimulating glucose and energy homeostasis, regulating immune responses and epithelial cell growth, and also supporting the functioning of the central and peripheral nervous systems.

    View at the original source





    Metasurface generates new states of light for fundamental research and applications.

    There’s nothing new under the sun — except maybe light itself.

    Over the last decade, applied physicists have developed nanostructured materials that can produce completely new states of light exhibiting strange behavior, such as bending in a spiral, corkscrewing and dividing like a fork.

    These so-called structured beams not only tell scientists a lot about the physics of light; they also have a wide range of applications, from super-resolution imaging to molecular manipulation and communications.

    Now, researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences have developed a tool to generate new, more complex states of light in a completely different way.
    The research is published in Science.

    “We have developed a metasurface which is a new tool to study novel aspects of light,” said Federico Capasso, the Robert L. Wallace Professor of Applied Physics and Vinton Hayes Senior Research Fellow in Electrical Engineering at SEAS and senior author of the paper. “This optical component makes possible much more complex operations and allows researchers to not only explore new states of light but also new applications for structured light.”

    The Harvard Office of Technology Development has protected the intellectual property relating to this project and is exploring commercialization opportunities.



    The new metasurface connects two aspects of light, known as orbital angular momentum and circular polarization (or spin angular momentum). Polarization is the direction along which light vibrates; in circularly polarized light, that vibration traces a circle. Think of orbital angular momentum and circular polarization like the motion of a planet: circular polarization is the direction in which a planet rotates on its axis, while orbital angular momentum describes how the planet orbits the sun.
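
    The “corkscrewing” described above comes from a helical phase front: a beam carrying orbital angular momentum has a transverse phase of exp(ilφ), where the integer l (the topological charge) counts how many times the phase winds around the beam axis. The sketch below computes such a phase profile; it is a generic illustration, not the Harvard team's metasurface design.

```python
# Helical (spiral) phase profile of a beam carrying orbital angular momentum.
# Generic illustration; not the Harvard team's metasurface design.
import numpy as np

l = 2    # topological charge: units of orbital angular momentum per photon
n = 512  # grid resolution

x = np.linspace(-1, 1, n)
xx, yy = np.meshgrid(x, x)
phi = np.arctan2(yy, xx)  # azimuthal angle around the beam axis

# The defining feature of an OAM beam: a transverse phase exp(i*l*phi)
# that winds l times around the axis, giving the "corkscrew" wavefront.
field = np.exp(1j * l * phi)
phase = np.mod(l * phi, 2 * np.pi)  # wrapped phase mask a metasurface would imprint

print(f"phase spans [{phase.min():.2f}, {phase.max():.2f}) rad, winding {l} times")
```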

    View at the original source





    Harvard study shows how intermittent fasting and manipulating mitochondrial networks may increase lifespan.

    Manipulating mitochondrial networks inside cells — either by dietary restriction or by genetic manipulation that mimics it — may increase lifespan and promote health, according to new research from Harvard T.H. Chan School of Public Health.

    The study, published Oct. 26 online in Cell Metabolism, sheds light on the basic biology involved in cells’ declining ability to process energy over time, which leads to aging and age-related disease, and how interventions such as periods of fasting might promote healthy aging.

    Mitochondria — the energy-producing structures in cells — exist in networks that dynamically change shape according to energy demand. Their capacity to do so declines with age, but the impact this has on metabolism and cellular function was previously unclear. In this study, the researchers showed a causal link between dynamic changes in the shapes of mitochondrial networks and longevity.

    The scientists used C. elegans (nematode worms), which live just two weeks and thus enable the study of aging in real time in the lab. Mitochondrial networks inside cells typically toggle between fused and fragmented states. The researchers found that restricting the worms’ diet, or mimicking dietary restriction through genetic manipulation of an energy-sensing protein called AMP-activated protein kinase (AMPK), maintained the mitochondrial networks in a fused or “youthful” state. In addition, they found that these youthful networks increased lifespan by communicating with organelles called peroxisomes to modulate fat metabolism.

    “Low-energy conditions such as dietary restriction and intermittent fasting have previously been shown to promote healthy aging. Understanding why this is the case is a crucial step toward being able to harness the benefits therapeutically,” said Heather Weir, lead author of the study, who conducted the research while at Harvard Chan School and is now a research associate at Astex Pharmaceuticals. “Our findings open up new avenues in the search for therapeutic strategies that will reduce our likelihood of developing age-related diseases as we get older.”

    “Although previous work has shown how intermittent fasting can slow aging, we are only beginning to understand the underlying biology,” said William Mair, associate professor of genetics and complex diseases at Harvard Chan School and senior author of the study. “Our work shows how crucial the plasticity of mitochondria networks is for the benefits of fasting. If we lock mitochondria in one state, we completely block the effects of fasting or dietary restriction on longevity.”

    Next steps for the researchers include testing the role mitochondrial networks play in the effects of fasting in mammals, and whether defects in mitochondrial flexibility might explain the association between obesity and increased risk for age-related diseases.

    View at the original source






    Lately, in every article or newsletter I read about user experience design, the terms “diversity” and “inclusive design” flash before my eyes. The UX design community has been buzzing about diversity and inclusion. Design professionals, in the corporate world and agencies, are racing to show how diversity makes their teams strong and unique, and how they design not only for users of all ages, but also for users of all genders, races, levels of impairment and disability, cultures and ethnicities. Diversity and inclusive design are no doubt at the forefront, as we come together as a global economy.

    I recently had the opportunity to be a speaker at an event hosted by Designers & Geeks in San Francisco. The theme was diversity and design — what roles do diversity and inclusivity play in design today, and why is that important?

    This got me thinking about what these terms really mean to me. While I have been in the UX design industry for many years now, I started recalling experiences outside of my career. About 20 years ago I migrated to the United States, still in the early stages of my adult life. Anyone who has tried to make a new home in a country with a totally different culture and ideology can tell you it’s one of the most overwhelming things you could experience. Many of the problems that users encounter with a new environment are caused by design teams with biases and assumptions about how things should work. When users encounter such design biases, they are often forced to unlearn their prior mental models and learn a new approach — essentially having to adapt their thinking and behavior in order to use the product. Clearly this is not user-centered or universal design, and we should do all that we can to reduce this gulf of evaluation for our users.

    I appreciated this speaking opportunity as a chance to highlight all the efforts towards diversity and inclusivity that I see around me at IBM Design. More so, I appreciated the chance to reflect upon what these concepts really mean to me. Twenty years ago, as a newcomer to the U.S., I experienced firsthand what it feels like to be on the outside looking in. These kinds of experiences help us see what it means to truly be inclusive, and how the presence or absence of inclusivity has a huge impact on people and outcomes.

    Thoughts of diversity and inclusion were relevant and important to me long before these words were used in advertising campaigns. Today, I’m glad to be in an industry that has come to value them, and is working to make these ideas a larger part of our everyday lives. When it came time for me to prepare for my talk, I thought about how diversity plays a role in my job today. As design practitioners, we must pursue a diversity of approach in all that we do: from how we make things (that is, our design approach), to who we make them with (diverse teams), to who we make them for (our users). I didn’t need to look far as IBM Design Thinking features these two core principles:
    • Focus on user outcomes
    • Diverse empowered teams
    You may have heard these principles repeated so often that they sometimes almost lose their meaning. However, these terms are truly embedded in our design culture. As designers, we don’t miss any opportunity to use our superpower: empathy. Whether it’s the IBM Accessibility practice or the sponsor user program, our goals revolve around inclusivity and empathy. We constantly remind ourselves that we are not our users. As a global company, we recruit people and work with clients from different cultures, with different perspectives. Diversity and inclusiveness are close to the heart of IBM Design.



                                                    Source: IBM Design
    I’ve seen these principles applied to everything we do at IBM Design, over and over again. With incoming design hires, we offer an in-depth, immersive design thinking bootcamp, where we not only educate new designers on the methods, tools and guidelines, but we also introduce and foster empathy, experientially via empathy map techniques, storyboards, journey maps, etc. From the very beginning, we teach all our design recruits to put the user at the very center of whatever business challenge they are working on.

    When I was visiting one of our bootcamps some time ago, I got to sit in on a session where the organizers brought in a visually impaired person as part of a user research study. The participants heard a first-hand account from the speaker of what it was like to navigate interfaces with a visual disability. I also got to see early-career designers carry out low-vision simulations and truly get a feel for what it’s like to lack visual acuity. Using filters that simulated various visual disabilities, designers were able to quickly test designs for contrast, type scale and visual clarity. These are just a few of the many examples of how I have seen inclusivity deeply integrated into our design practice. I was struck by the dedication of our IBM Design team to constantly put the user at the core of our work, and to bring that into our design education.
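
    One concrete way designs can be checked for contrast programmatically is the WCAG contrast-ratio formula, which compares the relative luminance of foreground and background colors. The sketch below implements that standard formula; whether our bootcamps used this exact check is an assumption here, as the article describes the simulations only generally.

```python
# WCAG 2.x contrast-ratio check (a standard accessibility formula; the exact
# tooling used in the bootcamp is an assumption, not stated above).
def _linearize(c: float) -> float:
    """Convert an sRGB channel in [0, 1] to linear light."""
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple) -> float:
    r, g, b = (_linearize(c / 255.0) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple, bg: tuple) -> float:
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Example: dark gray text on white passes the 4.5:1 threshold for normal text.
ratio = contrast_ratio((68, 68, 68), (255, 255, 255))
print(f"{ratio:.2f}:1 -> {'pass' if ratio >= 4.5 else 'fail'}")
```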

    I believe that we have a moral responsibility to embrace diversity in all that we do. It is also essential to the success of our teams. When building teams we have to realize that we aren’t just assigning resources — we are framing our approach to the problem. Each team member brings a unique point of view and expertise to the team, widening the range of possible outcomes. If you want to generate a breakthrough idea, intentionally form diverse teams by design.

    Diverse teams approach the same problem from many perspectives. They tend to generate more ideas, making them more effective problem solvers. While it takes effort to align different perspectives, it’s at the intersection of our differences that our most meaningful innovations originate. Diverse teams that believe in and practice inclusive principles will have the deepest impact in building products and experiences designed for everyone.

    We need to consider the full spectrum of diversity and inclusion: visible differences (gender, race, language, etc.), non-visible differences (e.g., LGBT status) and diversity of mindset (different thoughts, perspectives, experiences). Diversity and inclusivity are not just buzzwords. These words are burgeoning with potential, and they have the power to move our society towards something better. A case in point is the “Inclusion drives innovation” theme of this year’s U.S. National Disability Employment Awareness Month (October). As I look around at the work we do at IBM, the design community, the design approach and ethos, I am proud to say that I am part of a design culture that truly appreciates the meaning of innovation through diversity and inclusiveness.







    Innovation is a leading priority for CEOs: more than 70% list it as one of their top three areas of focus. Yet only 16% of companies we’ve surveyed believe that they’re better innovators than their peers.

    What’s holding them back? In our experience, innovators typically fall short for one of two reasons. Either they pursue the wrong innovation model for their business and competitive context, or they don’t support a good model with the capabilities it requires.

    BCG recently studied more than 100 of the world’s most innovative companies—industry leaders in total shareholder return (TSR) and fixtures in BCG’s annual innovation report. (See, for example, The Most Innovative Companies 2016: Getting Past “Not Invented Here,” BCG report, January 2017.) Our goal was to determine which types of innovation models the leaders use, which models are most successful in which industries, and which underlying capabilities are necessary to deliver on each model.
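
    (For readers unfamiliar with the metric: TSR combines share-price appreciation and dividends over a holding period, and the figures cited below appear to be annualized. As a rough illustration with made-up numbers, not BCG's exact methodology, an annualized TSR can be computed as follows.)

```python
# Back-of-the-envelope annualized TSR (total shareholder return) calculation.
# Illustrative numbers only; this is not BCG's methodology or data.
def annualized_tsr(start_price: float, end_price: float,
                   dividends: float, years: float) -> float:
    """Annualized TSR, assuming dividends are received at the end (a simplification)."""
    total_return = (end_price + dividends) / start_price
    return total_return ** (1 / years) - 1

# Example: a stock bought at 100 that ends at 180 after 5 years, paying 15 in dividends.
print(f"{annualized_tsr(100, 180, 15, 5):.1%}")  # ~14.3% per year
```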

    Six Innovation Models

    Our research revealed six distinct innovation models: creator, solution builder, leverager, expander, defender, and fast follower.

    Let’s take a quick look at these models and the types of companies that embody them: 
    • Creators fit the popular notion of highly innovative companies. Typically led by a strong, bold, visionary leader, they disrupt their core markets, protect their intellectual property, and make highly focused big bets that become the stuff of industry lore. Apple, which had a TSR of 21.2% from 2008 through 2017, is the classic example.
    • Solution builders look to the market for inspiration, drawing on observations and deep insight to address customer priorities and problems. Nike (16.5% TSR) typifies this model, combining customer insights with cutting-edge design and technology. For instance, shoe-based sensors link to web-based platforms offering highly personalized feedback that customers value. 
    • Leveragers create a superior business model and then capitalize on it to sustain their industry leadership. Zara (whose parent company had a TSR of 16.8%) is a Spanish retailer whose fast-cycle innovation and fashion-forward designs changed the industry. At the heart of Zara’s success are its breakthrough design, manufacturing, and distribution processes, which dramatically shorten the time it takes for new items to reach stores.
    • Expanders apply their core capabilities in new ways to take over adjacent markets and spur growth. Pharmaceutical innovator Gilead (14.4% TSR) continually enters new disease categories and markets in search of growth, achieving success through strong management, repeatable R&D and manufacturing processes, and a tolerance for risk that enables a long-term view. By acquiring Pharmasset in 2011, for instance, Gilead was able to develop two best-in-class treatments for hepatitis C and gain access to that promising market.
    • Defenders tend to win in mature or slow-changing industries and to innovate defensively in order to protect their advantage. As technology transforms more and more industries, adhering to this model becomes increasingly risky. The key to success is the ability to monitor the landscape for potentially disruptive innovations and to defend against them using tactics such as partnerships and acquisitions. When Allstate Insurance (6.4% TSR) used this approach, it was able to identify the shift to online and app-based products—and to acquire pioneer Esurance to keep from falling behind.
    • Fast followers optimize their capabilities across all dimensions in order to quickly respond to—and often improve upon—competitive innovations. Reckitt Benckiser Group (14.7% TSR) is a best-in-class fast follower in the consumer products industry, which is characterized by low consumer-switching costs and short product development cycles. To minimize risk and maximize speed, the company focuses technical capability and resource investment downstream, in product testing, with minimal energy spent up front, in consumer insight and ideation.

    Context Is Critical

    Choosing the right innovation model for your company is all about context. Industry context matters because only a subset of models can succeed in most industries. Some models are better suited to—and increase shareholder value in—certain industries and sectors than others. For example, four models drive TSR premiums in consumer retail:
    • Creators take on more risk but can achieve dramatic success. Lululemon Athletica (15.6% TSR), for example, capitalized on the growing yoga movement by offering a distinctive lifestyle brand that encompasses everything from the actual products to the in-store customer experience to corporate philanthropy.
    • Solution builders create loyalty by understanding specific shopper segments and meeting their needs. For instance, Target (8.1% TSR) delivers a “cheap but chic” set of offerings that meet the needs of its young, often trendy customers. 
    • Leveragers create a superior business model and then capitalize on it to sustain a position of industry leadership. Costco (13.4% TSR), for example, combines everyday low prices, a lean supplier network, and a members-only approach to stand out from the retail pack. 
    • Expanders achieve rapid share growth by moving into adjacent markets. For instance, Amazon (30.3% TSR) brings its consumer data analytics, logistics capabilities, and exceptional customer service to an ever-expanding number of retail sectors, including fashion, luxury apparel, and—with the company’s recent purchase of Whole Foods—brick-and-mortar grocery. 
    Companies struggle when they pursue an innovation model that their industry doesn’t reward. For instance, retailer Sears (–23.6% TSR) used the defender model, counting on its brand recognition and network of brick-and-mortar stores to stay ahead. But when agile online players upended the retail industry, Sears lost its edge.

    A company’s individual context is also critical when choosing the best innovation model: How important is innovation to the company’s strategy, its competitive position in the larger market, and the capabilities and advantages that set it apart? As the examples above show, companies in the same industry can succeed with different models—but the chosen model must align with a company’s strategy, strengths, and capabilities. For example, Amazon and Costco both have advantaged—but different—business models. The expander model is a better choice for Amazon because it reaches a much broader pool of consumers and drives more rapid top-line growth, both of which align closely with the company’s strategy and ambition.

    Answering a set of common questions can reveal your company’s context. Is innovation seen as a growth engine or a defensive tool in your overall corporate strategy? How strong is your company’s competitive position, and how durable is the source of your competitive advantage? How important is brand, and what is the relative strength of your brand equity? How robust are your innovation-related capabilities compared with others in your industry? How much are you willing and able to invest in innovation? And, most important, how quickly does your sector change—and what value can be gained if your organization stays ahead of the curve?


    When choosing a model, look for one that competitors either aren’t using at all or are using poorly. Consider the investment required in terms of dollars, time, and the cost of upgraded capabilities, and then filter the options through the lens of your ambition and resources.

    Making It Work

    Migrating to a new model, or better aligning your capabilities with an existing one, is the most challenging aspect of transforming a company’s innovation capability. The six innovation models are not abstract ideas. Each has a set of design principles and characteristics that govern the whole.
    It helps to have an innovation blueprint clearly laying out all the interconnecting pieces that must align with and support the model. These include the company’s organizational structure and culture; tools and processes for idea generation, commercialization, and portfolio management; and metrics and incentives to drive, track, and measure results. Such a blueprint can help companies commit to and reinforce their models through the design decisions that flow through their organizations. Consider the following: 
    • The fast-follower model adopted by Reckitt Benckiser has potential for success in the consumer products industry, but the company’s individual success is enabled by other factors as well, such as a flat organizational structure that maximizes speed to market. 
    • Under Armour is a solution builder. To build more targeted solutions, the company invests in advanced analytics to better understand what the data reveals about the behaviors and needs of its fitness community.
    • Amazon’s best-in-class expander model would not work without the company’s high tolerance for risk, which is reflected in its internal metrics and people incentives. 
    In our experience, the six innovation models offer a powerful way for organizations to evaluate and refine their innovation strategies. They also help executive teams grapple with critical questions, such as, Which model are we pursuing and why? Are our processes and organization aligned with that model? Does the model confer advantage in our industry? Which models are rivals pursuing—and how well are they doing? Should we reconsider our innovation strategy and model? What investments and capabilities would a shift in those areas require?

    Working through these questions will help companies choose the right model, develop the supporting engine to drive it forward, and reap the growth dividends that accrue from innovation success.

    View at the original source






    Humans are social creatures. We crave interaction and attention. We like to be treated as individuals and not as “a number”. For example, you feel better when you are recognised at your favorite restaurant, when you are addressed by name when flying on a plane, or when a hotel receptionist says “welcome back”. We all love to be treated as individuals. Delivering the personal touch is challenging; companies have used loyalty programs and CRM systems to focus their efforts on their “most valuable customers”, but this alienates everyone else. Rapid advances in technology have enabled some leading firms to deliver “mass personalisation experiences” and in the process delight their customers. So, in this age of personalisation, does your bank offer you a personal touch?

    Until reasonably recently, the only way to interact with banks was through their branch networks, and those branches had branch managers and other staff who took pride in knowing their customers. However, with the proliferation of channels and increasing time pressures on customers, the need to visit bank branches declined and banking interactions became less personal. As technology became more capable and pressures on costs increased, banks focused more on “standardisation” and the personal touch continued to disappear. As with many other industries, disruptors sensed an opportunity. Fintechs and many new-age banks realised that customers were looking for something more and were ready to choose them because of the unique experience they had to offer.

    But it is not just about the experience. It is not just about convenience. It is not just about cost. Customers today want to be treated as individuals; they want products and services that are tailored to their specific needs. Ideally, they want companies to anticipate their needs, but in a non-intrusive, non-creepy way. When it comes to financial services, they want products and services which cater to their specific requirements, for example offering pre-approved loans when their balances are low and there are upcoming payments, or tailoring terms, rates, pricing to meet their individual requirements.
         
    For banks and other financial institutions, aligning their products and services to match customer expectations is a tremendous challenge. To make it more challenging, they need to deliver personalized products, tailored recommendations, and individual communication - profitably. The good news is that technology can help. Insights derived from predictive analytics can help fine-tune the target customers and identify the key parameters for new products.
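
    As an illustration of what such predictive analytics might look like in practice, the sketch below scores customers by their propensity to take up a new loan product. The features, data, and model here are hypothetical; no bank's actual system is described.

```python
# Illustrative propensity-scoring sketch for targeting a loan offer.
# Features, data, and model choice are hypothetical; real systems are far richer.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical features per customer: [avg_balance, upcoming_payments, months_as_customer]
X = rng.normal(size=(1000, 3))
# Hypothetical historical outcomes: 1 if the customer accepted a past loan offer.
y = (X @ np.array([-0.8, 1.2, 0.3]) + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Score customers and target the highest-propensity decile with the offer.
scores = model.predict_proba(X)[:, 1]
target = np.argsort(scores)[-100:]
print(f"Top-decile mean propensity: {scores[target].mean():.2f}")
```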

    Technology can help banks create new loan products in minutes, so they have the flexibility to offer a unique loan product to each and every customer – if they choose. Backed by analytical insights, banks also know the most preferred channels to reach out to their customers. And when banks reach customers at the right time using the right channel, customers are much more likely to engage in interactive conversations. With mobile banking apps in their smartphones, customers are far more connected to their banks. So the personal touch is not just restricted to the first engagement during the initial “sale” of the product but extends throughout the loan lifecycle.

    For example, lending provides considerably more opportunities to interact with customers during the loan servicing period than in the short time when the original loan is being “sold”. During the life of the loan, a well-crafted personalized approach can translate into repeat business for the bank. With a higher conversion rate, personalization in lending can bring down the overall cost of customer acquisition. And let’s not forget that while cost is an important driver, it is not the only driver. Research has shown that customers are willing to pay more if they are offered a product/service that suits their needs and is wrapped in an experience that matches their expectations. Thus, personalization can also serve as an enabler for a unique positioning and premium pricing.

    This kind of focus on personalization may appear to be complex, but with the advances in technology, backed by artificial intelligence and machine learning, it can be handled quite easily. And as technology continues to evolve, it is easy to see a time when bots / digital assistants working on behalf of customers will be interacting - with growing levels of autonomy – with bots working on behalf of financial services providers. Robust processes, reliable systems and self-directing technologies that can handle the details at micro level while facilitating large volumes at high speed at macro level would be absolutely critical in these kinds of scenarios.

    Banks are founded on trust. In the past they had a strong personal connection with their customers, however over time that connection has eroded. As consumers’ attention spans shrink (to about 8 seconds now), and as options continue to explode, it is critical for banks to reconnect with their customers at a personal level. The good news is that banks know how to do it and the better news is that the technology now exists to enable them to do it profitably.

    View at the original source


    By Sadanand Dhume (Resident Fellow at the American Enterprise Institute in Washington, DC)




    Is India in danger of becoming a Hindu Pakistan?

    Though it is a bit preposterous even to imagine such a thought in India (at least as of now), in other countries such thoughts are germinating in the minds of people.

    In Washington this question, once too ludicrous to contemplate seriously, has lately acquired currency. For an Indian, it’s a query that can trigger a powerful emotional response. At one extreme stand those who greet it with bilious outrage. At the other are those for whom it evokes quivering concern.

    Let me start by stating the obvious: the odds of the officially secular republic of India ever fully mirroring the Islamic republic of Pakistan are vanishingly small.

    To begin with, look at demographics. About one-fifth of Hindu-majority India’s population consists of religious minorities; the Pew Research Center predicts that this will rise slightly to nearly one-fourth by 2050. By contrast, Pakistan is 96% Muslim. The only minority group of note is the beleaguered Shia community, estimated to number between 10% and 15% of the country’s 208 million people.

    Founding principles matter too. India was born as a secular republic in 1950. Indira Gandhi only wedged the word “secular” into the Constitution’s preamble in 1976, during Emergency, her infamous suspension of democracy. But right from the start India’s Constitution guaranteed equality before the law and freedom of worship, and prohibited any religious test for office.

    By contrast, as early as 1949 the Objectives Resolution passed by Pakistan’s Constituent Assembly declared that “Muslims shall be enabled to order their lives in the individual and collective spheres in accord with the teachings and requirements of Islam as set out in the Holy Quran and the Sunna.”
    In ‘Purifying the Land of the Pure’, a compelling history of Pakistan’s religious minorities, Farahnaz Ispahani argues that this was the first step towards the country’s further Islamisation over the decades. In Pakistan, by law only a Muslim can become president or prime minister.

    Nor do Indian secularists face the ideological challenge faced by their counterparts in Pakistan. The Sangh Parivar’s Hindu nationalism may look upon Muslims and Christians with suspicion, but it lacks both the global organisation and the overarching ambition of Islamism, the quest to order all aspects of the state and society according to the tenets of Islamic orthodoxy.

    Islamists can fall back on vast jurisprudence and relatively recent historical memory to make their case for a state governed by sharia law. Luckily for India, even the most rabid Hindu fanatic does not seek to reorder 21st century life by the ancient laws of Manu.

    All this is for the good, but suggesting that India’s record on minority rights will likely always be better than its western neighbour’s is not really saying very much. Once we get beyond the false question of equivalence, we’re left with an unpleasant truth. In some ways India has already begun to copy some of Pakistan’s worst aspects.

    Take, for instance, impunity for violence against members of a religious minority. A string of high profile lynchings of ordinary Indian Muslims by Hindu cow vigilantes has yet to lead to a single conviction. In some cases, as in the 2015 murder of Mohammad Akhlaq in Uttar Pradesh, powerful politicians have instead demanded an investigation of the victim’s family.

    Or consider the gradual ghettoisation of concerns about minority rights. Increasingly, India’s secularists appear almost as inconsequential as their Pakistani counterparts. They can draw attention to outrages, such as the roadside lynching of dairy farmer Pehlu Khan in Rajasthan this year. But their ability to sway public opinion has withered.

    Chief minister Vasundhara Raje may well receive a thrashing from Rajasthan voters next year. But it won’t be on account of her failing to protect the lives of Pehlu Khan or Ummar Khan, another alleged victim of cow vigilantes, or to swiftly bring their murderers to justice.

    In parts of India, cow vigilantism has come to resemble Pakistan’s notorious blasphemy law. Merely the accusation carries with it the implicit threat of mob violence. Earlier this month, Reuters reported on vigilante gangs in BJP-ruled states that seize cows from Muslims with impunity. Apparently, Prime Minister Narendra Modi’s calls to end cattle-related violence have not worked.
    Given what has come to pass already – with little effective pushback – it’s not hard to imagine things taking an even darker turn.

    Take the term Islamophobia, described by one wag as “a word created by fascists, and used by cowards, to manipulate morons.” A new generation of Hindu activists has begun to actively promote the related term Hinduphobia. While framed as a tool to fight discrimination, it will likely have the same malign impact as its Islamic equivalent – of shutting down critical inquiry and fostering a destructive culture of conspiracy theories and self-pity.

    From here it’s only a short hop, skip and jump to a Hindu version of takfirism, the dangerous Islamist innovation that allows radicals to declare fellow Muslims as apostates. I grew up in an India where a person who seldom visited a temple and was known to enjoy a fine steak was no less a Hindu than anyone else. It’s fair to wonder whether in the promised new India this will remain the case.
    In sum, it’s absurd to claim that India will turn into a Hindu Pakistan. But the readiness of some Hindu nationalists to pilfer the worst ideas from Islamism suggests that fears about India’s trajectory are not entirely misplaced.

    DISCLAIMER : Views expressed above are the author's own.






    Cybersecurity is a big concern for nearly every industry. But for the banking sector, that concern is paramount and the arms race to stay ahead of digital criminals requires innovative thinking. That’s why the London-based SWIFT Institute, set up by the Society for Worldwide Interbank Financial Telecommunications to enable cross-learning between academics and bankers,  issued a challenge to teams of Canadian university students to come up with new ideas.

    The winner, Team Pulse OS, devised a process that allows for reliable early detection by analyzing the unique power-use signatures on mobile devices. Team leader Nataliya Mykhaylova, who is pursuing a doctorate in chemical engineering at the University of Toronto, discussed her project with Knowledge@Wharton following her win at the October 2017 competition. Peter Ware, director of the SWIFT Institute also joined the conversation about cybersecurity.

    An edited version of the transcript follows.

    Knowledge@Wharton: What prompted the SWIFT Institute to devise this competition?


    Peter Ware: We launched the SWIFT Institute Student Challenge last year primarily to engage with students. Part of what the institute does is give research grants to academics. We’ve been dealing with academics for about five years now, so we wanted to go beyond that and try and tap into some young, upcoming, engaging minds.

    We linked this specific challenge to a conference that we held in Toronto called Sibos. We thought that we would focus primarily on students at Canadian universities. Before the challenge started, we went to the Canadian banking community and asked, “What is at the forefront of your minds? What is keeping you awake at night that we can try and help you solve?” Unsurprisingly, it was cyber. They helped to define the idea of trying to protect a bank’s channels to its customers from cyber attacks. That’s the challenge that we put to students.

    Knowledge@Wharton: Nataliya, why did you want to be a part of this competition?

    Nataliya Mykhaylova: I was excited to hear about this competition because cybersecurity is something that is really big on everybody’s mind. A lot of attacks right now go undetected. I have been researching this field from the hardware side. Doing my Ph.D. at the University of Toronto, I was testing different devices and got lots of ideas about how attacks could be prevented at the hardware level. I was really excited by this competition and thought I would submit my ideas.

    Knowledge@Wharton: Tell us more about your winning idea.

    Mykhaylova: You hear on the news about all of these companies that have an issue with cybersecurity. What I noticed when looking through those cases is that a lot of effort is being put into preventing attacks, which is understandable. But there is not nearly as much attention being paid to detecting those attacks early. In fact, only 30% of cyber attacks are detected in-house. This is a huge problem. There are lots of creative ways in which those attacks happen, and we need better systems to detect them at the edge or before they have a chance to spread.
    “There are lots of creative ways in which cyber attacks happen, and we need better systems to detect them at the edge or before they have a chance to spread” –Nataliya Mykhaylova
    When I was doing my Ph.D., I was assembling and testing different devices, different sensors. I discovered there is a pattern that you can detect and correct through artificial intelligence models. And you can actually detect changes in those patterns very early. For example, if the system is compromised even in the early stages, those performance signatures — like heat, CPU, other patterns — change very quickly. You are able to differentiate them from the normal operations of the system. Basically, attackers leave a series of breadcrumbs as they compromise the system, so you can detect an attack before it really has a chance to spread. This was an interesting discovery. This is what inspired the idea going forward.
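
    The approach Mykhaylova describes (learning a device's normal performance signature and flagging deviations, without any attack signatures) can be sketched with an off-the-shelf unsupervised detector. The code below is a generic illustration over simulated telemetry, not Team Pulse OS's actual system.

```python
# Generic unsupervised anomaly detection over device telemetry (illustrative only;
# not Team Pulse OS's implementation). CPU load and temperature are simulated
# stand-ins for real performance signatures.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Baseline telemetry: [cpu_load_%, temperature_C] under normal operation.
normal = np.column_stack([rng.normal(30, 5, 2000), rng.normal(45, 2, 2000)])
# A compromised period: malware quietly raises CPU load and heat.
compromised = np.column_stack([rng.normal(55, 5, 50), rng.normal(52, 2, 50)])

# Fit on normal behavior only; no attack signatures are needed.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

flags = detector.predict(compromised)   # -1 marks anomalous samples
print(f"Flagged {np.sum(flags == -1)} of {len(flags)} compromised samples")
```

    Fitting only on normal behavior is what makes such a detector signature-free: anything sufficiently unlike the learned baseline is flagged, even a previously unseen attack.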

    Knowledge@Wharton: Do you give consideration to the fact that so much banking is done on mobile devices?

    Mykhaylova: Yes. The interesting aspect of the system is that it can work across different types of devices. We are checking up on our accounts on our mobile devices all the time — our  laptops, our desktops. You have to have a system that works effectively throughout interfaces so we can detect things before they have a chance to spread through the banking channels. Part of this system is going down to the very low level, to the hardware level.

    With each new version of these devices, they have better and better ICs, the integrated circuits that go into those devices. A lot of them are now able to use features that allow us to run machine-learning models in real time to be able to detect changes in the operation of the systems.

    This is a very interesting area, and I feel that it has been unexplored. This is something that we have been doing, and we are realizing that there is a lot of opportunity to explore those parts of the system. It is also much harder for cyber attackers to fake: they cannot change the hardware patterns as easily as they could change the software that is running on the system to hide their traces.

    Again, this is something that can be deployed running across the devices, so this makes it very powerful to be able to run the script on your cellphone, on your tablet, on your laptops.

    Knowledge@Wharton: Peter, what is the significance of what she is describing?

    Ware: It’s something that is very useful, and quite advanced and different from what I think a lot of banks have been looking at. A lot of the ideas that came from other teams in the challenge were all very good ideas. They were dealing with things such as four-factor authentication, voice and facial recognition. But this was a very unique approach from Nataliya, the idea of looking at pattern or usage recognition on our devices. It’s a novel approach. It’s something that, hopefully, banks can take forward and try to implement.

    Knowledge@Wharton: Has there already been a reaction from banking institutions to the ideas generated by this contest?

    Ware: It was actually the banks that voted for Nataliya as the winner. We had a panel of four judges, which included some bankers from within Canada and some fintech experts, and we did audience voting online as well. It was the banking community itself that chose the winner. There was also a lot of engagement between the banks and Nataliya and the other team members. A lot of these ideas are going to be taken forward, I am sure.

    Knowledge@Wharton: Is there any possibility that some of those institutions will get involved in developing this idea?

    Ware: That is something that would happen directly between the banks and Nataliya, and it is something that we are trying to foster. We are trying to foster that engagement and contact between the banks and the students. What happens next depends on the direct relationship between the two of them.

    Knowledge@Wharton: Nataliya, can your idea be adapted and applied to sectors beyond banking?

    Mykhaylova: I am really interested in potentially scaling this solution. I am passionate about cybersecurity, and I think banking is a great place to start. But I feel like every day we have new channels through which we interact with the world, and we have new devices in our homes through which we interact. We have IoT devices [internet of things], we talk to Alexa and so on. They are really easy channels for attackers to get into our system. I think we can make pretty much any channel more secure.

    We have already started conversations with some banks in Canada as well as internationally, so I am very fortunate to have been part of the Sibos competition. But there is a lot of interest I received from the IoT technology sector, which is developing these devices that we all have in our homes now. I am quite excited about the interest and potential scalability of this.

    Knowledge@Wharton: The SWIFT Institute will have its 2018 conference in Sydney, Australia. Do you plan to stick with cybersecurity as the theme?
    “I am passionate about cybersecurity, and I think banking is a great place to start.” –Nataliya Mykhaylova
    Ware: We are going to run the Student Challenge again, but we will come up with a different idea. We’ve gone to the Australian banking community and explained the concept of the challenge to them. There is a great deal of excitement there. They are in the midst of coming up with the idea that is relevant to their community. At this point, we don’t know what the idea is. We have already contacted 43 universities across Australia to explain what Sibos is, what the SWIFT Institute is and the idea behind the challenge. There is a great deal of interest from universities.

    Knowledge@Wharton: What are the next steps for you, Nataliya?

    Mykhaylova: Our goal right now is to test this system on all of the possible use cases, finalize the models and launch it through a few partner banking institutions to really showcase the benefits that it could provide.

    As I mentioned, it can be run on any system, it’s fairly low cost and fast to set up, it’s an easy solution to implement, and it could deliver a high return on investment for banks. We are looking to finalize the model and launch it by next year.

    Knowledge@Wharton: Banks operate on different systems. Was that a challenge for you in the process of developing this concept?

    Mykhaylova: Yes. Banks have all of the infrastructure right now for various types of divisions and for most internal interactions between the employees as well as with the customers. That was one of the biggest aspects that we wanted to incorporate into this solution so that we could deploy a system at scale to detect issues before they have a chance to spread through the network, which I think is one of the biggest concerns with the recent cases of companies being compromised.

    Ware: Even within a single bank, they have multiple systems. There are so many different mergers and takeovers that have happened over the decades, and they all have these legacy systems that they try and put together. The idea of Nataliya having something that could be relatively easy to implement is going to be music to the banks’ ears. It’s a great initiative.

    Knowledge@Wharton: Do you have to consult with, in this case, the Canadian government for implementation?

    Mykhaylova: To some extent. Currently, this system can be operated across a number of different devices and trained on a number of different systems. Right now we are starting kind of small, really validating on very focused case scenarios. But later as it expands, I do feel that it would be important to involve the government because cybersecurity is going to be key for all of our operations. It would be important to think about it on a larger scale.

    Knowledge@Wharton: As banks have retreated from some places, a vast number of areas have become unbanked, and fintechs entering the space are driving a tremendous increase in financial inclusion. Is the cybersecurity solution that Nataliya has proposed relevant to those kinds of entities as well?

    Ware: I think it is. You’re absolutely right that as fintechs open up their systems and create new systems to provide banking services to anyone and everyone around the world, they create more opportunities for cyber attacks. A lot of those smaller fintech companies are not as well regulated, if they’re regulated at all, compared to the banks.

    The security they put in place might not be as good as what the banks have in place. Nataliya’s idea could be very relevant to them, and I think it’s absolutely necessary that a lot of those fintech companies try to adopt as stringent security measures as possible.
    “The people perpetrating cyberattacks actually operate as a business. They buy and sell information from and to each other.” –Peter Ware
    Knowledge@Wharton: Financial institutions may be hesitant to partner with each other, but sharing information would help ensure everyone has a high level of cybersecurity. Do you agree?
    Ware: Absolutely. Looking at how banks can share information is something that we have explored from a research perspective. Banks do share cyber-threat information with each other anyway, but we’re always looking for ways on how that can be improved.

    The people perpetrating cyber attacks actually operate as a business. They buy and sell information from and to each other. From a protection point of view, the banks are increasingly starting to think along those lines as well. The same would be true for any other industry.

    Knowledge@Wharton: Another concern for consumers is the speed in which the information from a breach is relayed to the public. Many within the IT community say time is needed to understand what happened. From that perspective, maybe Nataliya’s solution would speed up this process.

    Ware: Exactly. The earlier that those threats can be detected, the more time that banks and anyone else would have to be able to react to it.

    Mykhaylova: It takes an average of 98 days to detect an attack, and sometimes detection takes years. It is crazy that we still have to spend so much time detecting these things. Part of the reason is that attacks are becoming harder and harder to detect. There are new types of malware, new types of zero-day attacks and other threats that are becoming more and more common. So, it’s important to have systems that don’t need to be signature-based, systems that can detect those kinds of attacks without any prior knowledge of the threat. This is where our system excels: it can detect patterns in an unsupervised manner. You don’t need to build up those signature libraries ahead of time.

    Knowledge@Wharton: Do you think we will get to a point where potential break-ins are done and figured out in real time?

    Mykhaylova: Yes, so that is the goal. Our system runs in real time, continuously tracking things, categorizing them and evaluating how risky they are. I think that is key to be able to do that in real time.
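
    Scoring each reading as it arrives, rather than in batches, is what makes such a system real-time. The minimal streaming sketch below (again a generic illustration, not the team's detector) keeps a running mean and variance per metric and flags readings that drift far from the learned baseline.

```python
# Minimal streaming anomaly flagger using Welford's online mean/variance.
# Illustrative only; not the actual Pulse OS detector.
import math

class StreamingDetector:
    def __init__(self, threshold: float = 4.0):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.threshold = threshold  # flag readings beyond this many std devs

    def update(self, x: float) -> bool:
        """Return True if x looks anomalous, then fold it into the baseline."""
        anomalous = False
        if self.n > 30:  # wait for a stable baseline before flagging
            std = math.sqrt(self.m2 / (self.n - 1))
            anomalous = std > 0 and abs(x - self.mean) / std > self.threshold
        # Welford's update keeps mean/variance in O(1) memory per metric.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous

detector = StreamingDetector()
# Hypothetical CPU-load stream: slightly noisy baseline, then a sudden spike.
readings = [30 + 0.5 * math.sin(i) for i in range(100)] + [58.0]
print([i for i, r in enumerate(readings) if detector.update(r)])  # -> [100]
```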

    View at the original source