A US military assault on North Korea could prompt a last-gasp retaliatory nuclear attack by Pyongyang on Seoul and Japan, killing as many as 3.8 million people, according to a new analysis of the destructive potential of Kim Jong-un's arsenal.
Donald Trump has repeatedly warned that all options remain on the table – including the use of force – in dealing with the North Korean threat.
However, analysts say the US is hamstrung by Pyongyang’s growing weapons stockpile and the fact that major population centres lie well within its range.
A detailed analysis by 38 North, a programme at the US-Korea Institute at Johns Hopkins School of Advanced International Studies, has crunched the numbers, based on North Korea’s likely arsenal of as many as 25 weapons with possible yields of up to 250 kilotons, of the sort tested last month. Using population density figures and blast radius estimates, Michael Zagurek calculated that a single detonation over Seoul could kill almost 800,000 people, and one over Tokyo almost 700,000.
Those numbers rise to a combined total of almost four million dead if North Korea manages to launch multiple rockets carrying its latest thermonuclear bomb. And that is without considering the effects of radioactive fallout.
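The arithmetic behind such estimates is straightforward in outline: multiply the area inside each blast-damage radius by the local population density and an assumed fatality rate, then sum over the rings. The sketch below, in Python, illustrates only the structure of that calculation; the radii, density and fatality fractions are hypothetical placeholders, not the figures 38 North used.

import math

# Toy casualty estimate. All numbers are illustrative placeholders,
# NOT the values from the 38 North analysis.
POP_DENSITY = 16_000  # people per square km, roughly central Seoul

# (radius_km, fatality_fraction) for successive blast-damage zones
DAMAGE_RINGS = [(1.5, 0.90), (3.0, 0.35), (5.0, 0.05)]

def estimated_deaths(density_per_km2, rings):
    """Sum deaths over concentric damage rings around ground zero."""
    deaths = 0.0
    inner_area = 0.0
    for radius_km, fatality in rings:
        ring_area = math.pi * radius_km ** 2 - inner_area  # annulus, km^2
        deaths += ring_area * density_per_km2 * fatality
        inner_area = math.pi * radius_km ** 2
    return deaths

print(f"{estimated_deaths(POP_DENSITY, DAMAGE_RINGS):,.0f} deaths (illustrative)")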
The numbers are a sobering reminder of the difficulties of dealing with Mr Kim and his nuclear arsenal.
Mr Zagurek outlines the measures available to the US and North Korea’s likely response. “This could include such options as attempting to shoot down the test missiles or possibly attacking North Korea’s missile testing, nuclear related sites, missile deployment areas or the Kim regime itself,” he writes.
“The North Korean leadership might perceive such an attack as an effort to remove the Kim family from power and, as a result, could retaliate with nuclear weapons as a last gasp reaction before annihilation.”
Meanwhile, North Korea continues to press ahead with its nuclear and missile programmes. Fresh reports circulated on Friday that a long-range missile, capable of reaching the west coast of the US, was being prepared for testing.
Anton Morozov, a member of the Russian lower house of parliament's international affairs committee, made the claim after returning from a visit to North Korea.
"They are preparing for new tests of a long-range missile. They even gave us mathematical calculations that they believe prove that their missile can hit the west coast of the United States," he said, according to the RIA news agency.
"As far as we understand, they intend to launch one more long-range missile in the near future. And in general, their mood is rather belligerent."
Stress is a common feeling, and it seems we’re dealing with more of it in our lives than ever before. Work, finances, relationships — all have the ability to create chronic stress that can lead to anxiety, depression and other issues. Two Penn Integrates Knowledge (PIK) professors — Michael Platt, a Wharton marketing professor and professor of neuroscience and psychology at Penn, and Shelley Berger, a Penn professor of cell and developmental biology and director of The Epigenetics Institute — have teamed up in an interdisciplinary approach to learn more about how stress affects individuals. Platt and Berger recently spoke about their research and its implications on the Knowledge@Wharton show on SiriusXM channel 111. Following is an edited transcript of the conversation.

Knowledge@Wharton: Tell us the background behind this research.
Shelley Berger: It’s something we’ve been interested in for some time. The research I’ve been doing is focused on epigenetics, which has led into this interest in the environment and how it affects behavior. But in recent years, we’ve become very interested in the brain from work we started on the ant system. We actually have an ant lab at the Perelman School of Medicine. We’ve been interested in ants and studying their behavior. They’ve been a great model for behavior. Recently, we became interested in mouse behavior and studying the brain and epigenetics in the brain. Through conversations with Michael, that led to how this whole thing began on crossing species and studying mouse, primate and even human.

Knowledge@Wharton: Give us a better understanding of epigenetics.
Berger: Epigenetics is the study of how the environment affects the way genes are expressed. We commonly think about mutations affecting genes and gene expression. But epigenetics is the study of how the environment can affect the way genes are turned on and off without mutation. Specifically, it’s the effect of how chemicals coming in from the environment through our bodies can sort of decorate the proteins that associate with the DNA and change the way genes are turned on and off. That’s of great interest because many things are coming in from the environment, including stress. We’re talking about psychological effects and stress, and that can lead to changes in the brain. Post-traumatic stress disorder is a great example.
“Burnout is the number-one topic that I hear about from our MBA students, and they want to know how to deal with that…. Stress is everywhere.” –Michael Platt
Knowledge@Wharton: Michael, tell us about your interest in this research.
Michael Platt: We have been interested in behavior for a long time. We got interested in how genes affect the development of the structure and function of the brain and nervous system to ultimately produce behavior. Of course, that equation leaves out the central role of epigenetics that Shelley just described, which is that the same genes in the same body in a different environment can lead to very different outcomes. If you imagine yourself living with lots of family support, for example, you might respond better to the stresses of the environment — to a stock market crash, to problems in the economy, etc. But if you are living alone without any social support, then that stressful event or the long-term stress can be much more impactful on your body and your brain.
Shelley and I met two years ago, when I had just arrived here from Duke University…. As we were talking, we began to realize that we were interested in very similar things, although our expertise is very complementary. We had been working on the neurophysiology of behavior, the neurophysiology of social behavior and how genetics might influence that. But we had really no expertise in epigenetics at all.
Knowledge@Wharton: How will this research lead to a better understanding of the human brain and how it reacts to stress from environmental factors?
Berger: We’ve been studying mouse behavior for the past five years, and you can model stress in mice. There’s a method called “fear conditioning” by which you subject the mouse to stress, and then you can study what happens to the brain. How are learning and memory affected by the stress of fear conditioning?
We had been working with post-mortem human brains from people who had suffered neuro-degeneration from Alzheimer’s Disease, so we’d been developing a lot of methodology to study parts of the brain. Humans are difficult because, of course, they’re not experimental animals. We have to work out all the methods to carry out high-resolution studies of these chemical marks in the brain from human post-mortem samples. Then, starting to get interested in some results we had in mouse, we thought that the crossover between fear conditioning in mouse and PTSD in human could be really interesting. We got in touch with a guy who has a bio-bank of humans who had suffered from PTSD. We can take these methods that we’ve developed in mouse, then fine-tuned in human post-mortem brains through our work on Alzheimer’s, and now we can look at PTSD in humans. Michael filled in the middle part with primate research, which is a great link between the two.
Platt: Right. As useful as a mouse is for understanding a lot of the basic processes that we think are involved in the response of the human brain to stress, sometimes that translation is not perfect. That reflects the fact that mice and humans last shared a common ancestor much longer ago than humans and monkeys. For that reason, the human brain and the monkey brain are much more similar than the human brain and the mouse brain. There are certain aspects of behavior, in terms of behavioral complexity and social complexity, that are shared between humans and monkeys, that are less well-paralleled in the mouse.
The monkey gives us the capability to look at some of these other issues like social support and how that might help with the problems that one encounters with social competition. This is very different from what mice experience. People and monkeys live in groups for a good reason. It allows you to do things that you can’t do alone, like evade predators or chase away other groups and get resources. On the other hand, when you’re living in a group, you have to compete with everybody else. That’s a source of stress that I think we all can easily identify with.
This is a beautiful partnership because it has the potential to lead to understandings that can lead to new treatments. Also, putting my Wharton hat on, this kind of project can potentially illuminate ways in which we might be able to mitigate stress, say, within the workplace. That’s a huge problem these days. Burnout is the number-one topic that I hear about from our MBA students, and they want to know how to deal with that. You hear the same thing from residents who are training in medicine. Stress is everywhere.
Berger: The beautiful thing about the primates, with respect to humans, is that humans are not an experimental animal — we don’t know exactly what their social situation is in its entirety. But scientists watch [primates] all the time. Michael has scientists working with him, watching these animals in their natural habitat, so they know exactly what sort of place they have in the social spectrum. We can’t know all that about humans. As much metadata as we get about humans — information about their background — they’re not experimental animals. We can’t control and know everything.
“We’re not talking about the kinds of stresses our kids go through over smartphones. We’re talking about the kinds of things that really impede your ability to function.” –Shelley Berger
Knowledge@Wharton: That space between mouse and monkey feels like it’s very important, because there’s a more recent link between monkey and human than between human and mouse.
Platt: I think that’s why these studies need to be done. There are very few places where you can really do them and where people will collaborate to try to discover those links and make the connections. I think what Shelley can do in her lab, with really exquisite techniques and technologies that allow you to hone in on the mechanism, we can closely approximate in primates. But it’s not the same kind of thing that we can do.
On the other hand, what we can provide is a much richer understanding of the social environment and all of the other factors that might affect an individual in how they respond to stress and how that ultimately leads to changes in behavior. The particular population of monkeys that we study evolves. This is a population in which individuals are free to fight and to flee and to breed. Some monkeys do better than others. That means that whatever traits those individuals have — traits that allow them to cope with stress, to make alliances and connections — can be passed on to the next generation. We can see these kinds of changes in the population over time. That’s almost impossible to see in people.
Knowledge@Wharton: If you find enough links, you’re talking about being able to deal with a lot of prevalent medical issues — burnout, depression, suicide, PTSD.
Platt: It’s really compelling to see the data. There has been a lot of attention paid to a new report that analyzed the incredible increase in depression and anxiety, especially in teenagers, after the introduction of smartphones in the 2000s. There does seem to be, potentially, some link to the kinds of technological environments in which we find ourselves. I think one implication is that it leads to social disconnection, which I think reinforces what we’ve learned both in people and in monkeys that having social support is really critical. If you don’t have it, then things really fall apart.
Knowledge@Wharton: The social support is not as much of a factor when you’re talking about mice. But perhaps it is for ants because of how they live in colonies, correct?
Berger: That’s an astute observation. Ants are a great model for complex social interactions, and there are very few models like that. That’s why the medical school supports us to have an ant lab. They see the great translational aspects of studying ants and their social interactions.
Knowledge@Wharton: How do you think that information could play out when you’re talking about dealing with stress?
Berger: I’ll go back to our mouse research, which we’re translating both ways to the ants and would love to translate it to the monkeys and definitely to humans. We discovered that the machinery to make one of the chemicals that’s placed on the genes to regulate them is associated with the genes. It’s an enzyme that makes this chemical, and that enzyme can be inhibited. We think if we inhibit that enzyme, we can alter learning and memory in mouse. This is the kind of experiment you can do in mouse. We can’t do it in primates because they’re too complicated.
One of the things we’ve been talking about is whether an inhibitor like that could be relevant to humans. We’re not talking about the kinds of stresses our kids go through over smartphones. We’re talking about the kinds of things that really impede your ability to function.
Knowledge@Wharton: Going back to the business aspect, if these discoveries lead to treatments down the road, that could chip away at some pieces of the health care problem in this country.

Platt: Sure, and hopefully down the road is not so far down the road. The enzyme that Shelley’s talked about, it’s quite clear that one could imagine developing a drug to target that specific mechanism. If you had a combat veteran who was exposed to a blast or something else incredibly stressful, you could potentially deliver that drug at the right place and the right time and block the formation of bad memories. That would be an incredible opportunity.
“If you had a combat veteran who was exposed to a blast or something else incredibly stressful, you could potentially deliver [a] drug at the right place and the right time and block the formation of bad memories.” –Michael Platt
The other extension of this is finding ways to mitigate stress before it leads to something. We’re talking about chronic stress rather than acute stress. Are there ways that we can prevent it? There are a lot of options that people are exploring, such as improving your social connections, mindfulness and meditation, exercise. We know those things are all really good. We just don’t know how they work.
Knowledge@Wharton: Is there a difference between people who handle stress really well and those who don’t?
Platt: This is a really tricky and interesting and important question. Is it something you’re born with or something you can teach? There is fabulous work from Penn professor Angela Duckworth on grit, which is this resilience. It seems like some people have a great reservoir of grit, while it’s harder for others to demonstrate it. That leads to the question of whether you can develop it. Can you train it? Can you educate people to display that grit more often? If so, then that’s another approach to dealing with life stress.
Berger: It seems to me that the monkeys are the place to study that because Michael’s group sees which monkeys are accepted.
Platt: One thing that’s really fascinating about the monkeys that we study is that they have personalities, just like people do. The big five personality instruments that you would apply to a person, you could do the same with a monkey. We’ve done that, and their personalities are consistent over time. A monkey who’s very timid and anxious when he’s 6 months old is going to be similarly timid and anxious when he’s an adolescent. A monkey who is very bold or very aggressive, you’ll see those patterns continue. It’s not 100%. It’s not like it’s genetically determined. I think that’s where the fascinating thing is. Maybe 25% or 50% is purely genetic, and the rest is environment in how they respond to it. That’s why this is so important.
Shyam's Insights: Behavioral economics studies the effects of psychological, social, cognitive, and emotional factors on the economic decisions of individuals and institutions and, more generally, the impact of different kinds of behavior in different environments.

Behavioral economics does not assume unbounded rationality; it recognises the bounds of rationality and gives credibility to models of economic behavior that depart from strict rational choice. These models typically integrate insights from psychology, neuroscience and microeconomic theory; in so doing, they cover a range of concepts, methods, and fields. Now the article.
Richard H. Thaler, the “father of behavioral economics,” has this week won the 2017 Nobel Prize in Economics for his work in that field. Thaler has long been known for challenging a foundational concept in mainstream economics — namely, that people by-and-large behave rationally when making purchasing and financial decisions. Thaler’s research upended the conventional wisdom and showed that human decisions are sometimes less rational than assumed, and that psychology in general — and concepts such as impulsiveness — influence many consumer choices in often-predictable ways. Once considered an outlier, behavioral economics today has become part of generally accepted economic thinking, in large part thanks to Thaler’s ideas.

His research also has immediate practical implications. One of Thaler’s big ideas – his “nudge theory” – suggests that the government and corporations, to take one example, can greatly influence levels of retirement savings with unobtrusive paperwork changes that make higher levels of savings an opt-out rather than an opt-in choice. In fact, he co-authored a book, Nudge: Improving Decisions About Health, Wealth and Happiness, which became a best-seller.

In this Knowledge@Wharton interview, Katherine Milkman, a Wharton professor of operations, information and decisions — and a behavioral economist herself — discusses Thaler’s influence in economics and the practical applications of his ideas already underway. She attributes part of his success to his great clarity in thinking and in writing. She had interviewed professor Thaler for Knowledge@Wharton in 2016 regarding his then-new book, Misbehaving: The Making of Behavioral Economics.
An edited transcript of the conversation follows.
Milkman: Standard economics makes assumptions about the rationality of all of us, and essentially assumes that we all make decisions like perfect decision-making machines, like Captain Spock from Star Trek, who can process information at the speed of light, crunch numbers, and come up with exactly the right solution.
“Humans are not perfectly rational…. We have impulse-control problems, we have social preferences. We care about what happens to other people instead of being entirely selfish.”
In reality, that’s not the way humans make decisions. We often make mistakes. And Richard Thaler’s major contribution to economics was to introduce a series of predictable ways that people make errors, and to make it acceptable to begin modeling those kinds of deviations to make for a richer and more accurate description of human behavior in the field of economics.
Knowledge@Wharton: What would be a classic example of a decision that an economist would expect someone to make rationally, but in fact they don’t?
Milkman: Well, a great example from Richard’s own work relates to self-control challenges. And he has talked about the cashew problem, or the challenge, if you’re at a dinner party, of resisting the bowl of cashews that you know will spoil your dinner.
A traditional economist would expect that’s not a challenge. No one should have any difficulty withstanding that temptation. They should know it will spoil their dinner; they don’t need the cashews. And Thaler noted that, in fact, everyone struggles with this, and everyone breathes a sigh of relief when a host puts away that bowl of cashews so they’re not reachable and they’re not in front of everyone anymore.
It seems small, but it actually highlights a major challenge humans face with self-control, which can perhaps help explain the obesity epidemic, under-saving for retirement, and under-education among many groups. The range of things that this simple observation can begin to shed light on is just extraordinary. And that’s only one of his contributions.
Knowledge@Wharton: It’s this idea that human beings happen to be impulsive a lot of times, and that should be taken into account. They aren’t sitting there with calculators all the time figuring out an economic decision or a financial decision.
Milkman: That’s exactly right. That’s the contribution that Richard Thaler made to economics in a nutshell: that humans are not perfectly rational, sitting there with calculators. We have impulse-control problems, we have social preferences. We care about what happens to other people instead of being entirely selfish. We are limited in our rationality in a number of ways, and he has pointed that out over the last 50 years, and highlighted opportunities for policy makers to improve the lives of billions of people by taking these insights into account.
Knowledge@Wharton: It appears a little odd that these ideas were consigned to the corner for so long. Now people are talking about them more.
Milkman: I think that’s right. At some level it took a personality like Richard Thaler; he’s someone who likes to break the mold and misbehave, which is the title of his autobiography. It took someone like that to point out the absurdity of the assumptions in a standard economic model, and help change the assumptions so that we could start doing the science better.
Knowledge@Wharton: And those standard models, they worked really well a lot of the time, maybe even most of the time — it’s just that when they didn’t work, it could be a major failing. Is that right?
Milkman: I think that’s right. And it also meant there was an opportunity for improvement. So even if they were working fairly well much of the time, they weren’t actually fully accurate. And so the more accurate we can make them, the more opportunities we have to make better policy and so on.
Knowledge@Wharton: Let’s talk about some of the practical applications of his ideas. Thaler was a government advisor not long ago. Perhaps you could tell us about his contributions and about how he has a lot of practical ideas for how his concepts can be put to use.
Milkman: Let me give you a really concrete example from that book [Nudge] that I think is incredibly powerful. He points out that whenever we walk into a cafeteria, we’re faced with a wide range of options about what to put on our tray. Something comes first, something comes last, and the first thing we encounter is much more likely to be the thing we purchase and eat than the last thing, because we have an empty tray when we encounter that very first option.
“It took a personality like Richard Thaler … to point out the absurdity of the assumptions in a standard economic model.”
What this means is that whoever laid out the cafeteria was actually, whether or not they meant to, influencing our choices dramatically depending on where they placed certain foods. The first thing we encounter is much more likely to end up on our plate, as I just said, and therefore whatever they placed first, whether it was broccoli or chocolate cake, was more likely to end up on our tray.
There’s no such thing as neutral choice architecture. Thaler pointed out that we should try to architect environments where people are making decisions in a way that, in his words, nudges us towards better choices. So why not put the broccoli first and the chocolate cake last in order to help people be healthier in a cafeteria?
Thaler also talks a lot about how to improve retirement savings outcomes using similar understandings of psychology. For instance, why not assume that people want to save for retirement and allow them to opt out rather than what was historically typically done when you signed up or started working at a new employer, which was to assume people didn’t want to enroll unless they said please sign me up for the retirement savings program. With small changes [in] the environments where we make choices, that don’t restrict choice in any way … we can have a huge impact on human life for the better.
Knowledge@Wharton: Another interesting idea — along the same lines — is that you agree in advance that when you get a raise in the future, a bigger chunk of that would go into your retirement than just the standard percentage based on what you had chosen in advance. It turns out through the “miracle” of compounding interest that these things can make a huge difference at retirement.
Milkman: That’s right. And you had specifically asked about how governments were using this. I also want to note that many folks in governments read the book Nudge, and there are now literally hundreds of offices in governments around the world that have developed what they lovingly refer to as Nudge Units, where they’re applying insights from this field to try to improve outcomes for citizens.
And we have one in the U.S. government, one that was founded, I believe, in 2015, if I’m getting my dates right. And before that, the very first Nudge Unit came in the U.K. under David Cameron, and it was literally referred to as the Nudge Unit. Now it’s called the Behavioral Insights Team, and they have operations in the U.S. and in the U.K. They’re helping many cities in the U.S. improve their outcomes for citizens. And so he’s just had an enormous impact, not only here but abroad.
Knowledge@Wharton: Thaler won the Nobel Prize in Economics for his work in behavioral economics, but as we were talking earlier you noted he considers himself a behavioral scientist. Can you talk about the distinctions there?
Milkman: One of the things that is important about Richard Thaler’s work is that it bridges disciplines, and so while many economic Nobel Prizes are awarded to people who are truly only economists and only recognized in economics, some go to people who have impacted a far wider range of fields, and this is one of those.
So Richard Thaler often refers to himself not only as a behavioral economist but as a behavioral scientist, because there’s a community that includes many who aren’t economists who are doing this work that is spurred by his ideas, his thinking about peculiarities of human behavior that aren’t captured by economic science.
So behavioral science is a broader term. It includes psychologists, many folks in business schools who don’t have an identity as a psychologist or an economist. You can find the stray neuroscientists and sociologists who think of themselves as behavioral scientists as well.
Knowledge@Wharton: It’s interesting that there’s the word “behavioral” in here, and “psychology.” I don’t hear the word “emotion,” when it would appear that that is part of it all. We talk about emotional intelligence — is that somehow connected to this idea? That also seems to be an area that is slightly outside of the strictly rational, and it applies to behavior, and it is talked about oftentimes in the work setting.
Milkman: That’s a great question. I think that emotions specifically haven’t been exactly the center of Richard’s work, but at some level they are an underpinning of all behavioral science, and all of behavioral economics, because if you fundamentally ask where do these deviations from optimal decision making come from, many are driven by emotions.
So a lot of Richard’s work looking at social preferences — for instance, the fact that we intrinsically seem to care about other people’s outcomes and not only our own — is fundamentally the result of emotion. We emotionally care about other people; we have an emotional reaction when we see something happening that we think is unfair to someone else.
“The very first Nudge Unit came in the U.K. under David Cameron, and it was literally referred to as the Nudge Unit.”
You can also think about an emotional reaction, or a visceral reaction leading to impulse control problems in many situations, and his work on self-control then is all about emotions. So while he doesn’t typically get recognized for being a scholar of emotions, at some level everything we have learned about limited rationality is somehow connected to emotions it seems.
Knowledge@Wharton: So tell me some of the ways that he has influenced many other researchers, including yourself.
Milkman: Well he opened up new fields of inquiry that really weren’t in existence before he began doing this work. I personally study self-control and nudging, and those are two things that were not really being studied by the community of behavioral scientists in nearly the same way, not with the same lens, before he came along and made them central to behavioral economics and created this field, along with his predecessor, Daniel Kahneman, who was also a Nobel laureate roughly 15 years ago. Thaler has been instrumental in opening up doors for young scientists to think about things that previously weren’t talked about by rigorous academics.
Knowledge@Wharton: What are some of the things you are looking at that you might not have looked at if you hadn’t had that influence in your life?
Milkman: Well, one of my areas is looking at something I call the Fresh Start Effect. We’ve done research showing that at the beginning of new cycles in our lives (the start of a new year is an obvious one to think about, but also the start of a new week, or the days following a birthday), we have renewed self-control and extra motivation to pursue our goals.
And we find that people visit the gym at a higher rate at the beginning of these new cycles, for instance, and they’re more likely to search the term “diet” on Google at the start of these new cycles, and they’re more likely to create goal contracts on goal-setting websites. And that draws directly on Richard Thaler’s work, pointing out that we don’t treat time and money as if it is simply all the same and fungible; we actually use what he calls “mental accounts.”
So we think of time as having these categories, or money as having these categories, and we don’t move money around between the categories — or move time around. So a new year is a new account, it’s a new category, and we treat it differently. When we have that new year, in my work we show that it feels like a fresh start — we feel like all our failings from last year, that’s a separate category, it’s behind us.
And Richard has used this mental accounting theory to explain lots of anomalies in the way people engage with their personal finances among other things. So that’s an example of something that influenced my work.
Knowledge@Wharton: Regarding Thaler’s work, I read that, for example, if you create something called a heating account in your personal budget, you end up spending more on heating. How does one influence the other?
Milkman: The idea is that we treat money as if it is labeled. So say you get a gift certificate — this is the study I actually did in graduate school — to use at the grocery store where you shop for groceries every week. Say it’s for $10. Well, you’re just $10 richer overall in all of life, right, because you were going to spend at least $10 at the grocery store next week anyway, since you go there every week. But because you label money, instead of feeling like, “Oh, I have $10 for whatever I want this week; I can go to the movies or out for lunch an extra time,” we feel like that money is labeled for groceries and we act richer in our grocery account. We actually go splurge and buy things like seafood that we wouldn’t normally buy instead of just buying whatever extra thing would make us happier in life.
So it’s a labeling phenomenon, when money comes in in one place, we think of it as only usable in that one place in spite of the fact that traditional economics would say we should recognize all money as totally fungible. It’s just another $10 in your pocket.
Knowledge@Wharton: What haven’t I asked you about Richard Thaler that would be important for people to understand?
Milkman: I think one of the most amazing things about Richard is how well he writes, and how simple his insights about human behavior are, and easy for anyone to appreciate. He’s actually the first scholar of behavioral economics whom I read when I was a graduate student actually studying computer science and business. I picked up a wonderful collection of his essays in a book called The Winner’s Curse about anomalies and the way that economic agents behave.
I was immediately captivated because it was so incredibly simple and elegant, and funny and true, and I think many of the scholars who have been influenced by him wouldn’t have been as influenced if it weren’t for his incredible ability to communicate in that way. So for anyone listening and anyone thinking about being either a scholar or a communicator in other ways, it just emphasizes the importance of clear, simple writing, and clear, simple examples to have a huge impact on the world.
Knowledge@Wharton: Is there any other kind of theory, or set of theories or ideas, out there that is emerging — that people are thinking about — that could be parallel to behavioral economics and that probably will turn out to be important, but people just don’t get it yet?
Milkman: Well, one of Richard Thaler’s disciples — and his disciples are all incredibly impressive in their own right — is Sendhil Mullainathan, an economist at Harvard who thinks the next big thing is how machine learning will change social science. And I think he’s on to something; I think that could be the next revolution in the social sciences — using machine learning to better predict everything.
Knowledge@Wharton: So we’re heading to a future of algorithms, I guess.
Milkman: Well, certainly a future where algorithms do more to help social science.
A zero-index waveguide stretches a wave of light infinitely long, creating a constant phase throughout the wire. (Image courtesy of Second Bay Studios/Harvard SEAS)
In 2015, researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) developed the first on-chip metamaterial with a refractive index of zero, meaning that the wavelength of light within it could be stretched infinitely long. The metamaterial represented a new method to manipulate light and was an important step forward for integrated photonic circuits, which use light rather than electrons to perform a wide variety of functions. Now, SEAS researchers have pushed that technology further – developing a zero-index waveguide compatible with current silicon photonic technologies. In doing so, the team observed a physical phenomenon that is usually unobservable — a standing wave of light.
The research is published in ACS Photonics. The Harvard Office of Technology Development has filed a patent application and is exploring commercialization opportunities.
“We were able to observe a breathtaking demonstration of an index of zero.”
When a wavelength of light moves through a material, its crests and troughs get condensed or stretched, depending on the properties of the material. How much the crests of a light wave are condensed is expressed as a ratio called the refractive index — the higher the index, the more squished the wavelength.
When the refractive index is reduced to zero the light no longer behaves as a moving wave, traveling through space in a series of crests and troughs, otherwise known as phases. Instead, the wave is stretched infinitely long, creating a constant phase. The phase oscillates only as a variable of time, not space.
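To see why in standard wave-optics terms (a textbook relation, not a formula specific to this device), write a plane wave in a medium of refractive index n as

\[ E(x,t) = E_0 \cos(kx - \omega t), \qquad k = \frac{2\pi n}{\lambda_0} = \frac{n\omega}{c}, \]

where \(\lambda_0\) is the vacuum wavelength. As \(n \to 0\), the wavenumber \(k \to 0\) and the wavelength \(\lambda = \lambda_0/n\) diverges, so the spatial dependence drops out and \(E(x,t) \to E_0\cos(\omega t)\): the field oscillates in time with the same phase at every point in the material.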
This is exciting for integrated photonics because most optical devices use interactions between two or more waves, which need to propagate in sync as they move through the circuit. If the wavelength is infinitely long, matching the phase of the wavelengths of light isn’t an issue, since the optical fields are the same everywhere.
But after the initial 2015 breakthrough, the research team ran into a catch-22. Because the team used prisms to test whether light on the chip was indeed infinitely stretched, all of the devices were built in the shape of a prism. But prisms aren’t particularly useful shapes for integrated circuits. The team wanted to develop a device that could plug directly into existing photonic circuits and for that, the most useful shape is a straight wire or waveguide.
The researchers — led by Eric Mazur, the Balkanski Professor of Physics — built a waveguide but, without the help of a prism, had no easy way to prove if it had a refractive index of zero. Then, postdoctoral fellows Orad Reshef and Philip Camayd-Muñoz had an idea.
Usually, a wavelength of light is too small and oscillates too quickly to measure anything but an average. The only way to actually see a wavelength is to combine two waves to create interference.
Imagine strings on a guitar, pinned on either side. When a string is plucked, the wave travels through the string, hits the pin on the other side and gets reflected back — creating two waves moving in opposite directions with the same frequency. This kind of interference is called a standing wave.
Reshef and Camayd-Muñoz applied the same idea to the light in the waveguide. They “pinned-down” the light by shining beams in opposite directions through the device to create a standing wave.
The individual waves were still oscillating quickly, but they were oscillating at the same frequency in opposite directions, meaning that at certain points they canceled each other out and at other points they added together, creating an all-light or all-dark pattern. And, because of the zero-index material, the team was able to stretch the wavelength large enough to see.
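In equation form, this is the textbook superposition of two counter-propagating waves of equal amplitude and frequency (standard optics again, not a formula from the paper):

\[ E(x,t) = E_0 \sin(kx - \omega t) + E_0 \sin(kx + \omega t) = 2E_0 \sin(kx)\cos(\omega t), \]

a pattern that is fixed in space and oscillates in time, with dark nodes wherever \(\sin(kx) = 0\), spaced \(\lambda/2\) apart. Because \(k = n\omega/c\) approaches zero in a zero-index medium, that node spacing grows without bound, which is what stretched the standing wave to a size visible under an ordinary microscope.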
This may be the first time a standing wave with infinitely-long wavelengths has ever been seen.
Real-time, unprocessed video of standing waves of light in a 15-micrometer-long, zero-index waveguide taken with an infrared camera. The perceived motion is caused by atmospheric disturbances to the free-standing fibers that couple light onto the chip, changing the relative phase between the two incoming beams. Credit: Harvard SEAS

“We were able to observe a breathtaking demonstration of an index of zero,” said Reshef, who recently accepted a position at the University of Ottawa. “By propagating through a medium with such a low index, these wave features, which in light are typically too small to detect directly, are expanded so you can see them with an ordinary microscope.”
“This adds an important tool to the silicon photonics toolbox,” said Camayd-Muñoz. “There's exotic physics in the zero-index regime, and now we're bringing that to integrated photonics. That's an important step, because it means we can plug directly into conventional optical devices, and find real uses for zero-index phenomena. In the future, quantum computers may be based on networks of excited atoms that communicate via photons. The interaction range of the atoms is roughly equal to the wavelength of light. By making the wavelength large, we can enable long-range interactions to scale up quantum devices.”
The paper was co-authored by Daryl I. Vulis, Yang Li, and Marko Loncar, Tiantsai Lin Professor of Electrical Engineering at SEAS. The research was supported by the National Science Foundation and was performed in part at the Center for Nanoscale Systems (CNS).
Crying is a natural response humans have to a range of emotions, including sadness, grief, joy, and frustration. But does crying have any health benefits? It is not unusual to cry, and both sexes cry more than people may assume. In the United States, women cry an average of 3.5 times per month and men cry an average of 1.9 times a month. This article explores why people cry and what health benefits crying may have.
Crying is a natural response to emotions or irritants like dust in the eyes.
Humans produce three types of tears:
Basal: The tear ducts constantly secrete basal tears, a protein-rich antibacterial liquid that helps keep the eyes moist every time a person blinks.
Reflex: These are tears triggered by irritants such as wind, smoke, or onions. They are released to flush out these irritants and protect the eye.
Emotional: Humans shed tears in response to a range of emotions. These tears contain a higher level of stress hormones than other types of tears.
When people talk about crying, they are usually referring to emotional tears.
Benefits of crying
People may try to suppress tears if they see them as a sign of weakness, but science suggests that doing so could mean missing out on a range of benefits. Researchers have found that crying:
1. Has a soothing effect
Self-soothing is when people:
regulate their own emotions
calm themselves
reduce their own distress
A 2014 study found that crying may have a direct, self-soothing effect on people. The study explained how crying activates the parasympathetic nervous system (PNS), which helps people relax.
2. Gets support from others
As well as helping people self-soothe, crying can help people get support from others around them. As this 2016 study explains, crying is primarily an attachment behavior, as it rallies support from the people around us. This is known as an interpersonal or social benefit.
3. Helps to relieve pain
Research has found that in addition to being self-soothing, shedding emotional tears releases oxytocin and endorphins. These chemicals make people feel good and may also ease both physical and emotional pain. In this way, crying can help reduce pain and promote a sense of well-being.
4. Enhances mood
Crying may help lift people's spirits and make them feel better. As well as relieving pain, oxytocin and endorphins can help improve mood. This is why they are often known as "feel good" chemicals.
5. Releases toxins and relieves stress
When humans cry in response to stress, their tears contain a number of stress hormones and other chemicals. Researchers believe that crying could reduce the levels of these chemicals in the body, which could, in turn, reduce stress. More research is needed into this area, however, to confirm this.
6. Aids sleep
A small study in 2015 found that crying can help babies sleep better. Whether crying has the same sleep-enhancing effect on adults is yet to be researched. However, it follows that the calming, mood-enhancing, and pain-relieving effects of crying above may help a person fall asleep more easily.
7. Fights bacteria
Crying helps to kill bacteria and keep the eyes clean, as tears contain lysozyme, an antibacterial enzyme. A 2011 study found that lysozyme had such powerful antimicrobial properties that it could even help to reduce risks presented by bioterror agents, such as anthrax.
8. Improves vision
Basal tears, which are released every time a person blinks, help to keep the eyes moist and prevent mucous membranes from drying out.
As the National Eye Institute explains, the lubricating effect of basal tears helps people to see more clearly. When the membranes dry out, vision can become blurry.
When to see a doctor
Crying has a number of health benefits, but frequent crying may be a sign of depression.
Crying in response to emotions such as sadness, joy, or frustration is normal and has a number of health benefits.
However, sometimes frequent crying can be a sign of depression. People may be depressed if their crying:
happens very frequently
happens for no apparent reason
starts to affect daily activities
becomes uncontrollable
Other signs of depression include:
having trouble concentrating, remembering things, or making decisions
feeling fatigued or without energy
feeling guilty, worthless, or helpless
feeling pessimistic or hopeless
having trouble sleeping or sleeping too much
feeling irritable or restless
not enjoying things that were once pleasurable
overeating or undereating
unexplained aches, pains, or cramps
digestive problems that do not improve with treatment
If a person is experiencing symptoms of depression, or someone they know is, then they should talk to a doctor. Should a person feel suicidal, or know someone who is feeling that way, they should call emergency services.
Takeaway
Crying is a normal human response to a whole range of emotions that has a number of health and social benefits, including pain relief and self-soothing effects.
However, if crying happens frequently, uncontrollably, or for no reason, it could be a sign of depression. If this is the case, it is a good idea to speak to a doctor.
A Fatigue Cost Calculator reveals that a U.S. employer with 1,000 workers can lose about $1.4 million annually due to costs associated with exhausted workers.
Sleep disorders and sleep deficiency are hidden costs that affect employers across the U.S. Seventy percent of Americans admit that they routinely get insufficient sleep, and 30 percent of U.S. workers and 44 percent of night-shift workers report sleeping less than six hours a night. In addition, an estimated 50 million–70 million people have a sleep disorder, often undiagnosed. In total, the costs attributable to sleep deficiency in the U.S. were estimated to exceed $410 billion in 2015, equivalent to 2.28 percent of the gross domestic product.
Analysis of existing data, using a new Fatigue Cost Calculator developed through the Sleep Matters Initiative at Brigham Health for the National Safety Council (NSC), reveals that a U.S. employer with 1,000 workers can lose about $1.4 million each year in absenteeism, diminished productivity, health care costs, accidents, and other occupational costs associated with exhausted employees, many of whom have undiagnosed and untreated sleep disorders.
Introduced at the NSC Congress and Expo, the Fatigue Cost Calculator is free online. Employers can use it to determine how much money a tired workforce costs their business by entering specific data — including workforce size, industry, and location — to predict the prevalence of sleep deficiency and common sleep disorders among their employees. Using an algorithm generated by integrating information from sleep science literature and publicly available government data, the calculator can estimate both the prevalence of employee sleep deficiency and the resulting financial loss.
It also estimates the savings that might be expected from implementation of a sleep health education program that includes screening for untreated sleep disorders, such as obstructive sleep apnea and insomnia.
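The article does not publish the calculator's internal formula, but the structure it describes (prevalence estimates combined with per-employee cost components) can be sketched roughly in Python. In the sketch below, the per-employee average of about $1,400 is simply back-calculated from the article's own $1.4 million-per-1,000-workers figure; the category breakdown is a hypothetical illustration, not the calculator's real inputs.

# Rough sketch of the calculation's structure. The category breakdown is
# hypothetical; the ~$1,400-per-employee average is implied by the article's
# $1.4M-per-1,000-workers figure, not taken from the actual calculator.

PER_EMPLOYEE_ANNUAL_COST = {
    "absenteeism": 350,
    "diminished_productivity": 550,
    "excess_health_care": 350,
    "accidents_and_other": 150,
}  # hypothetical split summing to $1,400 per employee per year

def annual_fatigue_cost(n_employees: int) -> int:
    """Estimated annual fatigue cost (USD) for a workforce of n_employees."""
    return n_employees * sum(PER_EMPLOYEE_ANNUAL_COST.values())

print(annual_fatigue_cost(1_000))   # 1,400,000 -- the article's ~$1.4M example
print(annual_fatigue_cost(52_000))  # 72,800,000 -- the same ballpark as the
                                    # ~$80M Fortune 500 estimate quoted below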
“We estimate that the costs of fatigue in an average-sized Fortune 500 company consisting of approximately 52,000 employees is about $80 million annually,” said Matthew Weaver, a scientist with the Brigham Health Sleep Matters Initiative who helped develop the calculator.
The mission of the Sleep Matters Initiative, led by investigators from Brigham Health and Harvard Medical School, is to improve treatment of sleep and circadian disorders in order to improve health, safety, and performance, and to promote change in social norms around sleep health.
“Promotion of healthy sleep is a win-win for both employers and employees, enhancing quality of life and longevity for workers while improving productivity and reducing health care costs for employers,” said Charles A. Czeisler, director of the Division of Sleep and Circadian Disorders at Brigham and Women’s and Baldino Professor of Sleep Medicine at Harvard Medical School.
“Additionally, occupational fatigue-management programs can increase knowledge of sleep disorders, educate participants on the impact of reduced alertness due to sleep deficiency, and teach fatigue countermeasures, as well as screen for untreated sleep disorders.” Other findings revealed by the Fatigue Cost Calculator include:
A national transportation company with 1,000 employees likely loses more than $600,000 a year because of tired employees. Motor vehicle crashes are the leading cause of workplace deaths, underscoring the need for alert, attentive employees.
More than 250 employees at a 1,000-worker national construction company likely have sleep disorders, which increase the risk of being injured or killed on the job. The construction industry has the highest number of on-the-job deaths each year.
A single employee with obstructive sleep apnea can cost an employer more than $3,000 a year in excess health care costs.
An employee with untreated insomnia is present but not productive for more than 10 full days of work annually, and accounts for at least $2,000 in excess health care costs.
An average Fortune 500 company could save nearly $40 million a year if half of its workforce engaged in a sleep-health program.
“This research reinforces that sleepless nights hurt everyone,” said Deborah A.P. Hersman, president and CEO of the National Safety Council. “Many of us have been conditioned to just power through our fatigue, but worker health and safety on the job are compromised when we don’t get the sleep we need. The calculator demonstrates that doing nothing to address fatigue costs employers a lot more than they think.”
Development of the Fatigue Cost Calculator was supported by a contract from the National Safety Council to the Brigham and Women’s Physicians Organization.
Smokestacks belching hazardous gases, rivers so polluted they catch fire, workers in identical overalls turning bolts with wrenches: For many Americans, the word “manufacturing” conjures up negative, old-fashioned images. Or, we think of it as something that takes place in less-developed nations, as has increasingly been the case. Many have said that factories will continue to locate wherever the work can be done most cheaply, despite political messaging about bringing back manufacturing jobs.
Manufacturing accounts for about 13% of the U.S. economy. Should we even focus on trying to “bring it back,” now that information and services — the “knowledge economy” — seem a more promising path? Andrew Liveris firmly believes we should. In fact, he said in a recent talk at the University of Pennsylvania that manufacturing is essential to our knowledge economy, and to America’s competitiveness on the global stage.
Liveris is the executive chairman of DowDuPont, a $73 billion holding company (the two giant chemical companies merged in September), and Chairman and CEO of The Dow Chemical Company. He has advised both the Obama and Trump administrations on manufacturing issues. (Liveris was head of Trump’s now-defunct American Manufacturing Council.)
The author of Make It in America: The Case for Reinventing the Economy, in which he writes that America’s economic growth and prosperity depend upon a strong manufacturing sector, Liveris was interviewed at Penn’s Perry World House during Penn Global Week by Wharton School Dean Geoffrey Garrett. Garrett referred to Liveris as “the cheerleader of advanced manufacturing.”

A Key Difference

Garrett stated that President Trump has been talking about bringing U.S. economic growth back up to the level it was before the 2008 Great Recession. Since World War II the economy has typically increased about 4% a year, but in 2016, the economy grew just 1.6%. What would it take to see those higher numbers again, he asked?
Liveris commented that the very nature of growth has changed dramatically because human civilization is going through “one of its every-few-hundred-years massive tipping points,” due to digitization. He said this phenomenon was as disruptive as Ford’s introduction of mass-produced cars in the horse-and-carriage era. This tipping point is causing enormous dislocation, including the elimination of jobs and the loss of meaningful work. Moreover, he said, “the job of 20 or 30 years ago is paying less — wage rates are down and all of that — so there are a lot of unhappy and angry people out there.”
And America is under-prepared, including from a policy point of view, he said. Liveris talked about the profound implications for business leaders as the forces of globalization collide with the forces of digitization. He said most corporations are not yet nimble enough to re-design themselves to accommodate these trends.
Yet, he said, substantial economic expansion is possible. “In the immediate term, can we get 3.5% growth in this country? You bet we can,” said Liveris. He noted that instituting policies to spur foreign direct investment would help as they did in the Clinton and Reagan eras. He also cited tax reform, infrastructure spending and business deregulation as important factors. He added, though, that the U.S. currently has “a massive, massive issue in how our government is functioning,” so change is not likely to happen overnight.
According to Liveris, there is a widespread lack of understanding among the public of what today’s manufacturing — which he referred to as advanced manufacturing — actually consists of. (Definitions vary, but the OECD defines advanced manufacturing technology as computer-controlled or micro-electronics-based equipment used to make products.) Liveris stated, “We are generating a new wave of technology to generate a knowledge economy. And a knowledge economy will need things made. They’ll just be made differently.”
Advanced manufacturing might include making smartphones, solar cells for roofs, batteries for hybrid cars, or innovative wind turbines. Liveris said he had visited a DowDuPont factory the previous week that is working on advanced composites to enable wind turbines with blades the size of football fields. The goal is to produce blades light and efficient enough to make wind power a viable reality. “That’s technology. That’s advanced manufacturing,” he said.
“In the immediate term, can we get 3.5% growth in this country? You bet we can.”
He asked the audience to envision “a knowledge economy based on the collision and intersection of the sciences.” Those who think the tech revolution is only about “the Facebooks and the Googles, connectivity and all that,” are dead wrong, he said.
Not Enough Work, or Not Enough Workers?

Doesn’t the use of more robotics and automation lead to job loss? Garrett asked. Or, is the problem that workers aren’t appropriately skilled to fill new kinds of jobs? Liveris said he was firmly in the second camp. “I have job openings now at Dow and at DuPont that I can’t get the skills for. And engineering jobs open.”
He elaborated that the way machines provide insights is changing, and noted, “We humans will have to read those insights. I can’t [find] enough of those humans. That’s the issue we’re dealing with in this country.”
Liveris said that 7.5 million technology jobs left America between 2008 and 2016 because the country wasn’t supplying appropriate candidates. The reaction of many businesses was to re-locate to “the Chinas, the Indias, the places that were supplying that sort of skill.” In the United States right now, he said, there are half a million technology jobs open, but American educational institutions are only graduating roughly between 50,000 and 70,000 candidates per year, so there’s a “massive under-supply.” In the next three years, there will be 3.5 million jobs created, and Liveris said the U.S. might only be able to fill about 1.5 million of them through a combination of graduation and immigration. “Unless immigration is fooled with, which is a whole other issue.”
According to Liveris, a critical reason for America to revive its manufacturing sector is to promote innovation. “Something that we at Dow and many of us in manufacturing know: If you have the shop floor, if you make things, you have the prototype for the next thing, so you can innovate.” Conversely, if you stop making those things, your R&D diminishes dramatically, he said.
Liveris said that 7.5 million technology jobs left America between 2008 and 2016 because the country wasn’t supplying appropriate candidates.
The U.S. should be incentivizing the technologies that America is good at, said Liveris. Everybody knows about Silicon Valley, Liveris noted, but fewer know that the U.S. is prominent in advanced sensors, which are critical to the progress of the Internet of Things (IoT) sector. Other areas in which America stands out are lightweight composites and 3D printing. He noted that technologies like these have been developed at various institutions “in a somewhat haphazard way, which is very American. That’s great. That’s creativity.” But, he said, shouldn’t we as a country double down on the things we do best and become the world leader?
Liveris called advanced manufacturing “the best path for the United States” and said, “We’re so naturally suited for it if we’d just get the policies to help us.” He believes that the U.S. should already be at the most advanced layer of economic development based on technology. “We have cheap money, we’ve got skills, we’ve got low-cost energy. We should be having an investment boom in this country,” he said.
He noted, though, that we have created barriers to investment that are preventing this from happening. Borrowing an expression from Indian Prime Minister Narendra Modi, Liveris said that there were two kinds of countries in the world: red tape countries (hampered by bureaucracy and over-regulation) and red carpet countries (welcoming to investors). The U.S. has unfortunately become a red tape country, he said.
He called investment “the biggest job creator out there,” and stated that Germany, for example, has figured out how to do this. “It’s the poster child of investment in Europe.” China, too, has mastered it, and “other countries who want to trade with the United States are mastering it because they incentivize it.”
“If you have the shop floor, if you make things, you have the prototype for the next thing, so you can innovate.”
Closing America’s Education Gap
A big proponent of STEM education, Liveris said that American schools are not graduating the workers we need. “We have convinced ourselves that a four-year college degree of the skills we used to have in the last century is what we should still keep producing.” He said that re-tooling American education needs to happen immediately, with STEM education incorporated at every level, including elementary school.
Liveris said DowDuPont is funding a STEM-dedicated school, in conjunction with Michigan State University, in Dow Chemical’s home base of Midland, Michigan. The school will offer curricula for kindergarten through 12th grade, with MSU course offerings for college students, according to the Michigan news site MLive.
The pilot school will also provide teacher enrichment programs. Liveris said that American teachers need to be better trained and rewarded. “We do something very bad in this country, which is we don’t celebrate teachers at the elementary, middle and high school level. We should be putting them on pedestals. And giving them the skills to teach STEM.”
Happiness is in short supply at work these days. Deadlines, staff shortages, productivity pressures and crazy stress push even the most talented and temperate people to want to quit their jobs. But that’s not a realistic option, even for folks in the C-suite. Annie McKee, director of the Penn CLO and Medical Education programs at the University of Pennsylvania, where she teaches leadership and emotional intelligence, has a better idea. In her book, How To Be Happy At Work, she outlines three requirements that workers need to feel more fulfilled on the job. McKee spoke about the concepts in her book on the Knowledge@Wharton show on SiriusXM channel 111. The following is an edited transcript of the conversation. Knowledge@Wharton: How many people do you think are not happy at work?
Annie McKee: I don’t think we even have to guess. Gallup has been studying people for years, and upwards of two-thirds of us are either neutral, which means we don’t care, or we’re actively disengaged. Disengagement and happiness go hand in hand, so an awful lot of people are not happy at work. Unhappy people don’t perform as well as they could. When we’re negative, cynical, pessimistic, we simply don’t give our all, and our brains don’t work that well just when we need people’s brains to be working beautifully.
Knowledge@Wharton: Has this problem ramped up in the last two decades or so? As much as digital is phenomenal for us, a lot of people feel under pressure because of what digital does to accelerate change.
McKee: The world is changing at a rapid pace, obviously. As much as we love our always-connected world, it can mean that we work all of the time. We’re always one minute away from that next email that’s going to bring tragedy or crisis to our working lives. Some of us never turn it off, and that’s not good for us.
Knowledge@Wharton: Where did your idea for the book come from?
McKee: I’ve worked in organizations all over the world for decades now. I’ve looked at leadership practices, emotional intelligence, culture and all of those things that impact the bottom line and people’s individual effectiveness. I decided to take another look and see what people were trying to tell us. All of these studies that we did around the world were practical studies. People were telling us, “I want to be happy, I want to be fulfilled, I want to love my job, I’m not as happy or as fulfilled as I could be, and here is what I need.” And then they went on to tell us what they need.
Knowledge@Wharton: Are executives aware of their employees’ problems? Are they also aware that they may be susceptible to this?
“Unhappy people don’t perform as well as they could.”
McKee: It doesn’t matter where you sit in the organization: you are susceptible to disengagement and unhappiness, even at the very top. We think if you’re making all of that money and you’ve got all of that power and that great job, it’s going to be perfect. The best leaders in our organizations, at the very top and all the way down to the shop floor, understand that people matter, feelings matter, and it’s job number one to create a climate where people feel good about what they’re doing, where they’re happy, engaged and ready to share their talents.
Knowledge@Wharton: What are the key ingredients to finding that happiness?
McKee: From my work, I’ve discovered three things. Number one, people feel that they need to have impact on something that is important to them, whether it’s people or a cause or the bottom line. They need to feel that their work is purposeful, and it’s tied to values that they care about. Number two, we need to feel optimistic that our work is tied to a personal vision of the future. The organization’s vision isn’t enough. As good as it may be, we have to know that what we’re doing ties to a personal vision of our future.
Number three, we need friends at work. We’ve learned over the course of our lives you shouldn’t be friends with people at work, that it’s dangerous somehow, that it will cloud your judgment. I don’t agree. I think we need to feel that we are with our tribe in the workplace, that we belong, that we’re with people that we respect and who respect us in return. We need warmth, we need caring, and we need to feel supported.
Knowledge@Wharton: I would think most people looking for a job, whether they are coming out of college or shifting careers mid-life, are looking for that area that would make them happy. When you have that expectation of being in the right sector to begin with, you hope that you have the happiness to go along with it.
McKee: We do hope that we get into the right organization and there’s a good fit between our values and the organization’s values. We really try hard. But we get in there and the pressures of everyday life, and the crises and the stress can really tamp down our enthusiasm and our happiness.
Also, a lot of us are susceptible to what I call happiness traps. We end up doing what we think we should do. We take that job with that fancy consulting firm or that wonderful organization not because we love it and not because it’s a fit, but because we think we should. Frankly, some of us have ambition that goes into overdrive. Ambition is a great thing, until it’s not.
Knowledge@Wharton: Is that part of the reason why we see more people who have been with a company for 20 years, 25 years and suddenly pivot? They may be going to work for a nonprofit. You see these stories popping up, especially with people in the C-suite.
McKee: You do see that. You see senior leaders all of a sudden saying, “Enough is enough, I [want to do] something different.” But I really want to be clear, you don’t always have to run away. In fact, you want to run towards something. If you feel you’re not happy in the workplace, quitting your job is probably not the first answer, and some of us can’t. What we need to do is figure out what we need, what we want, how to have impact, what will make us feel hopeful about our future, what kind of people we want to work with and for, and then go find that either in our organization or elsewhere.
Happiness starts inside each of us. It’s tempting to blame that toxic boss or that horrible organizational culture, and those things may be true. But if you want to be happy at work, you first have to look inside and ask what is it that you want? What will make you feel fulfilled? Which happiness traps have you fallen prey to? And get yourself out.
Knowledge@Wharton: What are the happiness traps?
McKee: There’s what I call the “should” trap. We do what we think we should do. We show up to work acting like someone we’re not. That is soul-destroying, and it’s fairly common. [There’s also] the “ambition” trap. When our ambition drives us from goal to goal and we don’t even stop to celebrate the accomplishment of those goals, something is wrong.
Some of us feel helpless, stuck. The “helplessness” trap may be the most serious of all. It’s really hard to get out of because we don’t feel we have any power. My message is we have a lot more power and control over not only our attitude but what we do and how we approach our work on a daily basis and in the long term than maybe we think we do.
“Ambition is a great thing, until it’s not.”
Knowledge@Wharton: Earlier in your life, you found yourself fitting into these patterns as well.
McKee: I did. Early in my life I wasn’t teaching in a wonderful institution like Penn. I didn’t even have what you would call a professional career. I had jobs like waiting tables and cleaning houses and taking care of elderly people. I was making ends meet. And it wasn’t easy.
I had two choices, I could either say to myself this is miserable and I hate it, or I could look for something that was fulfilling in what I did. I tried to do that. I did find aspects of my job, whether it was cleaning houses and feeling like I was doing a good job or finding a mentor in some of these workplaces, that really made it worthwhile to me.
Knowledge@Wharton: Do you have to be 100% happy all of the time? I think if you can find areas of happiness, it can make your job or your life so much easier to go through.
McKee: Happiness isn’t just about feeling good every moment of the day, and it’s not just about pleasure. That’s hedonism, and we’re not seeking that. Frankly, a little bit of stress is a good thing. It pushes us to be innovative and to do things differently and to push harder. So, it’s not about just feeling good. But we do need a foundation of purpose, hope and friendships. We do need to know that what we do matters at work, that we are doing something that is tied to our future, and that the people we work with are great.
Knowledge@Wharton: You mentioned taking the time to recognize your accomplishments, but there are companies that want you to push on to the next project. They don’t give you the opportunity to slow down even for an hour to enjoy it.
McKee: Most of our organizations are really hard-driving, especially publicly traded organizations. I’m not even sure they’re that different than other institutions these days. The pressure is on everywhere, and the reality is we do move from project to project, goal to goal. What choices can we make in the middle of that culture? We don’t have to be victims of our organizational culture, and we don’t have to be victims of that bad boss you might have or maybe you’ve had in the past. We can make choices about what we do with our time, our energy and our emotional stance.
Knowledge@Wharton: Going back to the friends component in the workplace, does it matter where those friends come from within the structure of the company? A lot of people say you have to be careful if you want to try to be friends with the boss.
McKee: It doesn’t matter where your friends are, but it does matter whether or not you have your eyes open and recognize what people are thinking about how you are behaving and who you are friends with. You’ve got to be aware of your organization’s culture and the rules of the road. If you’re violating some of those rules — for example, going up the hierarchy and building friendships with people who are a couple levels above you or maybe in another division — you need to understand what the implications of that are. And you need to be maybe a little bit careful.
Knowledge@Wharton: How does the middle manager deal with this?
McKee: Middle managers get it from all sides. They are pulled in every direction, and it is probably the hardest job in any organization. They, more than anybody, need to hear this message. Life is too short to be unhappy at work. Middle managers have a tremendous impact on the people who work for them, and recognizing that you more than anybody are the creator and the curator of the culture in the organization is an important place to start.
Knowledge@Wharton: Sometimes managers forget about the life people have outside of work.
McKee: We’re here at the Wharton School, and we’ve been studying management now for over 100 years. Some of the early approaches to managing organizations are really destructive, and one legacy of that early research is the attitude that people don’t matter and that private lives ought to be left at the door of the office. It’s impossible to leave our private lives at the door of the office. It doesn’t mean that we talk about it all of the time, but we bring our experiences with us and we bring our feelings with us. Managers need to recognize that.
It’s also hard to find what is commonly called work-life balance. By the way, I don’t like that phrase. I think it’s a myth. I don’t think there is any magic formula that says if we get it just right we’re going to be happy at work and happy at home. It’s more about understanding that the lines are blurred between work and home now, and we need to learn how to manage our choices and our attention.
Knowledge@Wharton: What about those who work remotely and can feel very isolated and disconnected?
McKee: I understand the isolation and feeling kind of left out. The reality is that it takes a lot more effort to build relationships when we work remotely. We need to take time. When we’re working remotely, we get on the phone, we do the work that needs to be done, we talk about the project, and we get off the phone. That leaves us feeling kind of empty. We need to take that extra five minutes to have a chat, have a laugh, feel like we are in a relationship with somebody. It takes effort and self-management because the temptation is to just do the work. You talk about the gig economy, right? We’re all sort of working in a portfolio manner these days. We take on this bit of work and that bit of work, and much of it is virtual.
“Life is too short to be unhappy at work.”
I think we need to figure this out because the bottom line is that we have not changed as human beings. We still need to feel like we belong, we need to feel that we’re cared for, and we need to be able to care for others in return. If we’re working far away, we’ve got to take extra time and make a concerted effort to build those relationships in a different kind of way than if we’re in person. I’m a big proponent of working from home or working remotely. I think it’s really helpful to individuals and companies. People who are able to work at home feel trusted, and when you feel trusted you are more committed to your organization. A lot of people report being able to get more done away from the office because you don’t have the interruptions. The downside is that you have to find a way to keep the relationships fresh and alive because that’s as important as getting that project done.
Knowledge@Wharton: Companies seem to be more aware of employee happiness than they used to be, which is a good thing. Do you think we’re going to continue down that path?
McKee: Companies are more aware, so are enlightened CEOs and enlightened leaders. I think we will continue down the path for the following reasons. It’s not just nice-to-have, and it’s not just about feeling good. We’ve got solid research coming out of positive psychology, neuroscience and management that tells us that feelings matter. When we feel good, we’re smarter. And we need smart employees now. We need people who are committed, who are engaged. The research is pretty clear. Happiness before success. If we want our employees to be at their best, we need to care about their emotional well-being as well as their physical well-being.
Does the human ability to innovate suggest an immunity to total extinction?
Yes and no. Currently, innovation reduces our chance of extinction in some ways, and increases it in others. But if we innovate cleverly, we could become just about immune to extinction.
The species that survive mass extinctions tend to share three characteristics. They're widespread. This means local disasters don't wipe out the entire species, and some small areas, called refugia, tend to be unaffected by global disasters. If you're widespread, it's more likely that you have a population that happens to live in a refugium.
They're ecological generalists. They can cope with widely varying physical conditions, and they're not fussy about food.
They're r-selected. This means that they breed fast and have short generation times, which allows them to rapidly grow their populations and adapt genetically to new conditions.
Innovation gives humans the ability to be widespread ecological generalists. With technology, we can live in more diverse conditions and places than any other species. And while we can't (currently) grow our populations rapidly like an r-selected species, innovation does allow us to adapt quickly at the cultural level.
Technology also increases our connections to one another, and connectivity is a two-edged sword. Many species consist of a network of small, local populations, each of which is somewhat isolated from the others. We call this a metapopulation. The local populations often go extinct, but they are later re-seeded by others, so the metapopulation as a whole survives.
Humans used to be a metapopulation, but thanks to innovation, we're now globally connected. Archaeologists believe that many past civilizations, such as the Easter Islanders, fell because of unsustainable ecological and cultural innovations. The impact of these disasters was limited because these civilizations were small and disconnected from other such civilizations.
These days, a useful innovation can spread around the world in weeks. So can a lethal one. With many of the technologies and chemicals we're currently inventing, we can't be certain about their long-term effects; human biology is complex enough that we often can't be absolutely certain something won't kill us in a decade until we've waited a decade to see. We try to be careful and test things before they're released, and the probability that any particular invention could kill us all is tiny, but since we're constantly innovating, it's a real possibility.
Pandemics pose the same problem for a well-connected species. There are certain scenarios in which species extinction is really hard to avoid; fortunately, they're also very unlikely, but we are definitely not immune to this.
The most likely cause of our extinction, in my opinion, is innovation in machine learning/AI. This could destroy the planet, but even if it doesn't, humans will be ultimately redundant to the dominant systems. They might keep us alive in a zoo somewhere, but I doubt it. A happier scenario (to me at least) is transhumanism, where humans become extinct in a sense because we've managed to liberate ourselves from biology.
So how could innovation prevent our extinction? We seed the galaxy with independently evolving human populations to create a new metapopulation. These local populations would hopefully be sufficiently isolated that some would survive an innovation or disaster that wipes out the rest. They would, of course, evolve in response to local conditions, perhaps creating several new species. So you could say this is still extinction, but it's as close as we'll come to persistence in our ever-changing universe.
Indian Oil Corporation Limited (IOCL) invites applications for 45 posts of Junior Engineering Assistant on a contract basis at Mathura Refinery, Uttar Pradesh. Apply online before 31 October 2017. The official website is iocl.com. Qualification/eligibility conditions, how to apply and other rules are given below…
Advt. No. : MR/HR/RECT/JEA(ALL INDIA)/2017
IOCL Job Details :
Post Name : Junior Engineering Assistant
No of Vacancy : 45 Posts
Pay Scale : Rs. 11900-32000/-
Discipline wise Vacancy :
Chemical : 15 Posts
Electrical : 07 Posts
Mechanical : 13 Posts
Instrumentation : 09 Posts
Fire & Safety : 01 Post
Eligibility Criteria for IOCL Recruitment :
Educational Qualification : 3 years Diploma in Electrical/Mechanical/Instrumentation/Instrumentation & Electronics / Instrumentation and Control Engineering from a recognized Institute/University OR 3 years Diploma in Chemical/Refinery & Petrochemical Engg. Or BSc (Maths, Physics, Chemistry or Industrial Chemistry) from a recognized Institute/University.
Age Limit : 18 to 26 years as on 31.10.2017
IOCL Selection Process : Selection will be based on a Written Test and a Skill/Proficiency/Physical Test (SPPT).
Application Fee : General and OBC candidates have to pay Rs. 150/- through Online mode using either Debit/Credit Card or through Net-Banking only. SC/ST/PwD/ExSM candidates are exempted from payment of application fee.
How to Apply IOCL Vacancy : Interested candidates may apply Online through the website https://www.iocl.com from 09.10.2017 to 31.10.2017. Candidates may also send a hard copy of the Online application along with self-attested copies of all supporting documents by ordinary post to DGM(HR), HR Dept, Administration Building, Mathura Refinery, Mathura, Uttar Pradesh-281005 on or before 07.11.2017.
Important Dates to Remember :
Starting Date for Submission of Online Application : 09.10.2017
Last Date for Submission of Online Application : 31.10.2017
Last Date for Submission of Hard Copy of Online Application : 07.11.2017
US officials have been given a stark warning about the potential dangers of a nuclear electromagnetic pulse (EMP) bomb detonated by reclusive North Korea.
According to experts, such a blast could end up indirectly killing 90% of Americans by knocking out the power grid and all electrical devices within its reach.
Dr. William R. Graham and Dr. Peter Vincent Pry of the EMP Commission outlined to the US House of Representatives the dangers posed by a high-altitude detonation, in which a hydrogen bomb is exploded at an altitude of between 30 and 400 km above a target. Such a weapon would knock out refrigeration for food storage, electric lighting, communications and water processing.
"With the development of small nuclear arsenals and long-range missiles by new, radical U.S. adversaries, beginning with North Korea, the threat of a nuclear EMP attack against the U.S. becomes one of the few ways that such a country could inflict devastating damage to the United States," the pair warned in a written statement.
"It is critical, therefore, that the U.S. national leadership address the EMP threat as a critical and existential issue, and give a high priority to assuring the leadership is engaged and the necessary steps are taken to protect the country from EMP."
Dr. Graham, a former science advisor to President Reagan, and Dr. Pry, a former CIA officer, urged President Trump to prepare for a possible EMP strike.
They also warned that North Korea's weaponry is becoming more of an issue as the reclusive nation continues to schedule ICBM tests.
"The EMP Commission finds that even primitive, low-yield nuclear weapons are such a significant EMP threat that rogue states, like North Korea, or terrorists may well prefer using a nuclear weapon for EMP attack, instead of destroying a city."
The higher an EMP bomb is detonated, the wider the range of destruction.
At 400km (250 miles), an EMP bomb would be just under the orbit of the International Space Station and the resulting detonation would be enough to affect the majority of the US mainland.
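The claim that altitude drives coverage follows from simple line-of-sight geometry. Here is a minimal back-of-envelope sketch (our illustration, not the Commission's own calculation) using the standard horizon approximation sqrt(2·R·h); it is purely geometric and says nothing about weapon yield or field strength.

```python
# Back-of-envelope sketch: line-of-sight radius from a high-altitude burst.
# Illustrative only; ignores yield, field strength and atmospheric effects.
import math

EARTH_RADIUS_KM = 6371

def horizon_radius_km(altitude_km: float) -> float:
    """Approximate ground radius visible from a given altitude."""
    return math.sqrt(2 * EARTH_RADIUS_KM * altitude_km)

for h in (30, 400):
    print(f"burst at {h:>3} km -> line-of-sight radius ~{horizon_radius_km(h):,.0f} km")
# 30 km -> ~620 km; 400 km -> ~2,260 km, i.e. roughly continental scale
```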
Featured excerpt from WTF? What’s the Future and Why It’s Up to Us by Tim O’Reilly
If you’re an entrepreneur or aspiring to become one, Tim O’Reilly is the kind of mentor you should try to enlist. He’s been there and done that in the New Economy since, well, pretty much since there’s been a New Economy.
O’Reilly started writing technical manuals in the late 1970s, and by the early 1980s, he was publishing them, too. His company, O’Reilly Media Inc. (formerly O’Reilly &amp; Associates), based in Sebastopol, California, helped pioneer online publishing, and in the early 1990s, it launched the first web portal, Global Network Navigator, which AOL acquired in 1995.
Since then, O’Reilly has been an active participant in a host of developments from open source to Gov 2.0 to the maker movement. He is founding partner of San Francisco-based O’Reilly AlphaTech Ventures LLC, an early stage venture investor, and he sits on a number of boards, including Code for America Labs Inc., PeerJ, Civis Analytics Inc., and Popvox Inc. He has also garnered a huge Twitter following @timoreilly.
In his new book, WTF?, O’Reilly takes issue with the vogue for disruption. “The point of a disruptive technology is not the market or competitors that it destroys. It is the new markets and the new possibilities that it creates,” he writes. “I spend a lot of time urging Silicon Valley entrepreneurs to forget about disruption, and instead to work on stuff that matters.” In the following excerpt, edited for space, O’Reilly shares “four litmus tests” for figuring out what that means to you.
1. Work on something that matters to you more than money. Remember that financial success is not the only goal or the only measure of achievement. It’s easy to get caught up in the heady buzz of making money. You should regard money as fuel for what you really want to do, not as a goal in and of itself.
Whatever you do, think about what you really value. If you’re an entrepreneur, the time you spend thinking about your values will help you build a better company. If you’re going to work for someone else, the time you spend understanding your values will help you find the right kind of company or institution to work for, and when you find it, to do a better job.
Don’t be afraid to think big. Business author Jim Collins said that great companies have “big hairy audacious goals.” Google’s motto, “access to all the world’s information,” is an example of such a goal.
There’s a wonderful poem by Rainer Maria Rilke that retells the biblical story of Jacob wrestling with an angel, being defeated, but coming away stronger from the fight. It ends with an exhortation that goes something like this: “What we fight with is so small, and when we win, it makes us small. What we want is to be defeated, decisively, by successively greater beings.”
The most successful companies treat success as a by-product of achieving their real goal, which is always something bigger and more important than they are. Former Google executive Jeff Huber is chasing this kind of bold dream of using technology to make transformative advances in health care. Jeff’s wife died unexpectedly of an aggressive undetected cancer. After doing everything possible to save her and failing, he committed himself to making sure that no one else has that same experience. He has raised more than $100 million from investors in the quest to develop an early-detection blood test for cancer. That is the right way to use capital markets. Enriching investors, if it happens, will be a by-product of what he does, not his goal. He is harnessing all the power of money and technology to do something that today is impossible. The name of his company — Grail — is a conscious testament to the difficulty of the task. Jeff is wrestling with the angel.
2. Create more value than you capture. It’s pretty easy to see that a financial fraud like Bernie Madoff wasn’t following this rule, and neither were the titans of Wall Street who ended up giving out billions of dollars in bonuses to themselves while wrecking the world economy. But most businesses that prosper do create value for their community and their customers as well as themselves, and the most successful businesses do so in part by creating a self-reinforcing value loop with and for others. They build or are part of a platform on which people who don’t work directly for them can build their own dreams.
Investors as well as entrepreneurs must be focused on creating more value than they capture. A bank that loans money to a small business sees that business grow, perhaps borrow more money, hire employees who make deposits and take out loans, and so on. An investor who bets on the future of an unproven technology can do the same. The power of this cycle to lift people out of poverty has been demonstrated for centuries.
If you’re succeeding at the goal of creating more value than you capture, you may sometimes find that others have made more of your ideas than you have yourself. It’s OK. I’ve had more than one billionaire (and an awful lot of start-ups who hope to follow in their footsteps) tell me how they got their start with a couple of O’Reilly books. I’ve had entrepreneurs tell me that they got the idea for their company from something I’ve said or written. That’s a good thing.
Look around you: How many people do you employ in fulfilling jobs? How many customers use your products to make their own living? How many competitors have you enabled? How many people have you touched who gave you nothing back?
3. Take the long view. The musician Brian Eno tells a story about the experience that led him to conceive of the ideas that led to the Long Now Foundation, a group that works to encourage long-term thinking. In 1978, Brian was invited to a rich acquaintance’s housewarming party, and as the neighborhood his cab drove through became dingier and dingier, he began to wonder if he was in the right place. “Finally [the driver] stopped at the doorway of a gloomy, unwelcoming industrial building,” he wrote. “Two winos were crumpled on the steps, oblivious. There was no other sign of life in the whole street.” But he was at the right address, and when he stepped out on the top floor, he discovered a multimillion-dollar palace.
“I just didn’t understand,” he said. “Why would anyone spend so much money building a place like that in a neighborhood like this? Later I got into conversation with the hostess. ‘Do you like it here?’ I asked. ‘It’s the best place I’ve ever lived,’ she replied. ‘But I mean, you know, is it an interesting neighborhood?’ ‘Oh — the neighborhood? Well ... that’s outside!’ she laughed.”
In the talk many years ago where I first heard him tell this story, Brian went on to describe the friend’s apartment, the space she controlled, as “the small here,” and the space outside, full of winos and derelicts, as “the big here.” He went on from there, along with others, to come up with the analogous concept of the Long Now. We need to think about the long now and the big here, or one day our society will enjoy neither.
It’s very easy to make local optimizations, but they eventually catch up with you. Our economy has many elements of a Ponzi scheme. We borrow from other countries to finance our consumption, and we borrow from our children by saddling them with debt, using up nonrenewable resources, and failing to confront great challenges in income inequality, climate change, and global health.
Every new company trying to invent the future has to think long-term. What happens to the suppliers whose profit margins are squeezed by Walmart or Amazon? Are the lower margins offset by higher sales or do the suppliers faced with lower margins eventually go out of business or lack the resources to come up with innovative new products? What happens to driver income when Uber or Lyft cuts prices for consumers in an attempt to displace competitors? Who will buy the products of companies that no longer pay workers to create them?
It’s essential to get beyond the idea that the only goal of business is to make money for its shareholders. I’m a strong believer in the social value of business done right. We should aim to build an economy in which the important things are a natural outcome of the way we do business, paid for in self-sustaining ways rather than as charities to be funded out of the goodness of our hearts. Whether we work explicitly on causes and the public good, or work to improve our society by building a business, it’s important to think about the big picture, and what matters not just to us, but to building a sustainable economy in a sustainable world.
4. Aspire to be better tomorrow than you are today. I’ve always loved the judgment of Kurt Vonnegut’s novel Mother Night: “We are what we pretend to be, so we must be careful about what we pretend to be.” This novel about the postwar trial of a Nazi propaganda minister who was secretly a double agent for the Allies should serve as a warning to those (politicians, pundits, and business leaders alike) who appeal to people’s worst instincts but console themselves with the thought that the manipulation is for a good cause.
But I’ve always thought that the converse of Vonnegut’s admonition is also true: Pretending to be better than we are can be a way of setting the bar higher, not just for ourselves but for those around us.
People have a deep hunger for idealism. The best entrepreneurs have the courage that comes from aspiration, and everyone around them responds to it. Idealism doesn’t mean following unrealistic dreams. It means appealing to what Abraham Lincoln so famously called “the better angels of our nature.”
That has always been a key component of the American dream: We are living up to an ideal. The world has looked to us for leadership not just because of our material wealth and technological prowess, but because we have painted a picture of what we are striving to become. If we are to lead the world into a better future, we must first dream of it.
With the boom in digital technologies, the world is producing over 2.5 exabytes of data every day. To put that into perspective, it is equivalent to the memory of 5 million laptops or 150 million phones. The deluge of data is forecast to increase with the passing day and with it has increased the need for powerful hardware that can support it.
This hardware advancement means faster processing speeds and larger storage systems. Companies worldwide are investing in powerful computing, with R&amp;D teams constantly racing to build improved processors. The current stream of data needs computers that can perform complex calculations within seconds.
Big data and machine learning have pushed the limits of current IT infrastructure for processing large datasets effectively. This has led to the development of a new and exciting paradigm, quantum computing, which has the power to dramatically increase processing speed. But before that, let us understand the current technology and the need for quantum technology.
Current Computing Technology and Its Limitations
Processing technology has come a long way over the past several decades with the development of fingernail-sized microprocessors: single-chip computers, known as integrated circuits, packed with millions of transistors. True to Moore’s law, the number of transistors packed into a single chip has doubled roughly every 18 months for the past 50 years. Today, it has reached two billion transistors in one chip.
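As a quick sanity check on that doubling claim, here is a small illustrative calculation (ours, not the article's): 50 years of doubling every 18 months is about 33 doublings, a growth factor of roughly ten billion. The starting count of 1 is a stylized assumption chosen only to show the growth factor.

```python
# Illustrative arithmetic for Moore's law: doubling every 18 months.
# Starting from a count of 1 is a stylized assumption, not a sourced figure.

def moore_growth(start_count: float, years: float, doubling_months: float = 18.0) -> float:
    """Project a transistor count forward under steady doubling."""
    doublings = years * 12 / doubling_months
    return start_count * 2 ** doublings

print(f"{moore_growth(1, 50):.2e}")  # ~1.1e10: a ten-billion-fold increase over 50 years
```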
Semiconductor manufacturers are now producing chips with gates as small as 5 nanometers, below which, it is said, a transistor will not work. To keep performance tracking Moore’s-law predictions, the industry has instead begun increasing the number of processor “cores” per chip, though many software-level constraints limit how long this approach can stay relevant.
In 2016, two researchers at Lawrence Berkeley National Laboratory created the world’s smallest transistor, with a gate size of one nanometer. This is a phenomenal feat in the computing industry, but making a chip with billions of such transistors will face many challenges. The industry has already prepared for transistors to stop shrinking further, and Moore’s law is likely to come to a halt.
As the computations pertaining to current applications like big data processing or intelligent systems get more complex, there is a need for higher and faster computing capabilities than the current processors can supply. This is one of the reasons why people are looking forward to quantum computing.
What is Quantum Computing
Quantum computing merges two great scientific revolutions: computer science and quantum physics. It has counterparts to all the elements of conventional computing, such as bits, registers and gates, but at the hardware level it does not depend on Boolean logic. Its quantum bits are called qubits. A conventional bit stores either 0 or 1, but a qubit can exist in a superposition of 0 and 1, effectively holding both values (and every weighting between them) simultaneously. Because it can store these states at once, it can also process them at once, working in parallel in a way that, for certain problems, makes it dramatically faster than today’s computers.
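A minimal sketch of that idea, using nothing beyond standard linear algebra (our illustration, not part of the original article): a qubit's state is a normalized complex 2-vector, and n qubits together span 2^n amplitudes, which is where the parallelism comes from.

```python
# A qubit as a normalized complex 2-vector; purely illustrative.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)  # the classical-like state |0>
ket1 = np.array([0, 1], dtype=complex)  # the classical-like state |1>

# An equal superposition: "0 and 1 at once" until measured.
plus = (ket0 + ket1) / np.sqrt(2)

# Measurement probabilities are the squared magnitudes of the amplitudes.
print(np.abs(plus) ** 2)  # [0.5 0.5]

# n qubits span 2**n amplitudes: two qubits already give a 4-amplitude state.
two_qubits = np.kron(plus, plus)
print(two_qubits.shape)  # (4,)
```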
The workings of these computers are rather complex, and the entire field of quantum computing is still largely abstract and theoretical. For our purposes, the key point is that qubits are realized in atoms or other particles, such as ions, that can exist in different states and be switched between them.
Application in Big Data
Progress in these fields relies critically on processing power. The computational requirements of big data analytics are currently placing a considerable strain on computer systems. Since 2005, the focus has shifted to parallelism, using multiple cores instead of a single fast processor. However, many problems in big data cannot be solved simply by adding more and more cores. Splitting work among multiple processors helps, but it is complex to implement, and some problems are inherently sequential: each step depends on the result of the one before it.
At the Large Hadron Collider (LHC) at CERN in Geneva, particles travel at almost the speed of light around a 27 km ring, producing some 600 million collisions per second, of which only about one in a million is chosen at preselection. Of the preselected events, only 1 in 10,000 is passed to a grid of processor cores, which in turn keeps roughly 1 in 100, processing data at about 10 GB/s. In all, the LHC captures some 5 trillion bits of data every second, and even after discarding 99% of it, CERN still analyzes about 25 petabytes of data a year!
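For a sense of scale, the 25-petabytes-a-year figure can be converted into a sustained data rate with a one-line calculation (our arithmetic, using only the number quoted above):

```python
# Unit conversion for the figure quoted above: 25 PB/year as a sustained rate.
SECONDS_PER_YEAR = 365 * 24 * 3600

bytes_per_year = 25e15  # 25 petabytes
print(f"~{bytes_per_year / SECONDS_PER_YEAR / 1e9:.2f} GB/s sustained")  # ~0.79 GB/s
```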
Workloads of this scale are exactly where quantum computing could help, but with current resources its application to big data remains a thing of the future. If it were practical, quantum computing would be useful for specific tasks such as factoring the large numbers used in cryptography, weather forecasting, and searching through large unstructured datasets in a fraction of the time to identify patterns and anomalies. Indeed, developments in quantum computing could make today’s encryption obsolete in a jiffy. With such computing power, it might one day be possible to build datasets storing complete information, such as the genetics of every human who has ever lived, and machine learning algorithms could find patterns in the characteristics of those humans while also protecting their identities. Clustering and classification of data would also become much faster tasks.
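The "fraction of the time" claim for unstructured search refers to the kind of speedup offered by Grover's algorithm, which needs on the order of sqrt(N) quantum queries where a classical scan needs about N/2 on average. A small comparison (our illustration; the article itself names no specific algorithm):

```python
# Query-count comparison for unstructured search: classical scan vs. Grover.
import math

for n_items in (1_000_000, 1_000_000_000):
    classical = n_items // 2      # expected classical lookups
    grover = math.isqrt(n_items)  # ~sqrt(N) quantum queries
    print(f"N={n_items:>13,}: classical ~{classical:,}, Grover ~{grover:,}")
```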
Looking Forward
The initial results and developments in quantum technologies are encouraging. In the last fifteen years, quantum computers have grown from 4 qubits to 128 qubits. Google’s 5-qubit computer has demonstrated certain basic calculations that, if scaled up, could handle the complex calculations needed to make the quantum computing dream come true. However, we are unlikely to see such computers for years or even decades.
Looking ahead, quantum computers should allow faster analysis and integration of our enormous data sets, which will improve and transform our machine learning and artificial intelligence capabilities.
New project to use podcasts, video to illuminate bias, improve decision-making.
When it comes to some of the most important decisions we make — how much to bid for a house, the right person to hire, or how to plan for the future — there is strong scientific evidence that our brains play tricks on us. Luckily, Mahzarin Banaji has a solution: Understand how your mind works so that you can learn to outsmart it.
The Richard Clarke Cabot Professor of Social Ethics and chair of the Department of Psychology is launching a new project — dubbed Outsmarting Human Minds — aimed at using short videos and podcasts to expose hidden biases and explore ways to combat them.
“The behavioral sciences give us insights into what gets in the way of reaching our professional goals, of being true to our own deepest values,” Banaji said. “The science is not new, but its message is still one most people have difficulty grasping and understanding.”
Banaji and research fellow Olivia Kang, with funding from PricewaterhouseCoopers (PwC) and a grant from Harvard’s Faculty of Arts and Sciences, developed Outsmarting Human Minds as a way to deliver up-to-date thinking about hidden biases in an engaging way.
“Everyone wants to know what’s happening in their minds, and they want to know what they can do to make better decisions,” Kang said. “The science is out there; the challenge is getting it to the public in a way that captures their interest.”
The impetus for the project came in part from Banaji’s perspective as a senior adviser on faculty development to Edgerley Family Dean of the Faculty of Arts and Sciences Michael D. Smith.
Speaking of that role, Banaji said, “I try to expose what the mind sciences have taught us about how we make decisions. The hope is that the faculty will put this information to use … in decisions about how to imagine the future of their disciplines.”
Banaji has taught decision-making to any number of organizations, including corporations, nonprofits, and the military. Questions about how to confront hidden biases are common.
“I want to put the science in the hands of people — or rather, in the heads of people — and have them ask: How can I outsmart my own mind? How can I be the person I want to be?”
She emphasized that watching a video or listening to a podcast isn’t enough to address hidden bias. “Learning brings awareness and understanding. It cannot itself put an end to the errors we make,” she said. “To achieve corrections that will matter to society, we must learn to behave differently.”
Said Kang: “We want to deliver this information to people in a way that doesn’t make them feel that they’re a bad person if they have these biases. The fact is, we all do. This is about acknowledging that hidden biases are a product of how we’re wired and the culture we live in. And then agreeing that we want to do something about it — that we can use this knowledge to improve the decisions we make in life and at work.”
Just as artificial intelligence is helping doctors make better diagnoses and deliver better care, it is also poised to bring valuable insights to corporate leaders — if they’ll let it.
At first blush, the idea of artificial intelligence (AI) in the boardroom may seem far-fetched. After all, board decisions are exactly the opposite of what conventional wisdom says can be automated. Judgment, shrewdness, and acumen acquired over decades of hard-won experience are required for the kinds of complicated matters boards wrestle with. But AI is already filtering into use in some extremely nuanced, complicated, and important decision processes.
Consider health care. Physicians, like executives and board members, spend years developing their expertise. They evaluate existing conditions and deploy treatments in response, while monitoring the well-being of those under their care.
Today’s medical professionals are wisely allowing AI to augment their decision-making. Intelligent systems are enabling doctors to make better diagnoses and deliver more individualized treatments. These systems combine mapping of the human genome and vast amounts of clinical data with machine learning and data science. They assess individual profiles, analyze research, find patterns across patient populations, and prioritize courses of action. The early results of intelligent systems in health care are impressive, and they will grow even more so over time. In a recent study, physicians who incorporated machine-learning algorithms in their diagnoses of metastatic breast cancer reduced their error rates by 85%. Indeed, by understanding how AI is transforming health care, we can also imagine the future of how corporate directors and CEOs will use AI to inform their decisions.
Complex Decisions Demand Intelligent Systems
Part of what’s driving the use of AI in health care is the fact that the cost of bad decisions is high. That’s the same in business, too: Consider that 50% of the Fortune 500 companies are forecasted to fall off the list within a decade, and that failure rates are high for new product launches, mergers and acquisitions, and even attempts at digital transformation. Responsibility for these failures falls on the shoulders of executives and board members, who concede that they’re struggling: A 2015 McKinsey study found that only 16% of board directors said they fully understood how the dynamics of their industries were changing and how technological advancement would alter the trajectories of their company and industry. The truth is that business has become too complex and is moving too rapidly for boards and CEOs to make good decisions without intelligent systems.
We believe that the solution to this complexity will be to incorporate AI in the practice of corporate governance and strategy. This is not about automating leadership and governance, but rather augmenting board intelligence using AI. Artificial intelligence for both strategic decision-making (capital allocation) and operating decision-making will come to be an essential competitive advantage, just like electricity was in the industrial revolution or enterprise resource planning software (ERP) was in the information age.
For example, AI could be used to improve strategic decision-making by tracking capital allocation patterns and highlighting concerns — such as when the company is decreasing spending on research and development while most competitors are increasing investment — and reviewing and processing press releases to identify potential new competitors moving into key product markets and then suggesting investments to protect market share. AI could be used to improve operational decision-making by analyzing internal communication to assess employee morale and predicting churn, and by identifying subtle changes in customer preference or demographics that may have product or strategy implications.
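As a concrete, if simplified, picture of the first use case, here is a hypothetical sketch (all names, thresholds and data are invented for illustration; nothing here describes an actual product) of a rule that flags a company cutting R&amp;D spend while most peers increase theirs:

```python
# Hypothetical sketch of an R&D-spend early-warning rule; illustrative only.

def rd_spend_warning(company_trend: float, peer_trends: list[float]) -> bool:
    """Warn when our R&D spend falls while a majority of peers raise theirs.

    Trends are year-over-year fractional changes (e.g. -0.05 means a 5% cut).
    """
    peers_increasing = sum(t > 0 for t in peer_trends)
    return company_trend < 0 and peers_increasing > len(peer_trends) / 2

# Our spend fell 8% while three of four peers increased theirs -> flag it.
print(rd_spend_warning(-0.08, [0.03, 0.10, -0.01, 0.05]))  # True
```

A production system would of course ingest real filings and press releases rather than hand-fed numbers, but the shape of the signal is the same.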
The Medical Model: Advances That Have Enabled AI in Health Care
What will it take for boards to get on board with AI supplements? If we go back to the health care analogy, there have been three technological advances that have been essential for the application of AI in the medical field:
The first advance is an enormous body of data. From the mapping of the human genome to the accumulation and organization of databases of clinical research and diagnoses, the medical world is now awash in vast, valuable new sources of information.
The second advance is the ability to quantify an individual. Improvements in mobile technology, sensors, and connectivity now generate extraordinarily detailed insights into an individual’s health.
The third advance is the technology itself. Today’s AI techniques can assimilate massive amounts of data and discern relevant patterns and insights — allowing the application of the world of health care data to an individual’s particular health care situation. These techniques include advanced analytics, machine learning, and natural language processing.
As a result of the deployment of intelligent systems in health care, doctors can now map a patient’s data, including what they eat, how much they exercise, and what’s in their genetics; cross-reference that material against a large body of research to make a diagnosis; access the latest research on pharmaceuticals and other treatments; consult machine-learning algorithms that assess alternative courses of action; and create treatment recommendations personalized to the patient.
Three Steps Companies Can Take to Bring AI Into the Boardroom
A similar course will be required to achieve the same results in business. Although not a direct parallel to health care, companies have their own components — people, assets, history — which could be called the corporate genome. In order to effectively build an AI system to improve corporate decision-making, organizations will need to develop a usable genome model by taking three steps:
Create a body of data by mapping the corporate genome of many companies and combine this data with their economic outcomes;
Develop a method for quantifying an individual company in order to assess its competitiveness and trajectory through comparison with the larger database; and
Use AI to recommend a course of action to improve the organization’s performance — such as changes to capital allocation.
Just as physicians use patient data to create individualized medical solutions, emerging intelligent systems will help boards and CEOs know more precisely what strategy and investments will provide exponential growth and value in an increasingly competitive marketplace. Boards and executives with the right competencies and mental models will have a real leg up in figuring out how to best utilize this new information. While technology is growing exponentially, leaders and boards are only changing incrementally, leaving many legacy organizations further and further behind. It’s time for leaders to courageously admit that, despite all their years of experience, AI belongs in the boardroom.
We’re all intelligent in multiple and varying ways, and we can grow those intelligences, too.
People have a wide range of capacities. What if, instead of asking, “How smart am I?” we encouraged kids to ask, “How am I smart?”
Here, we provide an overview of Project Zero’s work on intelligence — along with ways that educators can bring these ideas into their own classrooms.
Intelligence is Multiple
People have a wide range of capacities, and there are many ways to be smart. In his foundational work on multiple intelligence theory, educational psychologist and Project Zero pioneer Howard Gardner has identified eight distinct intelligences:
Verbal
Logical/mathematical
Bodily-kinesthetic
Musical
Spatial
Interpersonal
Intrapersonal
Naturalistic
Everyone possesses all of these intelligences, but we also each have unique strengths and weaknesses. Some people have strong verbal and musical intelligence but weak interpersonal intelligence; others may be adept at spatial recognition and math but have difficulty with bodily-kinesthetic intelligence. And everyone is different; strength in one area does not predict strength in any other. These intelligences can also work together. Different tasks and roles usually require more than one type of intelligence, even if one is more clearly highlighted. Furthermore, we can exhibit our intelligences through our ideas, creations, and performances — but test scores do not necessarily measure any sort of intelligence.
For educators, the lesson here is that students learn differently, and express their strengths differently. “If we all had exactly the same kind of mind and there was only one kind of intelligence, then we could teach everybody the same thing in the same way and assess them in the same way and that would be fair,” Gardner has said. “But once we realize that people have very different kinds of minds, different kinds of strengths … then education, which treats everybody the same way, is actually the most unfair education.”
Intelligence is Learnable
These multiple intelligences are not fixed or innate. They’re partially the result of our neural system and biology, but they also develop through our experiences and through our ability to persist, imagine, and reflect. Learning expert Shari Tishman and her Project Zero colleagues have highlighted seven key critical thinking mindsets that can set us up to effectively learn and think in today’s world:
Being broad and adventurous
Wondering, problem finding, and investigating
Building explanations and understandings
Making plans and being strategic
Being intellectually careful
Seeking and evaluating reasons
Being metacognitive
By embracing these mindsets, we can actually shape and cultivate our intelligences. For example, being open-minded and careful in our thinking, as opposed to being closed-minded and careless, can be predictive of flexing and growing our intelligences.
Exactly how asthma begins and progresses remains a mystery, but a team of Harvard Medical School researchers has uncovered a fundamental molecular cue that the nervous system uses to communicate with the immune system, which may potentially trigger allergic lung inflammation leading to asthma. Their insights into this neuro-immune cross talk are published Sept. 13 in Nature.
“Our findings help us understand how the nervous system is communicating with the immune system, and the consequences of it,” said co-senior author Vijay Kuchroo, the HMS Samuel L. Wasserstrom professor of neurology and senior scientist at Brigham and Women’s. The team included researchers at Harvard Medical School, Brigham and Women’s Hospital, and the Broad Institute of MIT and Harvard. Kuchroo is also an associate member of the Broad and the founding director of the Evergrande Center for Immunologic Diseases of HMS and Brigham and Women’s.
“What we’re seeing is that neurons in the lungs become activated and produce molecules that convert immune cells from being protective to being inflammatory, promoting allergic reactions,” he said.
The research team—led by Patrick Burkett, HMS instructor in medicine and a pulmonologist and researcher at Brigham and Women’s; Antonia Wallrapp, an HMS visiting graduate student in neurology at the Evergrande Center; Samantha Riesenfeld, HMS research fellow in neurology in the Klarman Cell Observatory (KCO) at the Broad; Monika Kowalczyk of the KCO; Aviv Regev, Broad core institute member and KCO director; and Kuchroo—closely examined lung-resident innate lymphoid cells (ILCs), a type of immune cell that can play a role in maintaining a stable environment and barrier in the lungs but can also promote the development of allergic inflammation.
Single-cell RNA sequencing
Using a technique known as single-cell RNA sequencing, the team explored more than 65,000 individual cells that exist under normal or inflammatory conditions, looking for genes that were more active in one state or subpopulation versus another.
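As a rough illustration of what "looking for genes that were more active in one state versus another" can mean computationally, here is a toy differential-expression sketch on simulated counts (a generic approach under invented data, not the authors' actual pipeline):

```python
# Toy differential-expression sketch on simulated single-cell counts.
# Generic method for illustration; not the study's actual analysis pipeline.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
n_cells, n_genes = 500, 100
normal = rng.poisson(2.0, size=(n_cells, n_genes))
inflamed = rng.poisson(2.0, size=(n_cells, n_genes))
inflamed[:, 7] += rng.poisson(3.0, size=n_cells)  # make gene 7 truly up-regulated

# Per-gene rank-sum test: is each gene more active in the inflamed cells?
pvals = np.array([
    mannwhitneyu(inflamed[:, g], normal[:, g], alternative="greater").pvalue
    for g in range(n_genes)
])
print("top differentially active gene:", pvals.argmin())  # -> 7
```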
“By surveying thousands of individual cells, we were able to define the transcriptional landscape of lung-resident ILCs, observing changes in discrete subpopulations,” said Kowalczyk.
“To really understand the puzzle that is allergy and asthma, we need to closely examine each of the pieces individually and understand how they fit together into an ecosystem of cells,” said Regev. “That’s what single-cell analysis lets you do. And when you look this closely, you find that pieces that you thought were quite similar are subtly but profoundly different. Then you start to see where each piece really goes.”
Among many distinguishing genes they found, one in particular stood out: Nmur1, a receptor for the neuropeptide NMU.
In laboratory and animal model experiments, the team confirmed that NMU signaling can significantly amplify allergic inflammation when high levels of alarmins—molecules known to trigger immune responses—are present.
The team also observed that ILCs co-localized with nerve fibers in the lung. Neurons in the lung can induce smooth muscle contractions that manifest as coughing and wheezing, two central symptoms of asthma.
Coughing and inflammation
“Coughing is something regulated and controlled by the nervous system, so it’s intriguing that our findings point to a role for NMU, which can induce both smooth muscle contraction and inflammation,” said Burkett.
Interestingly, two additional Nature papers released simultaneously with the Regev and Kuchroo team’s study revealed that ILC2 cells in the gut also express Nmur1, take on an inflammatory state when exposed to NMU and live in close proximity to NMU-producing nerve cells.
“We anticipate that the NMU-NMUR1 pathway will also play a critical role in amplifying allergic reactions in the gut and promoting the development of food allergies,” said Kuchroo.
In addition to uncovering a novel neuro-immune pathway that leads to inflammation, the team hopes its findings will point to new therapeutic approaches for preventing or treating allergic asthma.
“We may have identified a way of blocking allergic lung inflammation by controlling neuropeptide receptors,” said Riesenfeld. “This work represents a mechanistic insight that could lead to the development of a new therapeutic approach for preventing asthma.”
“All forms of allergy and inflammation involve complex interactions between many cells and tissues,” Regev added. “Working collaboratively to identify and catalog all these various players and listening to what they say to each other can teach us surprising things about how allergies work and show us new opportunities to intervene.”
Support for this study was provided by the Food Allergy Science Initiative; the Klarman Family Foundation; the National Institute of Allergy and Infectious Diseases; the National Heart, Lung, and Blood Institute; the Howard Hughes Medical Institute; and other sources.
Technique has potential to help reverse the most common type of disease-associated mutations.
Scientists at Harvard University and the Broad Institute of MIT and Harvard have developed a new class of DNA base editor that transforms A•T base pairs into G•C base pairs, allowing repair of the type of mutation that accounts for half of known human disease-associated point mutations, and could one day be used to treat many common genetic diseases. These mutations are associated with disorders ranging from genetic blindness to sickle-cell anemia to metabolic disorders to cystic fibrosis.
A team of researchers led by David Liu, professor of chemistry and chemical biology at Harvard University and a core institute member of the Broad, developed an adenine base editor (ABE) capable of rearranging the atoms in a target adenine (A), one of the four bases that make up DNA, to resemble guanine (G) instead, and then tricking cells into fixing the other DNA strand to make the change permanent. The result is that what had been an A•T base pair is changed to a G•C base pair. The new system is described in a paper published online in the journal Nature.
In addition to Liu, the study was led by Nicole Gaudelli, a postdoctoral fellow in Liu’s lab; Alexis Komor, a former postdoctoral fellow in Liu’s lab who is now an assistant professor at the University of California, San Diego; graduate student Holly Rees; and former postdoctoral fellows Ahmed H. Badran and David I. Bryson.
The new system transforms A•T base pairs into G•C base pairs at a target position in the genome of living cells with surprising efficiency, the researchers said, often exceeding 50 percent, with virtually no detectable byproducts such as random insertions, deletions, translocations, or other base-to-base conversions. The adenine base editor can be programmed by researchers to target a specific base pair in a genome using a guide RNA and a modified form of CRISPR-Cas9 that no longer cuts double-stranded DNA.
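At the sequence level, the editor’s effect can be illustrated with a toy Python function. The sequence, guide, and offset below are invented for illustration; real base editors operate through chemistry on living cells, not string substitution.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def base_edit(sequence: str, guide: str, offset: int) -> str:
    """Toy model: convert the A at guide-relative `offset` to G."""
    site = sequence.find(guide)  # the guide determines where editing happens
    if site == -1:
        raise ValueError("guide does not match the sequence")
    pos = site + offset
    if sequence[pos] != "A":
        raise ValueError("no A at the target position")
    # A is chemically rearranged to resemble G (via inosine); the cell then
    # repairs the partner strand, turning the paired T into C.
    return sequence[:pos] + "G" + sequence[pos + 1:]

seq = "TTGACCAATGGTACGGATC"
edited = base_edit(seq, guide="CCAATGGTAC", offset=3)
print(edited)                        # TTGACCAGTGGTACGGATC
print(edited.translate(COMPLEMENT))  # partner strand: the paired T is now C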
Being able to make this type of conversion is particularly important because approximately half of the 32,000 disease-associated point mutations already identified by researchers are a change from a G•C base pair to an A•T base pair.
Liu said that particular change is unusually common in part because about 300 times a day in every human cell, a spontaneous chemical reaction converts a cytosine (C) base into uracil (U), which behaves like thymine (T). While there are natural cellular repair mechanisms to fix that spontaneous change, the machinery is not perfect and occasionally fails to make the repair. The result can be the mutation of the G•C base pair to an A•U or A•T base pair, which can lead to certain genetic diseases.
“Because of this slight chemical instability of the Cs in our genome, about 50 percent of pathogenic point mutations in humans are of the type G•C to A•T,” Liu said. “What we’ve developed is a base editor, a molecular machine, that in a programmable, irreversible, efficient, and extremely clean way can correct these mutations in the genome of living cells. For some target sites, that conversion reverses the mutation that is associated with a particular disease.”
A major addition to genome-editing technologies, the adenine base editor joins other base-editing systems recently developed in Liu’s lab, such as BE3 and its improved variant, BE4. Using these base editors, researchers can now correct all the so-called “transition” mutations — C to T, T to C, A to G, or G to A — that together account for almost two-thirds of all disease-causing point mutations, including many that cause serious illnesses that currently have no treatment. Additional research is needed to enable the adenine base editor to target as much of the genome as possible, as Liu and his students previously did through engineering variants of BE3.
At first glance, Liu said, it might appear as though developing the adenine base editor would be a straightforward process: Simply replace the enzyme in BE3 that performs the “chemical surgery” to transform C into U with one that could convert A into I (inosine), a nucleotide that behaves similarly to G. Unfortunately, he said, there is no such enzyme that works in DNA, so Liu and colleagues made the unusual choice to evolve their own DNA adenine deaminase, a hypothetical enzyme that would convert A to I in DNA.
“This wasn’t a small decision, because we’ve had a longstanding rule in the lab that if step one of your project is to evolve the starting material that’s needed for the rest of the project to begin, that’s not a very good project, because it’s really two major projects,” Liu said. “And if you have to spend years just to get the starting material for the rest of your project, that’s a tough road.
“In this case, we felt the potential impact was significant enough to break the rule, and I’m very fortunate that Nicole [Gaudelli] was brave enough to take on the challenge.”
The stakes were particularly high for Gaudelli, Liu said, “because if we weren’t able to complete step one and evolve a DNA adenine deaminase, then step two wouldn’t go anywhere, and we would have little to show for all the work.”
“Protein evolution is still largely an art as much as it is a science,” Liu said. “But Nicole has amazing instincts about how to interpret the results from each stage of protein evolution, and after seven generations of evolution, she succeeded in evolving a high-performance A base editor, which we call ABE7.10.”
The road that led to the adenine base editor required more than just evolving the starting material. After a year of work and several initial attempts that resulted in no detectable DNA editing of A•T base pairs, the team began to see the first glimmers of success, Liu said. Following three rounds of evolution and engineering, the adenine base editors were working deceptively well, until the team discovered that the system would only work on certain DNA sequences.
“At that point we could have pulled the trigger and reported a base editor that works well only at certain sites, but we thought the sequence requirements would really limit its usefulness and discourage others from moving the project forward, so we went back to the well of evolution. We changed the selections to force a base editor that would process all sites, regardless of their sequence,” Liu said. “That was a tough call, because at that point we had been working well over a year on the project, and it was very exciting that we were seeing any base editing on A•T base pairs in DNA at all.”
The team restarted its efforts with several additional rounds of evolution and engineering, now testing their adenine base editors against 17 genetic sequences that included all possible combinations of DNA bases surrounding the target A, Liu said. The final ABE7.10 variant edited sites with an average efficiency of 53 percent, and produced virtually no unwanted products.
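As a quick sanity check on that coverage claim: varying just the single bases immediately flanking the target A already requires 16 sequences (4 × 4 combinations), which the team’s 17 test sequences would more than cover. A purely illustrative Python enumeration:

from itertools import product

# Every combination of one upstream and one downstream base around a target A.
contexts = ["".join((up, "A", down)) for up, down in product("ACGT", repeat=2)]
print(len(contexts))  # 16
print(contexts[:4])   # ['AAA', 'AAC', 'AAG', 'AAT']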
To demonstrate the adenine base editor’s potential, Liu and colleagues used ABE7.10 to correct a mutation that causes hereditary hemochromatosis in human cells. They also used ABE7.10 to install a disease-suppressing mutation in human cells, recreating the so-called “British mutation” found in healthy individuals who would otherwise be expected to develop blood diseases like sickle cell anemia. The mutation causes fetal hemoglobin genes to remain active after birth, protecting carriers from those diseases.
While the adenine base editor is an exciting advance in base editing, more work remains before the technique can be used to treat patients with genetic diseases, including tests of safety, efficacy, and side effects.
“Creating a machine that makes the genetic change you need to treat a disease is an important step forward, but it’s only one part of what’s needed to treat a patient,” Liu said. “We still have to deliver that machine, we have to test its safety, we have to assess its beneficial effects in animals and patients and weigh them against any side effects. We need to do many more things. But having the machine is a good start.”
Scientists at Wesleyan University have used electroencephalography to uncover differences in how the brains of Classical and Jazz musicians react to an unexpected chord progression. Their new study, published in the journal Brain and Cognition, sheds new light on the nature of the creative process.
“I have been a classical musician for many years, and have always been inspired by the great jazz masters who can improvise beautiful performances on the spot,” explained study author Psyche Loui. “Whenever I tried to improvise I always felt inhibited and self-conscious, and this spurred my questions about jazz improvisation as a model for creativity more generally: What makes people creative improvisers, and what can this tell us about how we can all learn to be more creative?”
The researchers used EEG to compare the electrical brain activity of 12 Jazz musicians (with improvisation training), 12 Classical musicians (without improvisation training), and 12 non-musicians while they listened to a series of chord progressions. Some of the chords followed a progression that was typical of Western music, while others had an unexpected progression. Loui and her colleagues found that the Jazz musicians had a significantly different electrophysiological response to the unexpected progression, indicating an increased perceptual sensitivity to unexpected stimuli along with an increased engagement with unexpected events.
“Creativity is about how our brains treat the unexpected,” Loui told PsyPost. “Everyone (regardless of how creative) knows when they encounter something unexpected. But people who are more creative are more perceptually sensitive and more cognitively engaged with unexpectedness. They also more readily accept this unexpectedness as being part of the vocabulary.
“This three-stage process: sensitivity, engagement, and acceptance, occurs very rapidly, within a second of our brains encountering the unexpected event. With our design we can resolve these differences and relate them to creative behavior, and I think that’s very cool.”
Previous research has found that Jazz improvisers and other creative individuals show higher levels of openness to experience and divergent thinking — meaning the ability to “think outside the box.” But without additional research it is unclear whether the new findings apply to other creative individuals who are not musicians.
“We looked at three groups of subjects: jazz musicians, classical musicians, and people with no musical training other than normal schooling, so the results are most closely tied to musical training. It remains to be seen whether other types of creative groups, e.g. slam poets, cartoonists, interpretive dancers, etc. might show the same results,” Loui explained. “It would also be important to find out whether these differences emerge as a result of training, or whether they reflect pre-existing differences between people who choose to pursue training in different styles. We are currently conducting a longitudinal study to get at that question.”
“This is the first paper of a string of research coming from our lab that uses different methodologies to understand jazz improvisation,” Loui added. “We are also doing structural and functional MRI, as well as more behavioral testing, including psychophysical listening tests and also production tests, where we have people play music in our lab.”
The study, “Jazz musicians reveal role of expectancy in human creativity,” was co-authored by Emily Przysinda, Tima Zeng, Kellyn Maves, and Cameron Arkin.
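For readers curious what such an event-related contrast looks like in practice, here is a minimal sketch using the open-source MNE-Python library. The recording file and trigger codes are invented, and this is not the study’s actual analysis pipeline.

import mne

# Hypothetical EEG recording of one listener hearing the chord progressions.
raw = mne.io.read_raw_fif("musician_eeg.fif", preload=True)
events = mne.find_events(raw)

# Assumed trigger codes marking the final chord of each progression type.
event_id = {"expected": 1, "unexpected": 2}
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=-0.2, tmax=0.8,
                    baseline=(None, 0), preload=True)

# Average each condition and subtract: group differences in this difference
# wave are the kind of effect the study reports.
evoked_diff = mne.combine_evoked(
    [epochs["unexpected"].average(), epochs["expected"].average()],
    weights=[1, -1],
)
evoked_diff.plot()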
Probiotic bacteria in yogurt influence the balance of gut microbiota, which is associated with behavioral changes. This effect can be explained by the existence of a gut-brain axis.
Yogurt consumption increases the ingestion of probiotic bacteria, in particular Lactobacilli and Bifidobacteria, and may therefore affect the diversity and balance of human gut microbiota. Previous research found that changes in gut microbiota modulate the peripheral and central nervous systems, resulting in altered brain functioning, and may have an impact on emotional behavior, such as stress and anxiety.
Gut-brain axis
The described effect suggests the existence of a gut-brain axis. Because of the bidirectional communication between the nervous system and the immune system, the effects of yogurt bacteria on the nervous system cannot be separated from effects on the immune system.
Researchers suggest that the communication between gut microbiota and the brain can be influenced by the intake of probiotics, which may reduce levels of anxiety and depression and affect the brain activity that controls emotions and sensations. Autism patients often suffer from gastrointestinal abnormalities, and viral infections during pregnancy can have long-term effects; some evidence suggests these effects might be reversed through consumption of specific bacteria, also found in yogurt.
As the composition of gut microbiota differs from one individual to another, changes in the balance and content of common gut microbes affect the production of the short-chain fatty acids butyrate, propionate, and acetate.
These fermentation products improve host metabolism by stimulating glucose and energy homeostasis, regulating immune responses and epithelial cell growth, and also supporting the functioning of the central and peripheral nervous systems.