
The Science of Making Good Decisions 01-05


The world is full of disagreements. However, one thing that everyone can agree upon is that at some point in our lives we’ve all made a decision we regret. If someone tells you that they’ve never made a bad decision, most likely they’re either lying or have convinced themselves that their bad decision was good.

Life would be easier if we intrinsically knew how to make good decisions. We would find more success in our careers and personal lives. However, history is filled with bad decision-making. When doctors said they were “completely certain” about a patient’s diagnosis, 40% of the time they were wrong, a study found.

In 1975, the Eastman Kodak Company held the majority share of the US film market. Kodak decided to hold off on sharing its development of the world's first digital camera because it feared that doing so would destroy its film business. Then, in the 1980s, Kodak passed on becoming the official film of the 1984 Olympics. Fuji received the honor and from there became a major player in the US marketplace. In 2012, Kodak filed for bankruptcy.

It’s not that people aren’t capable of making good decisions, but that they don’t use the correct decision-making methodology. Having sound gut instincts is great, but that’s only a start. Taking the time to define the problem, recognize how emotions affect decisions, learn how to utilize emotions for better decision making, and know when enough is enough are all parts of making good decisions.

Define the Problem

It used to be difficult to find information. There was no Internet, no 24/7 news cycle, no cellphones, no social media. You couldn't browse research databases from your couch to find what you needed.

In today’s world, there’s a glut of information. It’s easy to become lost in that information. It’s easy to make mistakes, because all that extra information gets in the way of you seeing the real issue.
The book Blink: The Power of Thinking Without Thinking shows how too much information can harm people. It makes the case that when doctors are attempting to diagnose patients, "extra information is more than useless. It's harmful." Doctors need to know the pertinent information for that patient, not every piece of data that exists on a particular diagnosis. Knowing too much overcomplicates and confuses an issue.

When making good decisions, it’s not about knowing as much information as you can; it’s about knowing the right information. Knowing the correct information helps you to identify the root problem, and solve it.

A quality of a good CEO is the ability to weed through the massive amount of information in the world and identify the pertinent information in a timely manner. This ability takes objectivity. A CEO cannot be so swayed by his emotions, his opinions, and his beliefs that he ignores the facts. However, all CEOs are human, and since all humans experience emotions, all decisions humans make are to some extent affected by emotions.

Recognize Emotions

Emotions are an intrinsic part of being human. This means that no matter how hard we try to keep subjectivity out of decision-making, we will not succeed. What we can do is learn how to recognize emotions and use them to our advantage.

One of the major pitfalls of emotions is that we aren’t all that great at controlling our gut reactions. We’ve all regretted doing something in the heat of the moment. Our rational self vanished and we did or said something that was detrimental. This immediate and overpowering emotional response to some stimulus is known as an amygdala hijack.

In the professional world, an amygdala hijack could destroy a person’s career.

Take a look at Eliot Spitzer. He was the governor of New York until he resigned because of involvement in a prostitution ring. Spitzer had graduated from Princeton and Harvard Law School. Before becoming governor, he'd been an aggressive and successful prosecutor of corporate fraud and organized crime. But his emotions, his need for pleasure, overwhelmed his rational thinking. He was hijacked by his amygdala, the emotional center of the brain, made poor decisions, and lost everything.

An amygdala hijack doesn’t just affect the person experiencing the uncontrolled gut reaction. Emotions are contagious. They spread from person to person. If the CEO of a company is hijacked, his emotional outburst may influence his co-workers and subordinates. The company’s ability to make good decisions may decrease, as collaboration among employees deteriorates.
A person's ability to control his emotions is central to making good decisions. However, focusing solely on logic, paying attention only to flow charts, market movements, risk versus reward, and statistical trends, can itself lead to deeply flawed reasoning and poor decision making.
The Great Depression resulted from a variety of factors, including the 1929 stock market crash, the failure of over 9,000 banks in the 1930s, a vast reduction in consumer purchasing, high tariffs on imports, and the 1930 Mississippi Valley drought that left the area with the nickname "The Dust Bowl."

Similarly, the rapid decline of household consumption and housing construction contributed to the Great Recession. Starting in the 1980s, a massive, unsustainable boom in consumer spending occurred while income growth largely stagnated. People borrowed more and more money, indebting themselves through novel mortgage lending plans, until 2006, when interest rates rose, refinancing opportunities fell, lender credit dried up, and homeowners defaulted on their mortgages. Consumption stalled and American society cut back spending sharply. The result was a steep decline in demand, and the Great Recession.

In the aftermath of both the Great Depression and the Great Recession came the realization that denial, greed, and lack of awareness had been widespread. People, whether bankers or consumers, were so busy wanting more that they were blind both to the long-term implications of their seemingly logical choices and to the unexamined emotions pushing them to make and have more. By understanding and embracing emotion, you can make better decisions.

Utilize Emotions

Remaining calm is essential for good decision making. Once you realize what emotions are wrapped up in the problem you’re facing, you cannot let them overpower a well thought-out decision.
In 2009, over 61,000 tattoos were removed in the United States. Many of those tattoos had been the result of an emotional reaction. People didn't think about whether or not they should get a tattoo. They let their short-term emotions take the reins and lost the ability to think clearly, instead of using their emotions to bolster their logical reasoning.

Some of the most successful decision makers are samurai and Special Forces operators. Why? Because they know how to stay calm and in control.

Samurai train as much mentally as they do physically because they believe that people should exhibit calmness both in battle and in everyday life. Situations should be approached with awareness and alertness, as well as an unbiased attitude.

Special Forces are extremely selective. They look for individuals who are not only skilled and determined, but also emotionally stable. Navy SEALs use four techniques to increase recruits' chances of passing their program and of making good decisions in the field.


Self-awareness is “having a clear perception of your personality, including strengths, weaknesses, thoughts, beliefs, motivation, and emotions.” Without self-awareness, you cannot empathize and without empathy, you cannot understand other people. Without understanding other people, you can’t identify what motivates them, how they’ll respond, and what opportunities exist. Without these things, you cannot make good decisions.

One very important emotion is empathy. Empathy is “the ability to sense other people’s emotions, coupled with the ability to imagine what someone else might be thinking or feeling.” To empathize, you must know yourself. You must possess self-awareness.

As the Harvard Business Review stated, “Executives who fail to develop self-awareness risk falling into an emotionally deadening routine that threatens their true selves. Indeed a reluctance to explore your inner landscape not only weakens your own motivation but can also corrode your ability to inspire others.”

Nobel Prize winner Daniel Kahneman developed a theory explaining why people don’t make 100% rational economic decisions. Kahneman stated that there are two overarching thought processes:
“System 1” is the “automatic, intuitive mind.” It carries out the majority of people’s everyday activities. It’s the “going with your gut,” and what you perceive to be correct. System 1 is efficient, but not always accurate.

“System 2” is the “controlled, deliberative, analytical mind.” It performs the functions that System 1 cannot. System 2 requires a great deal of focus, but can sometimes make up for System 1’s inaccuracy.

While System 1 doesn’t involve actively thinking about what you’re doing—you already know how to walk and make toast, so you don’t have to deliberately focus on those activities—System 2 requires you to produce certain thoughts that enable you to perform a particular function.

However, System 1 causes people to form snap judgments about others, because System 1 contains emotions and biases. When you meet someone, you form an opinion of them within the first few seconds. Say you shake someone’s hand and the handshake is strong, so you believe that the person you’re meeting is confident. This opinion may be correct, but oftentimes it isn’t. You need more information before forming an accurate opinion. This is where System 2 comes in. System 2 forces you to step back, slow down, and analyze the situation, so that you’re less likely to make a rash decision.

Follow this link to take a quiz from Kahneman’s book, Thinking, Fast and Slow, to see if you’re immune to logical inaccuracies: Thinking, fast or slow: can you do it?

Know When Enough Is Enough

After identifying the problem, recognizing emotions, and utilizing emotions, you have to know when to make a decision. In life, instances where decisions don't need to be made are few and far between, and when a decision must be made, there's usually a deadline attached.

Striving to make the perfect decision will lead to stress and the feeling of being overwhelmed, and if you’re overwhelmed, your ability to make good decisions decreases drastically.

As James Waters, a former Deputy Director of Scheduling at the White House, stated: "Being able to make decisions when you know you have imperfect data is so critical."

Waters was instructed that "A good decision now is better than a perfect decision in two days." It's a lesson he's carried with him, and one that many analysts find hard to accept.

In the business world, analysts play a huge role. Their job is to collect and analyze as much data as possible to produce the best result for a particular company. However, Waters encourages "people to make a decision with imperfect information." He believes that the ability to make decisions with incomplete data is "really important for leaders to incorporate. It's something that the White House has to do all the time. It's great to analyze things but at some stage you're just spinning your wheels."


National Employability Report 02-10





EXECUTIVE SUMMARY

The key findings of the present study are as follows:

No significant improvement in employability in the last four years

We did the previous large-scale study of the employability of engineers in 2014. We had found that only 18.43% of engineers were employable in the software services sector, 3.21% in software products, and 39.84% in a non-functional role such as Business Process Outsourcing.

Unfortunately, we see no meaningful progress in these numbers. Today they stand at 17.91%, 3.67% and 40.57% for IT Services, IT Products and Business Process Outsourcing respectively, even though the number of engineering seats has not increased in the past year. We are not inferring that all initiatives for employability improvement have failed; pockets of excellence may well exist. However, the need of the hour is to find these pockets and scale them up to make an exponential impact on employability. This is crucial for India to continue its growth story and achieve the PM's vision of India becoming the human resource provider for the whole world.

Only 3.84% employable for startup software engineering jobs

Investment in and the growth of technology startups are the new business story in India. Ratan Tata recently said that India is becoming the Silicon Valley of the 1990s. To sustain this growth, we need candidates with higher technology caliber, an understanding of new products and requirements, and the attitude to work in a startup. With this in mind, we specifically captured employability for startup technology roles this time. Unfortunately, we find that only 3.84% of engineers qualify for a startup technology role. This is a big concern and will surely hamper the growth of startups in India. It may also dilute the market with low-quality products.

More aspiration to work for startups

Last year, we found that 6% of students were interested in working for a startup. This year the figure is 8%, a 33% increase. Students from tier 1 colleges are the most motivated to work in startups, and males are strikingly more inclined than females to do so. While this is good news, there is still a long way to go, as only a handful of candidates (8%) are interested in working for startups.

Higher salary aspiration and higher salary for the same skill

This year, we find that students have higher salary aspirations: the median aspiration has risen from INR 310 thousand last year to INR 340 thousand. The market is also paying more. The median salary for the same skill was INR 282 thousand last year and is INR 313 thousand this year. Talent is getting more expensive, and we believe this is due to the huge demand for manpower in the technology sector and a lack of supply. However, it is important to note that this supply is artificially low: more than 25% of employable candidates are beyond the top 750 engineering colleges. Companies miss out on this pool of candidates, and to make sure the war for talent doesn't drive salaries out of control, we need to find ways of better meritocratic matching of students with jobs.

View the entire report here

Canadian Prime Minister Justin Trudeau, now Rock star of LinkedIn Marketing too...02-12


A version of this post originally appeared on LinkedIn. All information in this post is publicly accessible and does not use any private or confidential data. Opinions and views expressed are Ms. Urbanski's and do not necessarily represent LinkedIn Corporation. 







Recently, Canada's Prime Minister, Justin Trudeau, surpassed 2 million followers on LinkedIn, a year after becoming an official Influencer and two months after being named one of LinkedIn's Top Voices in 2017. The engagement he's garnered with LinkedIn's members proves that content related to the economy, government news, and politics has a place on our platform, in our newsfeed and with our members.

But as savvy as Trudeau’s social team was, they were new to using LinkedIn and experienced the same learning curve that any of us do when we first start using a new platform. This post shares what I have learned working with Trudeau’s team over the past year and how you can make the most out of using LinkedIn’s platform.

Post often

We saw the most success during periods when Trudeau was posting every few days. Letting a couple of weeks pass between updates limits momentum and your ability to stay top-of-mind with your network. Social media channels have algorithms that reward high engagement, so you always want your voice circulating throughout your network through a mix of posts, comments, questions, likes, and shares.

Tag people or companies that you mention

We saw the biggest spike in followers and engagement when Trudeau promoted his meeting with Microsoft CEO Satya Nadella in May 2017 and tagged both him and Brad Smith in the post. We saw the same effect on his most recent post about meeting with Satya at the World Economic Forum. Tagging helps members discover who other people are and notifies those you mention, increasing the chance that they will engage with the post and that it will be seen by their networks (as happened when LinkedIn CEO Jeff Weiner and Satya Nadella liked and commented on Trudeau's posts, exposing them to their 7 million and 3 million followers, respectively).

Use hashtags

Trudeau’s team loves to use relevant hashtags to promote their work on important issues or associate themselves with world events. Using hashtags makes your content discoverable and helps you to join popular conversations. Some great examples are #WEF18, #GEW2017, #GoNorth17 and #CPTPP.

Take the time to find strong images

Rich media and photographs perform better than generic images. In the early days of posting on LinkedIn, Trudeau would share links from his official press releases, which pulled in a generic image of Canada's coat of arms. You can see a significant difference in engagement between these posts and the high-quality images of him meeting with world leaders, speaking at an event or showcasing the cities he visits across Canada.

Use LinkedIn’s native video feature

People love video. They love to see who you truly are, what you're doing and how you sound. It makes them feel like they're part of the moment or getting a sneak peek behind the scenes. Prime Minister Trudeau was officially the first world leader to use LinkedIn's native video feature, which was released in May 2017, and the engagement was off the charts. In his video he spoke directly to his LinkedIn followers, thanking them for their engagement and asking what they want to hear more of. The team then used that feedback, the comments members left and the topics they were most interested in, to guide some of the content they posted moving forward.

Be authentic

Posting content on a professional platform like LinkedIn doesn't mean you have to be a robot. We see a huge difference in interest and engagement when Trudeau posts with an authentic voice, using our native video feature or showcasing his meetings across Canada and the world, versus scripted long-form posts like his official PMO press releases. Most members don't have the same intense public pressure weighing on their shoulders as our PM, so you have an even greater opportunity to showcase yourself as an individual. I know this part can sometimes be scary but don't worry, your network will do a great job of supporting you and making you feel good. Just try and see!

Customize your URL

Ever noticed the weird mix of numbers and letters following your name in your LinkedIn URL? You can easily edit this on the right-hand rail of your LinkedIn profile under the "Contact and Personal Info" section. Creating a custom URL allows your content to easily be found in online search tools and can be included in your email signature, on business cards or other resources to increase traffic to your LinkedIn page.

But, let's take things to the next level!

The success Trudeau and his team have seen in using LinkedIn has been amazing! The climb to more than 2 million followers has been fun to watch. But there are a few things that Prime Minister Trudeau could be doing more of (and maybe you can too!).

Engage with other people's content

Engaging with other people's content by liking, sharing or commenting helps to expand the conversation and the relationships you have with your network. It also helps to grow your professional brand and increase your followers. Engagement actions help to bring an idea or topic to life and that's when the real energy of networking happens. You'd be amazed at how many new contacts I’ve made on LinkedIn by commenting on their posts or sharing their content.

Add all your business contacts

As you can tell through his updates, our Prime Minister has a very busy travel schedule where he meets with tons of world and business leaders. After these meetings, he should use LinkedIn to connect with these contacts and easily maintain their working relationship. Adding these connections to your network allows you to communicate news and updates at scale – which is a great efficiency tip for those who are as busy as the PM! (and even those who aren’t).

Follow other thought leaders

While Trudeau is one of the most followed world leaders on LinkedIn, there are many others posting great content related to some of today's most important issues. Trudeau has a great opportunity to learn from what other thought leaders are doing, the type of content they're posting, and the engagement they see on various topics. Some of my favourite government thought leaders include Australian Prime Minister Malcolm Turnbull; President of the French Republic Emmanuel Macron; CIO for the Government of Canada Alex Benay; Canada's Minister of Innovation, Science and Economic Development Navdeep Bains; India's Prime Minister Narendra Modi; Ontario Premier Kathleen Wynne; and Mississauga's Mayor Bonnie Crombie.

As a Canadian, I am grateful to have a Prime Minister who uses modern techniques to connect with people across our country and openly share news and information. I believe that we can all benefit from a similar strategy: becoming better professionals and more effective leaders, advancing our careers, achieving our goals, and collaborating with our peers.


Turning workers into 'super workers' with robotic suits 02-14



If you've watched the Iron Man film franchise, you'll know that a powered suit gives inventor Tony Stark superhuman strength to fight the bad guys.
But away from the fictional world of blockbuster movies, robotic exoskeletons offer more prosaic and useful help for humans.
The military has been in on the act for years, using them to help soldiers carry more weight for longer periods of time. Meanwhile manufacturers have been busy creating robotic suits to give mobility to people with disabilities.
But now exoskeletons are becoming an important part of the scene in more conventional workplaces, mainly because of their unique offering.
"Exoskeletons act as a bridge between fully-manual labour and robotic systems. You get the brains of people in the body of a robot," says Dan Kara, research director at ABI Research.
"But there's more to it than that. You can tie the use of exoskeletons to business benefits that are very easy to quantify. The main one is a reduction in work-related injuries, and we know that outside the common cold, back injury is the main reason people are off work."



The motor industry has used robots for many years. But robots can't do everything, points out technical expert Marty Smets, of Ford's human systems and virtual manufacturing unit.
"In our plants, we see a need for both people and robots," he says.
Some Ford assembly line workers lift their arms up to 4,600 times a day - that's about a million times a year. That sort of repetition leaves many suffering from backache and neck pain.
Now, though, the company has equipped staff at two US assembly plants with a device called the EksoVest, from California-based Ekso Bionics. It helps take the strain by giving workers an extra 5-15lb (2.2-6.8kg) of lift per arm.
"Incredible is the only word to describe the vest," said Paul Collins, an assembly line worker at Ford Michigan assembly plant. "It has made my job significantly easier and has given me more energy throughout the day."
The company says it's already seeing a dramatic decline in work-related injuries and is now planning to introduce the exoskeletons at facilities in Europe and South America.



Currently, the industrial use of exoskeletons is relatively small - this year only a few thousand have been sold, says ABI's Kara. But, he says, the potential market could be in the millions.
The types of exoskeleton used for rehabilitation can cost more than $100,000 (£75,000), needing, as they usually do, to replace a user's muscles altogether. However, industrial versions can be far cheaper, at around $5,000.
They generally augment human strength rather than replace it and tend to enhance one part of the body only. They also often don't need any external power. Instead, they can deliver a 10-20% boost to the user's lifting power by transferring weight to the ground.
In Japan, exoskeletons are being used for heavy lifting in the shipbuilding industry as well as in large commercial construction projects.
Meanwhile, US retailer Home Depot is testing exoskeletons to help workers unload trucks and bring materials onto the floor.
Another early adopter is Lockheed Martin, which is using its own Fortis exoskeleton to allow workers to operate tools for much longer periods. It has a support structure that transfers the weight of heavy loads from the operator's body directly to the ground through a series of joints at the hips, knees and ankles.
It can also be used with an arm that supports the weight of a tool and helps isolate vibration and torque kick - rotational force - from the user. Workers using the devices, says Lockheed Martin, report two-thirds less fatigue, with higher-quality work, greater productivity and fewer musculoskeletal injuries.




Other companies are producing powered industrial exoskeletons that are rather more like the suits from the movies. Sarcos, for example, offers three models, with the biggest - the Guardian GT - handling more than 450kg with its 2m (7ft) arms.
"I think powered exoskeletons will become ubiquitous for industrial applications around the world. These devices will materially reduce occupational injuries while also dramatically improving productivity," says chief executive officer Ben Wolff.
"Additionally, these devices can extend the useful life of an aging work force, and can make jobs open for more people that previously could have only been handled by people of larger physical stature." 
Other augmentation technologies are even stranger. Researchers at Cornell's Sibley School of Mechanical and Aerospace Engineering, for example, have developed a robotic "third arm" that attaches to the user's elbow. The group says it sees applications in package handling, warehouses, and even restaurants.
"A third arm device would enhance a worker's reach, and allow them to access objects without having to reach or bend. This would be useful in pick-and-place tasks where the worker is moving, such as retrieving packages from warehouse shelves," says researcher Vighnesh Vatsal.
"It would also provide support in assembly tasks in challenging environments such as construction sites, for instance by holding a work piece steady while a worker operates on it with power tools using their own hands."
In the longer term, industry experts say the price of exoskeletons will fall further, meaning they could move into many more areas of work. They could even find a place in private life, with applications in DIY, gardening and sports such as hiking.
So while we're never likely to be able to emulate the exploits of comic book heroes, exoskeletons could help with mundane household chores such as ironing. So not so much Iron Man - more "ironing man", perhaps?

How to Get People Addicted to a Good Habit 02-16

Reshmaan Hussam and colleagues used experimental interventions to determine if people could be persuaded to develop a healthy habit. Potentially at stake: the lives of more than a million children.
A few years ago, Reshmaan Hussam and colleagues decided to find out why many people in the developing world fail to wash their hands with soap, despite lifesaving benefits.

Every year more than a million children under the age of five die from diarrheal diseases and pneumonia. Washing hands with soap before meals can dramatically reduce rates of both diarrhea and acute respiratory infections.

To that end, major health organizations have poured a lot of money into handwashing education campaigns in the developing world, but to little avail. Even when made aware of the importance of a simple activity, and even when provided with free supplies, people continue to wash their hands without soap—if they wash their hands at all.

“If you look at these public health initiatives, you see that they are often a complicated combination of interventions: songs and dances and plays and free soap and water dispensers,” says Hussam, an assistant professor at Harvard Business School whose research lies at the intersection of development, behavioral, and health economics. “Which means that when these initiatives don’t work, nobody can say why.”

When Hussam and her fellow researchers conducted their initial survey of several thousand rural households in West Bengal, India, they discovered that people don’t wash their hands with soap for the same reason most of us don’t run three miles every morning or drink eight glasses of water every day, despite our doctors lecturing us on the benefits of cardiovascular exercise and hydration. It’s not that we are uninformed, unable, or lazy. It’s that we’re just not in the habit.

“The idea is that habits are equivalent to addictions”

With that in mind, the researchers designed a field study to understand whether handwashing with soap was indeed a habit-forming behavior, whether people recognized it as such, whether it was possible to induce the habit with experimental interventions, and whether the habit would continue after the interventions ceased.

The field experiment was based on the theory of "rational addiction." Developed by economists Gary Becker and Kevin Murphy, the theory posits that addictions are not necessarily irrational. Rather, people often willingly engage in a particular behavior despite knowing that it will increase their desire to engage in that behavior in the future (i.e., become "addicted"). As "rational addicts," people can weigh the costs and benefits of their current behavior, taking into account its implications for the future, and still choose to engage.
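In formal terms, the setup can be sketched compactly. What follows is a simplified rendering of the Becker-Murphy model in our own notation, not the paper's: the consumer chooses consumption c_t of the habit-forming good, which feeds a slowly decaying "habit stock" S_t.

```latex
% Simplified Becker-Murphy sketch (our notation, a simplification):
% the consumer picks c_t knowing it raises tomorrow's habit stock S_{t+1}.
\max_{\{c_t\}} \; \sum_{t=0}^{\infty} \beta^{t}\, u(c_t, S_t)
\quad \text{subject to} \quad
S_{t+1} = (1-\delta)\, S_t + c_t,
\qquad \frac{\partial^2 u}{\partial c \,\partial S} > 0.
```

The cross-partial condition is the "addiction": a larger stock of past consumption raises the marginal utility of consuming today. A rational addict internalizes the transition equation, so anything that changes the future value of the behavior changes behavior now.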

One way to test whether people are in fact "rational" about their addictions, Hussam says, is by looking at how changes in the future cost of the behavior affect them today. For example, if a rational addict learns that taxes on cigarettes are going to double in six months, she may be less likely to take up smoking today.

Hussam remains agnostic on whether the behavior of addicts (to cigarettes, drugs, or alcohol, for example) can be fully understood by the theory of rational addiction—“a theory that fails to explain why addicts often regret their behavior or regard it as a mistake,” she says. But she found the framework, which has historically been applied only to harmful behaviors, was useful to shift into the language of positive habits.

“Habits, after all, are like a lesser form of addiction: The more you engage in the past, the more likely you are to engage today,” she says. “And if that’s the case, do people recognize—are they ’rational’ about—the habitual nature of good behaviors? If they aren’t, it could explain the underinvestment in behaviors like handwashing with soap that we see. If they are rational, it can affect the design of interventions and incentives that policymakers can offer to encourage positive habit formation.”

The team’s experiment and findings are detailed in the paper Habit Formation and Rational Addiction: A Field Experiment in Handwashing (pdf), authored by Hussam; Atonu Rabbani, an associate professor at the University of Dhaka; Giovanni Reggiani, then a doctoral student at MIT and now a consultant at The Boston Consulting Group; and Natalia Rigol, a postdoctoral fellow at Harvard’s T.H. Chan School of Public Health.

The hand washing experiment

In partnership with engineers at the MIT Media Lab, the researchers designed a simple wall-mounted soap dispenser with a time-stamped sensor hidden inside. The sensor allowed the team to determine not only how often people were washing their hands, but also whether they were doing so before dinnertime, critical to an effective intervention. (The idea for the hidden sensors came from a scene in Jurassic Park in which one of the characters smuggles dinosaur embryos in a jury-rigged can of Barbasol shaving cream.) The data gave the researchers the ability to tease apart behavioral mechanisms in a way that earlier work, which often relied on self-reports or surveyor observations of hand hygiene, could not.
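To make the measurement concrete, here is a minimal sketch, with an invented log format and an assumed dinner window, of how time-stamped pump events translate into a "used soap before dinner" indicator. It is an illustration only, not the study's actual pipeline.

```python
from datetime import datetime, time

# Hypothetical pump log: (household_id, timestamp of a dispenser press).
# The 18:00-20:00 dinner window is an assumption for illustration.
events = [
    ("hh01", datetime(2014, 7, 1, 18, 40)),
    ("hh01", datetime(2014, 7, 2, 9, 15)),
    ("hh02", datetime(2014, 7, 1, 19, 5)),
]

DINNER_START, DINNER_END = time(18, 0), time(20, 0)

def pre_dinner_days(log, household):
    """Return the set of dates on which a household pressed the
    dispenser within the dinner window."""
    return {
        ts.date()
        for hh, ts in log
        if hh == household and DINNER_START <= ts.time() <= DINNER_END
    }

print(len(pre_dinner_days(events, "hh01")))  # 1: only the 18:40 press counts
```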

The researchers were also mindful about which type of soap to use in the dispensers. Through pilot tests, they found that people preferred foam, for example. “They didn’t feel as clean when the soap wasn’t foamy,” Hussam says.

And because all people in the experiment ate meals with their hands, they were turned off by heavily perfumed soap, which interfered with the taste of their food. So the experiment avoided strongly scented soap. That said, “we preserved some scent, as the olfactory system is a powerful sensory source of both memory and pleasure and thus easily embedded into the habit loop,” the researchers explain in the paper.

The experiment included 3,763 young children and their parents in 2,943 households across 105 villages in the Birbhum District of West Bengal, where women traditionally manage both cooking and childcare. A survey showed that 79 percent of mothers in the sample could articulate, without being prompted, that the purpose of soap is to kill germs.

But while more than 96 percent reported rinsing their hands with water before cooking and eating, only 8 percent said they used soap before cooking and only 14 percent before eating. (Hussam contends that these low numbers are almost certainly overestimates, as they were self-reported.) Some 57 percent of the respondents reported that they didn’t wash their hands with soap simply because “Obhyash nai,” which means “I do not have the habit,” Hussam says.

Monitoring vs. offering incentives

The researchers randomly divided the villages into “monitoring” and “incentive” villages, taking two approaches to inducing the hand washing habit. In each experiment, there was a randomly selected control group of households that did not receive a soap dispenser; altogether, 1,400 of the 2,943 households received dispensers.

“The monitoring experiment tried to understand the beginnings of social norm formation: whether third-party observation through active tracking by surveyors of hand washing behavior could increase hand washing rates, and whether the behavior could become a habit even after the monitoring stopped,” Hussam explains.

Among the 1,400 households that received a soap dispenser, one group was told their hand washing would be tracked from the get-go, and that they would receive feedback reports on their soap usage patterns. Another group was told their behavior would be tracked in a few months, enabling a precise test of rational habit formation—whether people would start washing their hands now if they knew that the “value” of hand washing would increase in the future. And another group was not told that soap use would be tracked.

The incentive experiment “tried to price a household’s value of hand washing and forward-looking behavior,” Hussam says—in other words, whether financial incentives could increase hand washing rates, and whether those households would keep using soap even after the incentives stopped. In one incentive group, people learned that they would receive one ticket for each day they washed their hands; the tickets could be accumulated and cashed in for various goods and gifts in a prize catalog.

In another group, people learned that they initially would receive one ticket each day for washing their hands with soap, but that in two months they would begin receiving triple the number of tickets for every day they used the dispenser. The final group received the same incentive boost two months into the experiment, but it was a happy surprise: The group had no prior knowledge of the triple-ticket future.

“The difference … is a measure of rational habit formation,” Hussam explains. “While one household is anticipating a change in future value of the behavior, the other household is not; if the first household behaves differently than their counterpart in the present, they must recognize that handwashing today increases their own likelihood of handwashing in the future.”
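In spirit, the test reduces to a difference in usage rates, before the boost arrives, between households that anticipate it and households that do not. The sketch below uses entirely synthetic rates, not study data, just to show the shape of the comparison.

```python
import random

random.seed(0)

def usage_rates(p, households=500, days=60):
    """Share of pre-boost days each simulated household used soap."""
    return [
        sum(random.random() < p for _ in range(days)) / days
        for _ in range(households)
    ]

# Invented rates: the anticipating group washes more *today* because it
# knows handwashing will be worth more (triple tickets) in two months.
anticipated = usage_rates(0.55)  # told about the future ticket boost
surprise = usage_rates(0.45)     # told nothing about the boost

mean = lambda xs: sum(xs) / len(xs)
# A positive pre-boost gap is the signature of rational habit formation.
print(round(mean(anticipated) - mean(surprise), 3))
```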

A clean victory

The results showed that both monitoring and monetary incentives led to substantial increases in hand washing with soap.

Households were 23 percent more likely to use soap if they knew they were being monitored. And some 70 percent of ticket-receiving households used their soap dispensers regularly throughout the experiment, compared with 30 percent of households that received the dispensers without incentives.
Importantly, the effects continued even after the households stopped receiving tickets and monitoring reports, suggesting that handwashing with soap was indeed a developable habit.

More importantly, the experiment resulted in healthier children in households that received a soap dispenser, with a 20 percent decrease in acute respiratory infections and a 30 to 40 percent decrease in loose stools on any given day, compared with children whose households did not have soap dispensers. Moreover, the children with soap dispensers ended up weighing more and even growing taller. “For an intervention of only eight months, that really surprised us,” Hussam says.

But while it appeared that handwashing was indeed a habitual behavior, were people “rational” about it? Indeed they were, based on the results of the monitoring experiment.

“Our results are consistent with the key predictions of the rational addiction model, expanding its relevance to settings beyond what are usually considered ‘addictive’ behaviors,” the researchers write.

In the incentives group, the promise of triple tickets didn’t affect behavior much, but, as Hussam notes, that may have been because the single tickets were already enough to get the children their most coveted prize: a school backpack. “Basically, we found that getting one ticket versus getting no tickets had huge effects, while going from one to three did little,” she says.

“Wherever we go, habits define much of what we do”

But in the monitoring group, handwashing rates increased significantly and immediately, not only for those who were monitored but also for those who were simply told to anticipate that their behavior would be tracked at a later date. “Simply knowing that handwashing will be more valuable in the future (because your behavior will be tracked so there’s a higher cost to shirking) makes people wash more today,” Hussam says.

This, Hussam hopes, is the primary takeaway of the study. While the experiment focused on a specific behavior in a specific area of India, the findings may prove valuable to anyone trying to develop a healthy addiction—whether it be an addiction to treating contaminated drinking water and using mosquito nets in the developing world, or an addiction to exercising every day and flossing every night in the developed world.

“Wherever we go, habits define much of what we do,” Hussam says. “This work can help us understand how to design interventions that help us cultivate the good ones.” 





Developing Novel Drugs 02-16

We analyze firms’ decisions to invest in incremental and radical innovation, focusing specifically on pharmaceutical research. We develop a new measure of drug novelty that is based on the chemical similarity between new drug candidates and existing drugs. We show that drug candidates that we identify as ex-ante novel are riskier investments, in the sense that they are subsequently less likely to be approved by the FDA.

However, conditional on approval, novel candidates are, on average, more valuable: they are more clinically effective, have higher patent citations, and lead to more revenue and higher stock market value. Using variation in the expansion of Medicare prescription drug coverage, we show that firms respond to a plausibly exogenous cash flow shock by developing more molecularly novel drug compounds, as opposed to more so-called "me-too" drugs. This pattern suggests that, on the margin, firms perceive novel drugs to be more valuable ex-ante investments, but that financial frictions may hinder their willingness to invest in these riskier candidates.

Over the past 40 years, the greatest gains in life expectancy in developed countries have come from the development of new therapies to treat conditions such as heart disease, cancer, and vascular disease.

At the same time, the development of new–and often incremental–drug therapies has played a large role in driving up health care costs, with critics frequently questioning the true innovativeness of expensive new treatments (Naci, Carter, and Mossialos, 2015). This paper contributes to our understanding of drug investment decisions by developing a measure of drug novelty and subsequently exploring the economic tradeoffs involved in the decision to develop novel drugs.

Measuring the amount of innovation in the pharmaceutical industry is challenging. Indeed, critics argue that "pharmaceutical research and development turns out mostly minor variations on existing drugs, and most new drugs are not superior on clinical measures," making it difficult to use simple drug counts as a measure of innovation (Light and Lexchin, 2012). To overcome this challenge, we construct a new measure of drug novelty for small molecule drugs, which is based on the molecular similarity of the drug with prior drug candidates. Thus, our first contribution is to develop a new measure of pharmaceutical innovation.

We define a novel drug candidate as one that is molecularly distinct from previously tested candidates. Specifically, we build upon research in modern pharmaceutical chemistry to compute a pair-wise chemical distance (similarity) between a given drug candidate and all prior candidates in our data. This similarity metric is known as a "Tanimoto score" or "Jaccard coefficient," and captures the extent to which two molecules share common chemical substructures. We aggregate these pairwise scores to identify the maximum similarity of a new drug candidate to all prior candidates. Drugs that are sufficiently different from their closest counterparts are novel according to our measure. Since our metric is based on molecular properties observed at the time of a drug candidate's initial development, it improves upon existing novelty measures by not conflating ex-ante novelty with ex-post measures of success such as receiving priority FDA review.
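As a concrete sketch (toy substructure sets standing in for hashed chemical fingerprints; production pipelines would use a cheminformatics toolkit), the similarity metric and its max-aggregation look like this:

```python
def tanimoto(fp_a: frozenset, fp_b: frozenset) -> float:
    """Tanimoto/Jaccard similarity: shared substructures over the union."""
    if not fp_a and not fp_b:
        return 0.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

def max_similarity(candidate, prior_candidates):
    """Similarity of a new candidate to its closest prior candidate.
    A drug counts as novel when this value is low."""
    return max((tanimoto(candidate, p) for p in prior_candidates), default=0.0)

# Toy example: integers stand in for chemical substructure IDs.
prior = [frozenset({1, 2, 3, 4}), frozenset({7, 8})]
new_drug = frozenset({1, 2, 3, 5})
print(max_similarity(new_drug, prior))  # 3 shared / 5 in union = 0.6
```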

In the United States, the sharpest decline in death rates over the period 1981 to 2001 comes from the reduction in the incidence of heart disease. See Life Tables for the United States Social Security Area 1900-2100, https://www.ssa.gov/oact/NOTES/as120/LifeTables_Body.html. See also Lichtenberg (2013), which estimates explicit mortality improvements associated with pharmaceuticals. One of the more vocal critics is Marcia Angell, a former editor of the New England Journal of Medicine. She argues that pharmaceutical firms increasingly concentrate their research on variations of top-selling drugs already on the market, sometimes called "me-too" drugs.

She concludes: "There is very little innovative research in the modern pharmaceutical industry, despite its claims to the contrary." (http://bostonreview.net/angell-big-pharma-bad-medicine.) Indeed, empirical evidence appears to be consistent with this view; Naci et al. (2015) survey a variety of studies that show a declining clinical benefit of new drugs. Small molecule drugs, synthesized using chemical methods, constitute over 80% of modern drug candidates (Otto, Santagostino, and Schrader, 2014). We discuss larger drugs based on biological products in Section 3.6.

Our novelty measure based on molecular similarity has sensible properties. Pairs of drug candidates classified as more similar are more likely to perform the same function—that is, they share the same indication (disease) or target-action (mechanism). Further, drugs we classify as more novel are more likely to be the first therapy of their kind. In terms of secular trends, our novelty measure indicates a decline in the innovativeness of small molecule drugs: both the number and the proportion of novel drug candidates have declined over the 1999 to 2014 period. Across our sample, over 15% of newly developed candidates have a similarity score above 0.8, meaning that they share more than 80% of their chemical substructures with a previously developed drug.

We next examine the economic characteristics of novel drugs, in order to better understand the tradeoffs that firms face when deciding how to allocate their R&D resources. We begin by exploring how the novelty of a drug candidate relates to its (private and social) return from an investment standpoint. Since measuring a drug’s value is challenging, we rely on several metrics. First, we examine drug effectiveness as measured by the French healthcare system’s assessments of clinical value-added, following Kyle and Williams (2017).

Since this measure is only available for a subset of approved drugs, we also examine the relationship between molecular novelty and the number of citations to a drug's underlying patents, which the innovation literature has long argued is related to economic and scientific value (see, e.g., Hall, Jaffe, and Trajtenberg, 2005). We also use drug revenues as a more direct proxy for economic value. However, since mark-ups may vary systematically between novel and "me-too" drugs—that is, drugs that are extremely similar to existing drugs—we also rely on estimates of their contribution to firm stock market values. Specifically, we follow Kogan, Papanikolaou, Seru, and Stoffman (2017) and examine the relationship between a drug's molecular novelty and the change in its firm's market valuation following either FDA approval or the granting of its key underlying patents.

Conditional on being approved by the FDA, novel drugs are on average more valuable. Specifically, relative to drugs entering development in the same quarter that treat the same disease (indication), a one-standard-deviation increase in our measure of novelty is associated with a 33 percent increase in the likelihood that a drug is classified as "highly important" by the French healthcare system; a 10 to 33 percent increase in the number of citations for associated patents; a 15 to 35 percent increase in drug revenues; and a 2 to 8 percent increase in firm valuations. To benchmark what this means, we note that the chemical structures for Mevacor and Zocor, depicted in Figure 1, share an 82% overlap.

However, novel drugs are also riskier investments, in that they are less likely to receive regulatory approval. Relative to comparable drugs, a one-standard deviation increase in novelty is associated with a 29 percent decrease in the likelihood that it is approved by the FDA. Thus, novel drugs are less likely to be approved by the FDA, but conditional on approval, they are on average more valuable.
To assess how firms view this tradeoff between risk and reward at the margin, we next examine how they respond to a positive shock to their (current or expected future) cashflows. Specifically, if firms that experience a cashflow shock develop more novel—rather than molecularly derivative—drugs, then this pattern would suggest that firms value novelty more on the margin.

Here, we note that we are implicitly assuming that treated firms have a similar set of drug development opportunities as control firms, and, moreover, that financial frictions limit firms’ ability to develop new drug candidates. Indeed, if firms face no financing frictions, then, holding investment opportunities constant, cashflow shocks should not impact their development decisions. However, both theory and existing empirical evidence suggest that a firm’s cost of internal capital can be lower than its cost of external funds.5 In this case, an increase in cashflows may lead firms to develop more or different drugs by increasing the amount of internal funds that can be used towards drug development decisions. Even if this increase in cashflows occurs with some delay, firms might choose to respond today, either because it increases the firm’s net worth, and hence its effective risk aversion (see, e.g. Froot, Scharfstein, and Stein, 1993), or because this anticipated increase in profitability relaxes constraints today.

We construct shocks to expected firm cashflows using the introduction of Medicare Part D, which expanded US prescription drug coverage for the elderly. This policy change differentially increased profits for firms with more drugs that target conditions common among the elderly (Friedman, 2009). However, variation in the share of elderly customers alone does not necessarily enable us to identify the impact of increased cashflows, because the expansion of Medicare impacts not only the profitability of the firm's existing assets but also its investment opportunities.

For a theoretical argument, see Myers and Majluf (1984). Consistent with theory, several studies have documented that financing frictions play a role in firm investment and hiring decisions. Recent work on this topic examines the response of physical investment (for instance, Lin and Paravisini, 2013; Almeida, Campello, Laranjeira, and Weisbenner, 2011; Frydman, Hilt, and Zhou, 2015); employment decisions (Benmelech, Bergman, and Seru, 2011; Chodorow-Reich, 2014; Duygan-Bump, Levkov, and Montoriol-Garriga, 2015; Benmelech, Frydman, and Papanikolaou, 2017); and investments in R&D (see e.g. Bond, Harhoff, and van Reenen, 2005; Brown, Fazzari, and Petersen, 2009; Hall and Lerner, 2010; Nanda and Nicholas, 2014; Kerr and Nanda, 2015). These frictions may be particularly severe in the case of R&D: Howell (2017) shows that even relatively modest subsidies to R&D can have a dramatic impact on ex-post outcomes.


Developing Novel Drugs 2 02-16

To isolate the causal impact of cash flows on development decisions, we exploit a second source of variation: remaining drug exclusivity (patent life plus additional exclusivity granted by the FDA). Even among firms with the same focus on the elderly, those with more time to enjoy monopoly rights on their products are likely to generate greater profits.

With these two dimensions of variation, elderly share and remaining exclusivity, we can better control for confounders arising from each dimension individually. For example, firms with more existing drugs for the elderly may see a differentially greater increase in investment opportunities as a result of Part D, even absent any changes to cash flow.

Meanwhile, firms with longer remaining exclusivity periods on their products may have different development strategies than firms whose drugs face imminent competition, again, even absent changes to cash flows. Our strategy thus compares firms with the same share of drugs sold to the elderly and the same remaining exclusivity periods across their overall drug portfolio, but that differ in how their remaining patent exclusivity is distributed across drugs of varying elder shares. This strategy allows us to identify differences in expected cash flow among firms with similar investment opportunities, and at similar points in their overall product lifecycle.
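Schematically, and in our own notation rather than the paper's exact specification, the design boils down to an interaction regression of the following form, where the interaction term carries the cash-flow treatment while both main effects are controlled for:

```latex
% Schematic cross-sectional specification (our notation, a simplification):
% NewCandidates_f counts drugs firm f begins developing after Part D.
\text{NewCandidates}_f
  = \beta \,\bigl(\text{ElderShare}_f \times \text{Exclusivity}_f\bigr)
  + \gamma_1\, \text{ElderShare}_f
  + \gamma_2\, \text{Exclusivity}_f
  + \varepsilon_f
```

Among firms with the same elderly exposure and the same overall exclusivity, those whose exclusivity happens to be concentrated in high-elder-share drugs receive the larger expected cash-flow boost from Part D, and the coefficient on the interaction asks what they do with it.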

We find that treated firms develop more new drug candidates. Importantly, this effect is driven by an increase in the number of chemically novel candidates, as opposed to “me-too” candidates. Further, these new candidates are aimed at a variety of conditions, not simply ones with a high share of elderly patients, implying that our identification strategy is at least partially successful in isolating a shock to cash flows, and not simply picking up an increase in investment opportunities for high elderly share drugs.

In addition, we find some evidence that firm managers have a preference for diversification. The marginal drug candidates that treated firms pursue often include drugs that focus on different diseases, or operate using a different mechanism (target), relative to the drugs that the firm has previously developed. These findings suggest that firms use marginal increases in cash to diversify their portfolios and undertake more exploratory development strategies, a fact consistent with models of investment with financial frictions (Froot et al., 1993), or poorly diversified managers (Smith and Stulz, 1985).

Finally, our point estimates imply sensible returns to R&D. A one standard deviation increase in Part D exposure leads to an 11 percent increase in subsequent drug development, relative to less exposed firms. For the subset of firms for which we are able to identify cash flow, this translates into an elasticity of the number of drug candidates with respect to R&D expenditure of about 0.75.

We obtain a higher elasticity for the most novel drugs (1.01 to 1.59) and a lower elasticity for the most similar drugs (0.02 to 0.31). For comparison, estimates of the elasticity of output with respect to demand (or cash flow) shocks in the innovation literature range from 0.3 to 4 (Henderson and Cockburn, 1996; Acemoglu and Linn, 2004; Azoulay, Graff-Zivin, Li, and Sampat, 2016; Blume-Kohout and Sood, 2013; Dranove, Garthwaite, and Hermosilla, 2014).
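
As a back-of-envelope check on these magnitudes (a minimal sketch: the 11 percent figure is from the text, while the R&D change is backed out so the numbers are mutually consistent, purely for illustration):

```python
# Elasticity = percent change in candidates / percent change in R&D.
def elasticity(pct_output: float, pct_rnd: float) -> float:
    return pct_output / pct_rnd

pct_output = 0.11                    # development response from the text
implied_pct_rnd = pct_output / 0.75  # ~14.7%, backed out for illustration
print(elasticity(pct_output, implied_pct_rnd))  # recovers 0.75
```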

Our results suggest that financial frictions likely play a role in limiting the development of novel drug candidates. The ability to observe the returns associated with individual projects is an important advantage of our setting that allows us to make a distinct contribution to the literature studying the impact of financial frictions on firm investment decisions. Existing studies typically observe the response of investment (or hiring) aggregated at the level of individual firms or geographic locations.

By contrast, our setting allows us to observe the risk and return of the marginal project being undertaken as a result of relaxing financial constraints, and hence allows us to infer the type of investments that may be more susceptible to financing frictions. We find that relaxing financing constraints leads to more innovation, both at the extensive margin (i.e., more drug candidates) and at the intensive margin (i.e., more novel drugs). Given that novel drugs are less likely to be approved by the FDA, the findings in our paper echo those in Metrick and Nicholson (2009), who document that firms that score higher on a Kaplan-Zingales index of financial constraints are more likely to develop drugs that pass FDA approval.

By providing a new measure of novelty, our work contributes to the literature focusing on the measurement and determinants of innovation. Our novelty measure is based on the notion of chemical similarity (Johnson and Maggiora, 1990), which is widely used in the process of pharmaceutical discovery.

Chemists use molecular similarity calculations to help them search chemical space, build libraries for drug screening (Wawer, Li, Gustafsdottir, Ljosa, Bodycombe, Marton, Sokolnicki, Bray, Kemp, Winchester, Taylor, Grant, Hon, Duvall, Wilson, Bittker, Dančík, Narayan, Subramanian, Winckler, Golub, Carpenter, Shamji, Schreiber, and Clemons, 2014), quantify the “drug-like” properties of a compound (Bickerton, Paolini, Besnard, Muresan, and Hopkins, 2012), and expand medicinal chemistry techniques (Maggiora, Vogt, Stumpfe, and Bajorath, 2014). In parallel work, Pye, Bertin, Lokey, Gerwick, and Linington (2017) use chemical similarity measures to measure novelty and productivity in the discovery of natural products.
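
To make the idea concrete, here is a toy sketch of similarity-based novelty using Tanimoto (Jaccard) similarity on binary fingerprints. The integer "substructure keys" and the max-similarity definition of novelty are illustrative assumptions, not the paper's exact construction:

```python
def tanimoto(fp_a: set, fp_b: set) -> float:
    """Tanimoto (Jaccard) similarity between two fingerprint sets."""
    if not fp_a and not fp_b:
        return 1.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

def novelty(candidate: set, prior_drugs: list) -> float:
    """Novelty as one minus the max similarity to any prior drug."""
    if not prior_drugs:
        return 1.0
    return 1.0 - max(tanimoto(candidate, prior) for prior in prior_drugs)

# Toy fingerprints: integers stand in for molecular substructure keys.
prior = [{1, 2, 3, 4}, {2, 3, 5}]
me_too = {1, 2, 3, 4, 6}  # close to an existing drug -> low novelty (0.2)
novel = {7, 8, 9}         # shares no substructures -> high novelty (1.0)
print(novelty(me_too, prior), novelty(novel, prior))
```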

Our measure of innovation is based on ex-ante information—the similarity of a drug’s molecular structure to prior drugs—and therefore avoids some of the truncation issues associated with patent citations (Hall et al., 2005). Further, since our measure is based only on ex-ante data, it does not conflate the ex-ante novelty of an idea with measures of ex-post success or of market size. By contrast, existing work typically measures “major” innovations using metrics based on ex-post successful outcomes, which may also be related to market size.

Examples include whether a drug candidate gets FDA Priority Review status (Dranove et al., 2014), or whether a drug has highly-cited patents (Henderson and Cockburn, 1996). A potential concern with these types of measures is that a firm will be credited with pursuing novel drug candidates only if these candidates succeed and not when—as is true in the vast majority of cases—they fail. Similarly, outcomes such as whether a drug is first in class or is an FDA orphan drug (Dranove et al., 2014; DiMasi and Faden, 2011; Lanthier, Miller, Nardinelli, and Woodcock, 2013; DiMasi and Paquette, 2004) may conflate market size with novelty and may fail to measure novelty of candidates within a particular class.

For example, it is easier to be the first candidate to treat a rare condition than a common condition because fewer firms have incentives to develop treatments for the former. Further, measuring novelty as first in class will label all subsequent treatments in an area as incremental, even if they are indeed novel.

Our paper also relates to work that examines how regulatory policies and market conditions distort the direction of drug development efforts (Budish, Roin, and Williams, 2015); and how changes in market demand affect innovation in the pharmaceutical sector (Acemoglu and Linn, 2004; Blume-Kohout and Sood, 2013; Dranove et al., 2014). Similar to us, Blume-Kohout and Sood (2013) and Dranove et al. (2014) exploit the passage of Medicare Part D, and find more innovation in markets that receive a greater demand shock (drugs targeted to the elderly).

Even though we use the same policy shock, our work additionally exploits differences in drug exclusivity for specific drugs to identify the effect of cash flow shocks separately from changes in product demand that may increase firm investment opportunities. Indeed, we find that treated firms invest in new drugs across different categories—as opposed to those that only target the elderly—strongly suggesting that our identification strategy effectively isolates cash flow shocks from improvements in investment opportunities.

Last, our measure of novelty can help shed light on several debates in the innovation literature. For instance, Jones (2010) and Bloom, Jones, Van Reenen, and Webb (2017) argue for the presence of decreasing returns to innovation. Consistent with this view, we find that drug novelty has decreased over time. An important caveat is that our novelty measure cannot be computed for biologics, which represent a vibrant research area.


View the complete research paper at the origin source

Vishal Sikka: Why AI Needs a Broader, More Realistic Approach 02-16

The concept of artificial intelligence (AI), or the ability of machines to perform tasks that typically require human-like understanding, has been around for more than 60 years. But the buzz around AI now is louder and shriller than ever. With the computing power of machines increasing exponentially and staggering amounts of data available, AI seems to be on the brink of revolutionizing various industries and, indeed, the way we lead our lives.

Until last summer, Vishal Sikka was the CEO of Infosys, an Indian information technology services firm; before that he was a member of the executive board at SAP, a German software firm, where he led all products and drove innovation for the firm. India Today magazine named him among the top 50 most powerful Indians in 2017. Sikka is now working on his next venture exploring the breakthroughs that AI can bring and ways in which AI can help elevate humanity.

Sikka says he is passionate about building technology that amplifies human potential. He expects that the current wave of AI will “produce a tremendous number of applications and have a huge impact.” He also believes that this “hype cycle will die” and “make way for a more thoughtful, broader approach.”

In a conversation with Knowledge@Wharton, Sikka, who describes himself as a “lifelong student of AI,” discusses the current hype around AI, the bottlenecks it faces, and other nuances.

Knowledge@Wharton: Artificial intelligence (AI) has been around for more than 60 years. Why has interest in the field picked up in the last few years?

 Vishal Sikka: I have been a lifelong student of AI. I met [AI pioneer and cognitive scientist] Marvin Minsky when I was about 20 years old. I’ve been studying this field ever since. I did my Ph.D. in AI. John McCarthy, the father of AI, was the head of my qualifying exam committee.

The field of AI goes back to 1956 when John, Marvin, Allen Newell, Herbert Simon and a few others organized a summer workshop at Dartmouth. John came up with the name “AI” and Marvin gave its first definition. Over the first 50 years, there were hills and valleys in the AI journey. The progress was multifaceted. It was multidimensional. Marvin wrote a wonderful book in 1986 called The Society of Mind. What has happened in the last 10 years, especially since 2012, is that there has been a tremendous interest in one particular set of techniques. These are based on what are called “deep neural networks.”

Neural networks themselves have been around for a long time. In fact, Marvin’s thesis was on a part of neural networks in the early 1950s. But in the last 20 years or so, these neural network-based techniques have become extraordinarily popular and powerful for a couple of reasons.
First, if I can step back for a second, the idea of neural networks is that you create a network that resembles the human or the biological neural networks.

This idea has been around for more than 70 years. However, in 1986 a breakthrough happened thanks to a professor in Canada, Geoff Hinton. His technique of backpropagation (a supervised learning method used to train neural networks by adjusting the weights and the biases of each neuron) created a lot of excitement, and a great book, Parallel Distributed Processing, by David Rumelhart and James McClelland, together with Hinton, moved the field of neural net-related “connectionist” AI forward. But still, back then, AI was quite multifaceted.
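
For readers unfamiliar with the technique, the following toy numpy sketch shows backpropagation in the sense described: a tiny network learns XOR by repeatedly propagating its error backward and nudging weights down the gradient. The architecture, learning rate, and iteration count are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input -> hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error back through each layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates to weights and biases.
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```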

Second, in the last five years, one of Hinton’s groups invented a technique called “deep learning” or “deep neural networks.” There isn’t anything particularly deep about it other than the fact that the networks have many layers, and they are massive. This has happened because of two things. One, computers have become extraordinarily powerful. With Moore’s law, every two years, more or less, we have seen a doubling of price performance in computing. Those effects are becoming dramatic and much more visible now. Computers today are tens of thousands of times more powerful than they were when I first worked on neural networks in the early 1990s.

“The hype we see around AI today will pass and make way for a more thoughtful and realistic approach.”

The second thing is that big cloud companies like Google, Facebook, Alibaba, Baidu and others have massive amounts of data, absolutely staggering amounts of data, that they can use to train neural networks. The combination of deep learning, together with these two phenomena, has created this new hype cycle, this new interest in AI.

But AI has seen many hype cycles over the last six decades. This time around, there is a lot of excitement, but the progress is still very narrow and asymmetric. It’s not multifaceted. My feeling is that this hype cycle will produce great applications and have a big impact and wonderful things will be done. But this hype cycle will die and a few years later another hype cycle will come along, and then we’ll have more breakthroughs around broader kinds of AI and more general approaches. The hype we see around AI today will pass and make way for a more thoughtful and realistic approach.

Knowledge@Wharton: What do you see as the most significant breakthroughs in AI? How far along are we in AI development?

Sikka: If you look at the success of deep neural networks or of reinforcement learning, we have produced some amazing applications. My friend [and computer science professor] Stuart Russell characterizes these as “one-second tasks.” These are tasks that people can perform in one second. For instance, identifying a cat in an image, checking if there’s an obstacle on the road, confirming if the information in a credit or loan application is correct, and so on.

With the advances in techniques — the neural network-based techniques, the reinforcement learning techniques — as well as the advances in computing and the availability of large amounts of data, computers can already do many one-second tasks better than people. We get alarmed by this because AI systems are superseding human behavior even in sophisticated jobs like radiology or legal — jobs that we typically associate with large amounts of human training. But I don’t see it as alarming at all. It will have an impact in different ways on the workforce, but I see that as a kind of great awakening.

But, to answer your question, we already have the ability to apply these techniques and build applications where a system can learn to conduct tasks in a well-defined domain. When you think about the enterprise in the business world, these applications will have tremendous impact and value.

Knowledge@Wharton: In one of your talks, you referred to new ways that fraud could be detected by using AI. Could you explain that?

Sikka: You find fraud by connecting the dots across many dimensions. Already we can build systems that can identify fraud far better than people by themselves can. Depending on the risk tolerance of the enterprise, these systems can either assist senior people whose judgment ultimately prevails, or, the systems just take over the task. Either way, fraud detection is a great example of the kinds of things that we can do with reinforcement learning, with deep neural networks, and so on.

Another example is anything that requires visual identification. For instance, looking at pictures and identifying damages, or identifying intrusions. In the medical domain, it could be looking at radiology, looking at skin cancer identifications, things like that. There are some amazing examples of systems that have done way better than people at many of these tasks. Other examples include security surveillance, or analyzing damage for insurance companies, or conducting specific tasks like processing loans, job applications or account openings. All these are areas where we can apply these techniques. Of course, these applications still have to be built. We are in the early stages of building these kinds of applications, but the technology is already there, in these narrow domains, to have a great impact.

Knowledge@Wharton: What do you expect will be the most significant trends in AI technology and fundamental research in the next 10 years? What will drive these developments?

Sikka: It is human nature to continue what has worked, so lots of money is flowing into ongoing aspects of AI. On chips, in addition to NVidia, Intel, Qualcomm, and others, Google, Huawei, and many startups are building their own AI processors, and all of this is becoming available in cloud platforms. There is tons of work happening in incrementally advancing the core software technologies that sit on top of this infrastructure, like TensorFlow, Caffe, etc., which are still in the early stages of maturity. And this will of course continue.

But beyond this, my sense is that there are going to be three different fronts of development. One will be in building applications of these technologies. There is going to be a massive set of opportunities around bringing different applications in different domains to the businesses and to consumers, to help improve things. We are still woefully early on this front. That is going to be one big thing that will happen in the next five to 10 years. We will see applications in all kinds of areas, and there will be application-oriented breakthroughs.

“The development of AI is asymmetric.”

Two, from a technology perspective, there will be a realization that while the technology that we have currently is exciting, there is still a long way to go in building more sophisticated behavior, building more general behavior. We are nowhere close to building what Marvin [Minsky] called the “society of mind.” In 1991, he said in a paper that these symbolic techniques will come together with the connectionist techniques, and we would see the benefits of both. That has not happened yet.
John [McCarthy] used to say that machine learning systems should understand the reality behind the appearance, not just the appearance.

I expect that more general kinds of techniques will be developed and we will see progress towards more ensemble approaches, broader, more resilient, more general-purpose approaches. My own Ph.D. thesis was along these lines, on integrating many specialists/narrow experts into a symbolic general-purpose reasoning system. I am thinking about and working on these ideas and am very excited about it.

The third area — and I wish that there is more progress on this front — is a broader awareness, broader education around AI. I see that as a tremendous challenge facing us. The development of AI is asymmetric. A few companies have disproportionate access to data and to the AI experts. There is just a massive amount of hype, myth and noise around AI. We need to broaden the base, to bring the awareness of AI and the awareness of technology to large numbers of people. This is a problem of scaling the educational infrastructure.

Knowledge@Wharton: Picking up on what you said about AI development being asymmetric, which industries do you think are best positioned for AI adoption over the next decade?

Sikka: Manufacturing is an obvious example because of the great advances in robotics, in advancing how robots perceive their environments, reason about them, and effect increasingly finer control over them. There is going to be a great amount of progress in anything that involves transportation, though I don’t think we are yet close to autonomy in driving because there are some structural problems that have to be solved.

Health care is going to be transformed because of AI, both the practice of health care as well as the quality of health care: the way we develop medicines (protein binding is a great use case for deep learning), personalized medicines, personalization of care, and so on. There will be tremendous improvement in financial services, where in addition to AI, decentralized/p2p technologies like blockchain will have a huge impact. Education, as an industry, will go through another round of significant change.

There are many industries that will go through a massive transformation because of AI. In any business there will be areas where AI will help to renew the existing business, improve efficiency, improve productivity, dramatically improve agility and the speed at which we can conduct our business, connect the dots, and so forth. But there will also be opportunities around completely new breakthrough technologies that are possible because of these applications — things that we currently can’t foresee.

The point about asymmetry is a broader issue: the fact that a relatively small number of companies have access to a relatively small pool of talented people and to massive amounts of data and computing means the development of AI is very disproportionate. I think that is something that needs to be addressed seriously.

Knowledge@Wharton: How do you address that? Education is one way, of course. Beyond that, is there anything else that can be done?

Sikka: I find it extraordinary that in the traditional industries, for example in construction, you can walk into any building and see the plans of that building, see how the building is constructed and what the structure is like. If there is a problem, if something goes wrong in a building, we know exactly how to diagnose it, how to identify what went wrong. It’s the same with airplanes, with cars, with most complex systems.

“The compartmentalization of data and broader access to it has to be fixed.”

But when it comes to AI, when it comes to software systems, we are woefully behind. I find it astounding that we have extremely critical and extremely important services in our lives where we seem to be okay with not being able to tell what happened when the service fails or betrays our trust in some way. This is something that has to be fixed. The compartmentalization of data and broader access to it has to be fixed. This is something that the government will have to step in and address. The European governments are further ahead on this than other countries. I was surprised to see that the EU’s decision on demanding explainability of AI systems has seen some resistance, including here in the valley.

I think it behooves us to improve the state of the art, develop better technologies, more articulate technologies, and even look back on history to see work that has already been done, to see how we can build explainable and articulate AI, make technology work together with people, to share contexts and information between machines and people, to enable a great synthesis, and not impenetrable black boxes.

But the point on accessibility goes beyond this. There simply aren’t enough people who know these techniques. China’s Tencent sponsored some research recently which showed that there are basically some 300,000 machine learning engineers worldwide, whereas millions are needed. And how are we addressing this? Of course there is good work going on in online education and classes on Udacity, Coursera, and others.  My friend [Udacity co-founder] Sebastian Thrun started a wonderful class on autonomous driving that has thousands of students. But it is not nearly enough.

And so the big tech companies are building “AutoML” tools, or machine learning for machine learning, to make the underlying techniques more accessible. But we have to see that in doing so, we don’t make them even more opaque to people. Simplifying the use of systems should lead to more tinkering, more making and experimentation. Marvin [Minsky] used to say that we don’t really learn something until we’ve learnt it in more than one way. I think we need to do much more on both making the technology easier to access, so more people have access to it, and we demystify it, but also in making the systems built with these technologies more articulate and more transparent.

Knowledge@Wharton: What do you believe are some of the biggest bottlenecks hampering the growth of AI, and in what fields do you expect there will be breakthroughs?

Sikka: As I mentioned earlier, research and availability of talent is still quite lopsided. But there is another way in which the current state of AI is lopsided or bottlenecked. If you look at the way our brains are constructed, they are highly resilient. We are not only fraud identification machines. We are not only obstacle detection and avoidance machines. We are much broader machines. I can have this conversation with you while also driving a car and thinking about what I have to do next and whether I’m feeling thirsty or not, and so forth.

This requires certain fundamental breakthroughs that still have not happened. The state of AI today is such that there is a gold rush around a particular set of techniques. We need to develop some of the more broad-based, more general techniques as well, more ensemble techniques, which bring in reasoning, articulation, etc.

For example, if you go to Google or [Amazon’s virtual assistant] Alexa or any one of these services out there and ask them, “How tall was the President of the United States when Barack Obama was born?” none of these services can answer this, even though they all know the answers to the three underlying questions. But a 5-year-old can. The basic ability to explicitly reason about things is an area where tremendous work has been done over many decades, but it seems largely lost on AI research today. There are some signs that this area is developing, but it is still very early. There is a lot more work that needs to be done. I, myself, am working on some of these fundamental problems.
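
The gap is easy to see with a toy example: each fact below is trivially retrievable on its own, but the answer requires chaining them, which is what pure pattern-matching systems struggle with. The knowledge-base structure here is a hypothetical stand-in:

```python
# Three facts a question-answering system would individually "know".
FACTS = {
    ("Barack Obama", "born"): 1961,
    ("US President", 1961): "John F. Kennedy",
    ("John F. Kennedy", "height_cm"): 183,
}

def president_height_at_birth(person: str) -> int:
    """Chain the lookups: birth year -> president that year -> height."""
    year = FACTS[(person, "born")]
    president = FACTS[("US President", year)]
    return FACTS[(president, "height_cm")]

print(president_height_at_birth("Barack Obama"))  # 183
```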

Knowledge@Wharton: You talked about the disproportionate and lopsided nature of resource allocation. Which sectors of AI are getting the most investment today? How do you expect that to evolve over the next decade? What do traditional industries need to do to exploit these trends and adapt to transformation?

Sikka: There’s a lot of interest in autonomous driving. There is also a lot of interest in health care. Enterprise AI should start to pick up. So there are several areas of interest but they are quite lumpy and clustered in a few areas. It reminds me of the parable of the guy who lost his keys in the dark and looks for them underneath a lamp because that’s where the light was.

But I don’t want to make light of what is happening. There are a large number of very serious people also working in these areas, but generally it is quite lopsided. From an investment point of view, it is all around automating and simplifying and improving existing processes. There are a few developments around bringing AI to completely new things, or doing things in new ways, breakthrough ways, but there is a disproportionate usage of AI for efficiency improvements and automation of existing businesses and we need to do more on the human-AI experience, of AI amplifying people’s work.

“There simply aren’t enough people who know these techniques.”

If you look at companies like Uber or Didi [China’s ride-sharing service] or Apple and Google, they are aware of what is going on with their consumers more or less in real time. For instance, Didi knows every meter of every car ride done by every consumer in real time. It’s the same with Uber and in China, even in physical retail as I mentioned earlier, Alibaba is showing that real-time connection to customers and integration of physical and digital experiences can be done very well.
But in the traditional world, in the consumer packaged goods (CPG) industry or in banking, telecom or retail, where customer contact is necessary, businesses are quite disconnected from what the true end-user is doing. It is not real time. It is not large-scale. Typically, CPG companies still analyze data that is several months old. Some CPG companies still get DVDs from behavioral aggregators three months later.

I think an awareness of that [lag] is building in businesses. Many of my friends who are CEOs of large companies in the CPG world, in banking, pharmaceuticals and telecom, are trying to now embrace new technology platforms that bring these next generation technologies to life.  But beyond embracing technology, and deploying a few next-generation applications, my sense is, the traditional companies really need to think of themselves as technology companies.

My wife Vandana started and built up the Infosys Foundation in the U.S., and her main passion is computer science education. [She left the foundation in 2017.] She found this amazing statistic that in the dark ages some 6% of the world’s population could read and write, but if you think about computing as the new literacy, today some half a percent of the world’s population can program a computer.

We are finally approaching 90% literacy in the world, and of course we are not all writers or poets or journalists, but we all know how to write and to read, and it has to be the same way with computing and digital technologies, and especially now with AI, which is as big a shift for us as computing itself.

So businesses need to reorient themselves from “I am an X company,” to “I am a technology company that happens to be in X.” Because if we don’t, we may be vulnerable to a tech company that better sees and executes and scales on that X, as we have already seen in many industries. The iPhone isn’t so much a phone as it is a computer in the shape of a phone. The Apple Watch isn’t a watch, but a computer, a smart computing service, in the shape of a watch. The Tesla is not so much an electric car, but rather a computer, an intelligent, connected, computing service, in the shape of a car. So simply making your car electric is not enough.

“The iPhone isn’t so much a phone as it is a computer in the shape of a phone.”

Too often companies don’t transform, and they become irrelevant. They may not die immediately. Indeed large, successful, complex structures often outlive us humans, and die long slow deaths, but they lose their relevance to the new very quickly. Transformations are difficult. One has to let go of the past, of what we have known, and embrace something completely new, alien to us. As my friend and teacher [renowned computer scientist] Alan Kay said, “We only make progress by going differently than we believe.” And of course we have to do this as individuals as well. We have to continually learn and renew our skills, our perspectives on the world.

Knowledge@Wharton: How should companies measure the return on investment (ROI) in AI? Should they think about these investments in the same way as other IT investments or is there a difference?

Sikka: First of all, it is good that we are applying AI to things where we already know the ROI. I was talking to a friend recently, and he said, “In this particular part of my business, I have 50,000 people. I could do this work with one-fourth the people, at even better efficiency.” In such a situation, the ROI is clear. In financial services, one area that has become exciting is active trading in asset management. People have started applying AI here. One hedge fund wrote about the remarkable results it got by applying AI.

A start-up in China does the entire management of investments through AI. There are no people involved and the company delivers breakthrough results.

So, that’s one way. Applying AI to areas where the ROI is clear, where we know how much better the process can become, how much cheaper, how much faster, how much better throughput, how much more accurate, and so on. But again this is all based on the known, the past. We have to think beyond that, more broadly than that. We have to think about AI as becoming an augmentation for every one of our decisions, every one of the questions that we ask, and have that fed by data and analyzed in real time. Instead of doing generalizations or approximations, we must insist on AI amplifying all of our decisions. We must bring AI to areas where we don’t yet have ROIs clearly identified or clearly understood. We must build ROIs on the fly.

Knowledge@Wharton: How does investment in AI in the U.S. compare with China and other parts of the world? What are the relative strengths and weaknesses of the U.S. and Chinese approaches to AI development?

Sikka: I’m very impressed by how China is approaching this. It is a national priority for the country. The government is very serious about broad-based AI development, skill development and building AI applications. They have defined clear goals in terms of the size of the economy, the number of people, and the leadership position. They actively recruit [AI experts]. The big Chinese technology companies are [attracting] U.S.-based, Chinese-origin scientists, researchers and experts who are moving back there.

In many ways, they are the leaders already in building applications of AI technology, and are doing leading work in technology as well. When you think about AI technology or research, the U.S. and many European universities and countries are still ahead. But in terms of large-scale applications of AI, I would argue that China is already ahead of everybody else in the world. The sophistication of their applications, the scale, the complex conditions in which they apply these, is simply extraordinary. Another dimension of that is the adoption. The adoption of AI technology and modern technology in China, especially in rural areas, is staggering.

Knowledge@Wharton: Could you give a couple of examples of what impressed you most?

Sikka: Look at the payments space — at Alipay, WeChat Pay or other forms of payments from companies like Ping An Insurance, as well as Alibaba and Tencent. It’s amazing. Shops in rural China don’t take cash. They don’t take credit cards. They only do payments on WeChat Pay or on Alipay or others like that. You don’t see this anywhere else in the world at nearly the same scale.
Bike rentals are another example. In the past year, there has been an extraordinary development in China around bicycles.

When you walk into a Chinese city, you see tens of thousands of bicycles across the landscape — yellow ones, orange ones, blue ones. When you look at these bicycles, you think, “This is a smart bicycle.” It is another example of an intelligent, connected computing service in the shape of a bicycle. You just have to wave your phone at it with your Baidu account or your Alibaba account or something like that and you can ride the bike. It has GPS. It is fully connected. It has all kinds of sensors inside it. When you get to your destination, you can leave the bike there and carry on with whatever you need to do. Already in the last nine months, this has had a huge impact on traffic.

“The adoption of AI technology and modern technology in China, especially in rural areas, is staggering.”

If you walk into any of Alibaba’s Hema supermarkets in Beijing and Shanghai (I think they have around 20 of these already, teeming with people), you see they are far ahead of any retail experience in the US today, including at Whole Foods. The entire store is integrated into mobile experiences, so you can wave your phone at any product on the shelf and get a complete online experience. There is no checkout; the whole experience is on mobile and automated, although there are lots of folks there to help customers. The store is also a warehouse; in fact it serves some 70% of demand from local online customers, and fulfills that demand in less than an hour.

My friend ordered a live fish from the store for dinner, and that particular fish he had picked on his phone was delivered 39 minutes later. Tencent has now invested in a supermarket company. And JD has its own stores. So this is rapidly evolving. It would be wonderful to see convenience like this in every supermarket around the world in the next few years.

A more recent example is battery chargers. All across China, there are little kiosks with chargers inside. You can open the kiosk by waving your phone at it, pick up a charger, charge your phone for a couple of hours, and then drop it off at another kiosk wherever you are. What I find impressive is not that somebody came up with the idea of sharing based on connected phone chargers, but how rapidly the idea has been adopted in the country and how quickly the landscape has adapted itself to assimilate this new idea. The rate at which the generation [of ideas] happens, gets diffused into the society, matures and becomes a part of the fabric is astounding. I don’t think people outside of China appreciate the magnitude of what is going on.

When you walk around Shenzhen, you can see the incredible advances in manufacturing, electronic device manufacturing, drones and things like that. I was there a few weeks ago. I saw a drone that is smaller than the tip of your finger. At the same time, I saw a demo of a swarm of a thousand or so drones which can carry massive loads collectively. So it is quite impressive how broadly the advance of AI is being embraced in China.

“The act of innovating is the act of seeing something that is not there.”

At the other end of the spectrum, I would say that in Europe, especially in Germany, the government is much more rigorous and thoughtful about the implications of these technologies. From a broader, regulatory and governmental perspective, they seem to be doing a wonderful job. Henning Kagermann, who used to be my boss at SAP for many years, recently shared with me a report from the ethics commission on automated and connected driving. The thoughtfulness and the rigor with which they are thinking about this is worth emulating. Many countries, especially the U.S., will be well served to embrace those ideas.

Knowledge@Wharton: How does the approach of companies like Apple, Facebook, Google, Microsoft and Amazon towards AI differ from that of Chinese companies like Alibaba, Baidu, or Tencent?

Sikka: I think there is a lot of similarity, and the similarities outweigh the differences. And of course, they’re all connected with each other. Tencent and Baidu both have advanced labs in Silicon Valley. And so does Alibaba. JD, which is a large e-commerce company in China, recently announced a partnership around AI with Stanford. There’s a lot of sharing and also competitive aspects within these companies.

There are some differences. The U.S. companies are interested in certain U.S.-specific or more international aspects of things. The Chinese companies focus a lot on the domestic market within China. In many ways, the Chinese market offers challenges and circumstances that are even more sophisticated than the ones in the U.S. But I wouldn’t say that there is anything particularly different between these companies.

If you look at Amazon and Microsoft and Google, their advances, when it comes to bringing their platforms to the enterprise, are further ahead than the Chinese companies. Alibaba and Tencent have both announced ambitions to bring their platform to the enterprise. I would say that in this regard, the U.S. companies are further ahead. But otherwise, they are all doing extraordinary work. The bigger issue in my mind is the gap between all of them and the rest of the companies.

Knowledge@Wharton: Where does India stand in all of this? India has quite a lot of strengths in the IT area, and because of demonetization there has been a strong push towards digitization. Do you see India playing any significant role here?

Sikka: India is at a critical juncture, a unique juncture. If you look at it from the perspective of the big U.S. companies or the big Chinese companies, India is by far their largest market. We have a massive population and a relatively large amount of wealth. So, there is a lot of interest in all these companies, and consequently their countries, towards India and developing the market there. If that happens, then of course the companies will benefit. But it’s also a loss of opportunity for India to do its own development through educating its workforce on these areas.

One of the largest populations that could be affected by the impact of AI in the near-term is going to be in India. The impact of automation in the IT services world, or broadly in the services world, will be huge from an employment perspective. If you look at the growth that is happening everywhere, especially in India, some people call it “jobless growth.” It’s not jobless. It’s that companies grow their revenues disproportionately compared to the growth in the number of employees.

“Finding the problem, identifying the innovation — that will be the human frontier.”

There is a gap that is emerging in the employment world. Unless we fix the education problem it’s going to have a huge impact on the workforce. Some of this is already happening. One of the things I used to find astounding in Bangalore was that a lot of people with engineering degrees do freelance jobs like driving Uber and Ola cabs. And yet we have tremendous potential.

The value of education is central to us in India, and we have a large, young, generation of highly inspired youngsters ready to embrace and shape the future, who are increasingly entrepreneurial in their outlook. So we have to build on foundations like the “India stack,” we have to build our own technological strengths, from research and core technology to applications and services. And a redoubling of the focus on education, on training massive numbers of people on technologies of the future, is absolutely critical.

So, in India, we are at this critical juncture, where on one hand there is a massive opportunity to show a great way forward, and help AI be a great amplifier for our creativity, imagination, productivity, indeed for our humanity. On the other hand, if we don’t do these things, we could be victims of these disruptions.

Knowledge@Wharton: How should countries reform their education programs to prepare young people for a future shift by AI?

Sikka: India’s Prime Minister Narendra Modi has talked about this a lot. He is passionate about this idea of job creators, not just job seekers, and about a broad culture of entrepreneurship.

I’m an optimist. I’m an entrepreneur. I like to see the opportunity in what we have, even though there are some serious issues when it comes to the future of the workforce. My own sense is that in the time of AI, the right way forward for us is to become more evolved, more enlightened, more aware, more educated, and to unleash our imagination, to unleash our creativity.

John McCarthy was a great teacher in my life. He used to say that articulating a problem is half its solution. I believe that in our lifetime, certainly in our children’s lifetime, we will see AI technology advance to the point where any task, any activity, any job, any work that can be precisely formulated and precisely articulated, will be done automatically, far better than we can do with our senses and our muscles. However, articulating the problem, finding the problem, identifying the innovation — that will be the human frontier. It is the act of seeing something that is not there. The act of exercising our creativity. And then, using AI to become a great amplifier, to help us achieve our imagination, our vision. I think that is the great calling of our time. That is my great calling.

Five or six hundred million years ago, there was this unusual event that happened geologically. It was called the Cambrian explosion. It was the greatest creation of life in the history of our planet. Before that, the Earth was basically covered by water. Land had started to emerge, and oxygen had started to emerge. Life, as it existed at that point, was very primitive. People wondered, “How did the Cambrian explosion happen? How did all these different life forms show up in a relatively small period of time?”

What happened was that the availability of oxygen, the availability of land, and the availability of light as a provider of life, as a provider of living, created a situation which formed all these species that had the ability to see. They all came out of the dark, out of the water, onto the land, into the air, where opportunities were much more plentiful, where they could all grow, they could all thrive. People wonder, “What were they looking for?” It turns out they were looking for light. The Cambrian explosion was about all these species looking for light.

When I think about the future, about the time in front of us, I see another Cambrian explosion. The act of innovating is the act of seeing something that is not there. Our eyes are programmed by nature to see what is there. We are not programmed to see what is not there. But when you think about innovation, when you think about making something new, everything that has ever been innovated was somebody seeing something that was not there.

I think the act of seeing something that is not there is in all of us. We can all be trained to see what is not there. It is not only a Steve Jobs or a Mark Zuckerberg or a Thomas Edison or an Albert Einstein who can see something that is not there. I think we can all see some things that are not there. To Vandana’s statistic, we should strive to see a billion entrepreneurs out there. A billion-plus computer literate people who can work with, even build, systems that use AI techniques, and who can switch their perspective from making a living to making a life.

When I was at Infosys, we trained 150,000 people on design thinking for this reason: To get people to become innovators. In our lifetime, all the mechanical, mechanizable, repeatable things are going to be done way better by machines. Therefore, the great frontier for us will be to innovate, to find things that are not there. I think that will be a new kind of Cambrian explosion. If we don’t do that, humanity will probably end.

Paul MacCready, one of my heroes and a pioneer in aerospace engineering, once said that if we don’t become creative, a silicon life form will likely succeed us. I believe that it is in us to refer back to our spirituality, to refer back to our creativity, our imagination, and to have AI amplify that. I think this is what Marvin [Minsky] and John [McCarthy] were after and it behooves us to transcend the technology. And we can do that. It is going to be tough. It is going to require a lot of work. But it can be done. As I look at the future, I am personally extremely excited about doing something in that area, something that fundamentally improves the world.

View at the original source

Behavioral science in business: Nudging, debiasing, and managing the irrational mind 03-05


Behavioral science has become a hot topic in companies and organizations trying to address the biases that drive day-to-day decisions and actions.



Image credit : Shyam's Imagination Library



Although humans are known to be irrational, they are at least irrational in predictable ways. In this episode of the McKinsey Podcast, partner Julia Sperling, consultant Magdalena Smith, and consultant Anna Güntner speak with McKinsey Publishing’s Tim Dickson about how companies can use behavioral science to address unconscious bias and instincts and manage the irrational mind. Employing techniques such as “nudging” and different debiasing methods, executives can change people’s behavior—and have a positive effect on business—without restricting what people are able to do. 

Podcast transcript


Hello and welcome to this edition of the McKinsey Podcast with me, Simon London. It’s not new news that a lot of what drives human behavior is often unconscious and often irrational. We go back to the end of the 19th century and find Sigmund Freud trying to describe our unconscious and intervene on at least what he thought was more or less a scientific basis.
The good news is that our understanding of the unconscious mind has come a long way, grounded in decades of basic research into what drives ordinary, everyday human behavior. These are the biases, the heuristics, the rules of thumb that determine the great majority of our day-to-day decisions without us even being aware. So, yes, we can agree with Freud that we are often irrational, but as today’s behavioral scientists like to say, we are predictably irrational. What can be predicted can be managed, at least to some degree.
Today’s conversation is hosted by my McKinsey Publishing colleague Tim Dickson. You’ll be hearing Tim in conversation with Julia Sperling, who is a neuroscientist by training and a McKinsey partner based in Frankfurt. Tim will also be speaking with Magdalena Smith, an organization and people-analytics expert based in London, and Anna Güntner, who is a consultant based in Berlin. Without further ado, over to Tim.
Tim Dickson: Julia, Magdalena, and Anna, thanks so much for being here today.
Julia Sperling: Great pleasure.
Anna Güntner: Happy to be here.
Magdalena Smith: Thank you for having us.
Tim Dickson: The study of human behavior isn’t really new, and it’s been widely accepted since at least Sigmund Freud that a lot of what drives human behavior is in fact unconscious. So, Julia, what’s new about behavioral science, and why should executives take note?
Julia Sperling: Of course, you’re right. Human psychology has been explored and used for management purposes for the past, I’d say, over 100 years already. You’re also right that Freud gave us a very deep insight into the human mind and how it works. The issue had always been, though, that while Freud’s insights have been very useful, they have been very hard to implement because they were so deep and hard to grasp and hard to alter.
Now we have the insights that people are predictably irrational, but we also have the tools coming out of it to help alter behavior and to help guide behavior. What we use is the insight not only from behavioral sciences but also from neurosciences, most recently.
I can tell you the human brain is spectacular. At any point in time, over 11 million bits of information hit our brain, and it’s able to filter them down to about 50 only. Then seven to ten of them can be kept in short-term memory. Of course, with this enormous filtering exercise that it does, we cannot consciously make choices all the time. A lot has to happen very unconsciously. And, by the way, that’s a very different unconscious from the unconscious that Freud has been talking about.
Tim Dickson: So, Julia, what are the main applications of behavioral science for companies?
Julia Sperling: Well, number one, performance management. You can identify factors that actually hinder performance as well as those that foster it. Money, as we should already know, is not always the best motivator. The second piece is recruiting and succession planning. Here, machine learning has a much stronger ability to predict future success than the people who have been, for example, screening or selecting CVs in the past. And then last, cultures, be it for merger management, a general cultural change that you could see with bringing agility or more diversity to an institution, or something as targeted as introducing a safety culture, for example.
“With nudges—subtle interventions based on insights from psychology and economics—we can influence people’s behavior without restricting it.”
Tim Dickson: Anna, I know you’re an expert on nudging. Can you tell us exactly what nudging is and a little bit of the context for a company thinking about this?
Anna Güntner: The general idea behind nudging as well as debiasing is that people are predictably irrational. Now, with nudges—subtle interventions based on insights from psychology and economics—we can influence people’s behavior without restricting it.
With a nudge, we could get people to do whatever is best for them, without prohibiting anything or imposing fines or restricting their behaviors in any other hard way. In terms of nudging, there are different applications for companies. One certainly is marketing, and marketers have been using similar approaches for a long, long period of time.
Tim Dickson: What do you say if executives are squeamish about this and worry about nudging behaviors—changing behaviors—that may potentially be used for malignant purposes and worry that they might find sensitivities among their employees?
Julia Sperling: It highly depends on what type of nudge is used and the intent with which you use it. It is much more a function of, is the behavior that you’d like to see in your company something that is in line with your company values, that is in line with what your company stands for? That’s the decision executives have to make. Nudging is then merely a technique to make this behavior more likely, but it’s a choice of the behavior that makes the difference.
Anna Güntner: Another area of application, in particular, is safety culture. In terms of irrational thinking, this of course is absolutely something irrational—to risk your life by not sticking to the procedures.
With behavioral science, companies are able to move away from the backward-looking approach, where after something happens, you try to understand what the reasons were and take them out, to something forward looking, where you try not to attack people’s mind-sets but to change the environment in a way that makes it simpler and more intuitive for people to follow safety procedures.
One of the problems that construction companies have is that managers, once they become promoted, stop wearing the helmet, as a sign of superiority to the workers. A nudge that’s implemented by some companies is that the managers get a helmet of a different color. They use the same status bias but in a different way to help people to stick to safety procedures.
Tim Dickson: Understood. So that’s about unleashing particular behaviors. But sometimes you have to fight behaviors and biases. Magdalena, I know that’s something that you know about, and you’ve seen this in action in the workplace. Can you talk about that aspect of the situation?
Magdalena Smith: As Anna mentioned, we’re not always rational, and sometimes that rationality—or lack of rationality, rather—has a real impact on the decisions that we make. That can be extremely costly for organizations.
We have recently worked on an incredibly interesting project, where we worked with a global asset manager trying to identify the decision-making biases that their fund managers have and thereby also see what impact they have on the underlying performance of the funds.
We did that by using the data available in trading and looking at their behavior, looking at individual trades. In combination with this and analyzing the underlying decision-making process in more detail, we could identify which trades were less optimal than others.
Looking at those and looking at the potential improvement of those, if you reduced the effect, it really could show you the direct dollar impact that overcoming these biases had. They were significant. You’re talking about 100 to 200 basis points per year for a fund manager in extra alpha on an equity fund. That is billions for a company like this over the next three to four years.
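
A quick sanity check on those magnitudes, using a hypothetical fund size:

```python
# Cumulative gain from extra annual return ("alpha") measured in basis
# points; the $50B fund size is an assumption for illustration.
def extra_alpha(aum: float, bps: float, years: int) -> float:
    rate = bps / 10_000  # 100 bps = 1 percentage point
    return aum * ((1 + rate) ** years - 1)

# A $50B fund earning an extra 150 bps per year for 4 years:
print(f"${extra_alpha(50e9, 150, 4) / 1e9:.1f}B")  # ~$3.1B
```
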
“If you want to have a diverse set of leaders in the future, you have to be aware of those little biases and fight them.”
Julia Sperling: I have a lot of clients asking—in particular with regard to their diversity efforts—how they can minimize unconscious bias. It starts with the recruiting processes, behavioral design of how to make them function in a way that doesn’t favor those—we call it a “mini me” bias—who have always been recruited to the company before and would be recruited all the time again. Because again, our human brain is biased, and we enjoy having those that remind ourselves of us around us.
If you want to replicate a homogenous leadership group again and again and again, don’t intervene. But if you want to have a diverse set of leaders in the future, you have to be aware of those little biases and fight them, as we said, right at the start of your recruiting process.
In Germany, together with about 20 other companies, we work in an initiative called Chefsache that wants to bring more women into leadership positions and create gender balance. As one of the focus topics, we looked into unconscious bias within talent processes. When you look into recruiting, for example, even with the best intentions, there was what we talked about—this mini-me bias. People make choices, make biased choices, and might miss out on talent because of those.
One of the debiasing techniques that we use, for example, is that after we’ve seen a case and we have a team speak about what they’ve seen, we now never let the most senior person in the room speak first, because there’s something called the “sunflower” bias, which is once the sun speaks, the flower follows. That means that in this group, people would more likely adopt [the senior person’s position], maybe even a different position from the one that they had before.
Another intervention is to combat the bias that occurs—in recruiting, for example—called groupthink. You make people fill out a statement on the candidate themselves before they enter the group discussions, because science has also shown that once a group starts adopting a certain opinion, it’s very hard for the individuals that haven’t spoken yet to bring in another thought or have another opinion. There we’d say, never let the most senior person in the room speak first. Make sure that everyone notes the opinion right after having seen the recruitment candidate and before sharing their opinion.
Magdalena Smith: One of the areas that is growing very fast within debiasing and within nudging is the concept of advanced analytics and machine learning. That has particularly been used, for example, when it comes to identifying talents, behaviors, and future potentials and very much used in trying to identify who the great performers are going to be in the future and where they can be found.
To follow on in your example regarding recruitment, we’ve seen a global service company that wanted to make the recruitment process more efficient. The way they did this was by using machine learning to identify which type of candidate would automatically go through to a round of interviews.
This automatically put forward the top 5 percent of candidates. One of the very positive side effects of this, which wasn’t actually planned, but it was fantastic, was that the number of women that were put through to the first interviews increased massively.
Tim Dickson: But technology has its own biases as well. What would you say to that?
Magdalena Smith: If we look at what machine learning is, it is trying to find objective insights from data through algorithms, advanced statistical algorithms. Unfortunately, those algorithms have to be programmed somehow, and they're programmed by humans.
What you very quickly see is that assumptions creep into the algorithms. You also see assumptions made where you have missing data: you have to impute numbers, putting in a value or an assumption that then gets amplified throughout.
Julia Sperling: That's why you can—and have to—check very carefully whether your algorithms are working. By the way, when we use them in succession planning, for example, or even in recruiting, we always advise our clients to look back at the past and see whether those algorithms, had they been used in recruiting already, would have predicted the success of the people in their positions right now.
Magdalena Smith: Absolutely.
Julia Sperling: Right? So, one has to reality check very carefully every algorithm one puts in place. That’s one very practical example of how to do it.
Tim Dickson: Let’s talk about a different area of application, for example, merger management. I think you’ve seen biases at work and how to counteract them in that situation, Anna.
Anna Güntner: In merger management, the challenge that a lot of mergers—we could even say every merger—face is that you try to bring together two different corporate cultures and get them to function as one. In that case, there are many biases at play, especially the in-group/out-group bias.
But there are also tools—debiasing techniques but also nudging techniques—that can help us prime or create a new common identity. These can be very simple interventions, for example, when you think about how to bring new teams together: What can you do to force exchange between people who barely know each other?
Tim Dickson: Julia, you mentioned the context of performance management. Anna, I know you have an example of a counterintuitive insight from that area.
Anna Güntner: In traditional management approaches, we tend to assume that money is the biggest motivator—that if you pay your employees more, then they will work more. Now we know that money is actually a hygiene factor: you have to pay people enough, but different things motivate them, like, for example, meaningful work, acknowledgment, and social factors. Extrinsic motivation, if it's given for something that in the beginning was not for sale, or if it's too low, can even reduce intrinsic motivation, like the enjoyment or self-fulfillment of work. Also, we know that so-called performance-based pay, where you are paid depending on the result of your work, is actually detrimental for creative work, because it makes people think narrowly in a particular direction, whereas for creativity you need to think broadly.
“One of the insights from behavioral economics that a lot of companies are now exploring is to separate developmental feedback from evaluative feedback.”
Another assumption that you would typically have is that you need to give people honest feedback. You need to tell them what they’re doing well, what they’re doing not so well, and how to improve it. But there is a lot of research that shows that people shut off and even try to avoid those from whom they have received such constructive feedback. One of the insights from behavioral economics that a lot of companies are now exploring is to separate developmental feedback from evaluative feedback.
Tim Dickson: Taking a step back and thinking about some of the broader challenges for CEOs and senior executives coming to this for the first time, what would you list as the key challenges?
Anna Güntner: One of the challenges is that you need to adopt a so-called evidence-based-management mind-set. You need to be ready to test the things that you promote, whether debiasing algorithms, nudging, or anything else, on large samples of data, rather than doing it the way it is usually done—in the past or even today—when a lot of intelligent people get in a room, discuss, and then come out with a decision, which is then rolled out all across the organization.
If we take the example of nudging, it's rather like running an A/B test. You have one group of people who aren't exposed to the nudge and another group who are. Then you can measure the difference in behavior that hopefully occurs between these two groups and also assess the profit impact.
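As a concrete illustration of that A/B logic, the measurement step might look like the minimal sketch below. All figures are made up for the example; nothing here comes from any client study:

```python
# Minimal sketch of the A/B evaluation described above, using synthetic data.
# "control" never sees the nudge; "nudged" does. All numbers are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)
control = rng.normal(loc=100.0, scale=15.0, size=5000)  # e.g., weekly sales per employee
nudged = rng.normal(loc=103.0, scale=15.0, size=5000)   # assumed small behavioral uplift

lift = nudged.mean() - control.mean()
t_stat, p_value = stats.ttest_ind(nudged, control, equal_var=False)  # Welch's t-test

print(f"estimated lift: {lift:.2f} units per employee (p = {p_value:.3g})")
# Profit impact would then be: lift x value per unit x number of people nudged.
```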
So that’s one. Number two is that it’s still not very intuitive for many companies to think in terms of behaviors. Very often, we think in terms of KPIs [key performance indicators]—for example, customer satisfaction or sales—so it takes some conscious effort to bring it down to the kind of behavior you’re trying to change.
Julia Sperling: Very often, behaviors are put into one box together with mind-sets, and core business is put into a very different box. Putting those boxes together into one, and showing how behaviors—and it's nothing but behaviors that ultimately drive an outcome in an organization—can be assessed, influenced, elicited, fostered, et cetera, in the same stringent way as business processes, can be new for many executives.
Magdalena Smith: I'd like to add that debiasing is hard. It's difficult. Just knowing that you have certain biases isn't sufficient. A lot of people acknowledge that biases have a massive effect on decision making but don't first acknowledge that they have biases themselves, which is a bias in its own right: overconfidence. Even once you've identified a certain bias, you often need some form of external help. In hospitals, for example, they use checklists to make sure they don't miss anything and don't make unwarranted assumptions about patients. These are props that can help clinicians overcome some of the biases they may have.
There was some very interesting research out of the United States last year that looked at the mistakes made in hospital accident-and-emergency admissions between 2000 and 2009. There were hundreds of thousands of mistakes that the researchers specifically put down to biases, the main one being "anchoring": clinicians take the first piece of information that comes in and stick to it rather than exploring other problems the patient could have. They estimated that this had an impact of 100,000 lives a year. Being able to save another 100,000 people a year—I think that should be motivation enough to try to use these kinds of methodologies.
Julia Sperling: This is more and more becoming a hot topic. When you look at international institutions, they're not only starting to deploy these approaches at larger scale; they're even building their own behavioral-insights units. They are actively recruiting behavioral psychologists and behavioral economists to work with them. Those units are being built as we speak.
“You need to have a deep understanding of your business and the opportunity to truly understand the precise behavior that leads to the unwanted outcomes.”
Tim Dickson: Is it a question of hiring behavioral economists, or can companies generate an understanding themselves and do this themselves without the very deep academic understanding of this field?
Julia Sperling: It takes a couple of different skills. Number one, it takes a deep understanding of analytics and the ability to use data at scale; as Anna mentioned, you compare A to B when you do nudging. You need to be able to set up these types of trials and to process them properly. There is an analytical capability that you need to have and to build.
Number two, and this might be the even more challenging one, is you need to have a deep understanding of your business and the opportunity to truly understand the precise behavior that leads to the unwanted outcomes or the precise behavior that gives you exactly the outcome that you want. So, you need a deep understanding of your business, the way that your people are currently behaving, and the way you would need them to behave in order to fulfill the strategic and organizational goals that you have.
And then, of course, number three, you need the professions that I've been talking about before. You need people who come up with a whole library—and McKinsey has one with over 150 different interventions that are linked to certain nudges that have proved to work in companies in the past. You then deploy this database against the precise behavior that you've identified as yielding the business outcome. And you use the analytics to track the impact over time. Those are the three main capabilities that you need to build.
Tim Dickson: I’m afraid that’s all we have time for. But thanks very much to Julia Sperling, Magdalena Smith, and Anna Güntner for a fascinating discussion. Thanks to you, our listeners, for joining us. 

The Airbnb Effect: Cheaper Rooms for Travelers, Less Revenue for Hotels 03-06


Hotels enjoy their highest profits when rooms are most in demand, like during holidays and big events. Unfortunately for them, Airbnb is taking away some of that pricing power, according to new research by Chiara Farronato and Andrey Fradkin. 


Image credit: Shyam's Imagination Library


Airbnb is revolutionizing the lodging market by keeping hotel rates in check and making additional rooms available in the country's hottest travel spots during peak periods when hotel rooms often sell out and rates skyrocket, a new study shows.

That's bad news for hotels, which have traditionally earned their biggest margins when rooms were scarce and customers were forced to pay higher rates—such as in Midtown Manhattan on New Year's Eve. And it's good news for travelers who don't have to pay through the roof to get a roof over their heads during holidays or for big events.

"The benefits to travelers and the reduction in pricing power of hotels is really concentrated in particular cities during certain times," says Chiara Farronato, a co-author of the study. "When hotels are fully booked, Airbnb expands the capacity for rooms."

Released today, the research shows that in the 10 cities with the largest Airbnb market share in the US, the entry of Airbnb resulted in 1.3 percent fewer hotel nights booked and a 1.5 percent loss in hotel revenue.

The paper, The Welfare Effects of Peer Entry in the Accommodation Market: The Case of Airbnb, was written by Farronato, a Harvard Business School assistant professor, and Andrey Fradkin, postdoctoral fellow at the Initiative on the Digital Economy at the Massachusetts Institute of Technology.

"You might find a Fifth Avenue apartment or a place by the beach at a more reasonable price than you would if Airbnb wasn't an option"

Competition between traditional hotels and Airbnb is intensifying. Last Friday, Airbnb announced it is expanding its "experiences" offerings to an additional 1,000 cities. Meanwhile, the lodging industry is not only adding its own offerings, but stepping up lobbying efforts in local and federal circles for stricter regulations governing Airbnb.

The study focused on data from 2014, and the impact on hotels could be even greater today given Airbnb's strong growth since then.

In addition to access to more rooms, travelers reaped other rewards in places where Airbnb competed with hotels, the study shows. During busy travel times, guests enjoyed an average "consumer surplus" of $57 per night. This surplus didn't necessarily amount to more money in a visitor's pocket, but it did mean better accommodations at more reasonable prices, Farronato explains.

"Consumers don't always pay a lower price," Farronato says. "What changes is the quality of the listings. You might find a Fifth Avenue apartment or a place by the beach at a more reasonable price than you would if Airbnb wasn't an option. Or a listing might have additional amenities, like a kitchen. And if you still prefer a hotel room, competition from Airbnb means you'll pay a lower price for it."

Airbnb's rapid growth


Airbnb, an online community marketplace where people can list and book short-term lodging accommodations around the world, was founded in 2008 and has grown rapidly at a time when plenty of other industry-disrupting platforms have flourished, including Uber, Craigslist, and Spotify.

Airbnb offers listings in 191 countries, and its total number of listings—4 million—is higher than the top five major hotel brands combined.

To compare the performance of hotels versus Airbnb, the researchers used hotel data from STR, which tracks more than 161,000 hotels, as well as proprietary data provided by Airbnb, creating "the perfect setup to study market competition between new online platforms and traditional service providers," Farronato says. They studied prices and occupancy rates in 50 major US cities between 2011 and 2014, targeting markets with the largest number of hotels.

During the study period, Airbnb made a relatively small dent in the overall short-term accommodations market. Its rooms represented 4 percent of all guests and less than 1 percent of total housing units across all cities. And Airbnb didn't have much effect on hotel occupancy rates overall. Since Airbnb bookings occurred especially when hotels were already near full capacity, a large share of these bookings—between 40 and 60 percent—would not have been made at hotels if Airbnb wasn't an option.

The San Francisco-based home-sharing platform still made its mark on the hotel industry, however. The researchers found that Airbnb's growth through 2014 reduced hotel variable profits by up to 3.7 percent in the 10 US cities with the largest Airbnb presence.

This effect was particularly strong in cities with limited hotel capacity during peak demand days. On those days, hotel room prices were affected relatively more than occupancy rates, meaning that a hotel in one of these cities might still be fully booked during a peak period, but the competition from Airbnb may have forced the hotel to lower its rates for those rooms.

Airbnb rooms were more plentiful in cities with a big demand for accommodations, as well as areas with higher-priced hotels, like New York, Los Angeles, and San Francisco. In other places such as Oklahoma City and Memphis, however, listings were sparse by comparison.

"It's important to note that not all cities are affected by Airbnb," Farronato says. "In Atlanta or Houston, there are enough hotel rooms to satisfy the demand, so peer hosts don't find it attractive to enter the market as much there."

Within each city, more Airbnb rooms cropped up during popular travel periods, such as Christmas and the summer. Sports games, festivals, and other events also led to a spike in listings. In Cambridge, Mass., the biggest listing period came during college graduation time.

And that's the beauty of Airbnb for hosts: They can respond quickly to market conditions, keeping their homes for private use when prices are low and hosting travelers only when the demand for rooms—and the payoff from renting them—is highest.

"As a host, you might not want to risk renting out your place for just $80 a night," Farronato says. "But when the pope comes to Philly, and hotel prices are $200, it becomes worth your while to put your spare room out for rent. Airbnb hosts are in this sweet spot where they can take advantage of only the high-demand periods and stay out of the market at other times."

Hotels fight back


Lodging groups have not taken Airbnb's incursions lightly. Starting in 2016, the American Hotel and Lodging Association backed efforts by the Federal Trade Commission and the state of New York to investigate Airbnb's impact on local housing prices, according to The New York Times. The AHLA also launched a campaign to portray Airbnb hosts as being, in reality, commercial operators looking to compete illegally with hotels.

As margin pressure increases from Airbnb properties over time, hotels will be forced to step up the competition even more. The problem: fixed investment costs. The demand for rooms is always fluctuating, but it's not efficient for hotels to build enough capacity to satisfy the peaks, so they are challenged with finding the right middle ground.

"When the pope comes to Philly, and hotel prices are $200, it becomes worth your while to put your spare room out for rent"

"If you have too much capacity, you will have a lot of empty rooms most of the time," Farronato says. "And if you have too little capacity, you won't be able to satisfy the demand, and Airbnb hosts will come in and drive prices down when demand is high."

Farronato says home-sharing platforms are likely to gain even more ground over time as consumers become increasingly aware of their benefits, so it's important for hotels to find creative ways to compete. At the same time, as cities add home-sharing regulations, both the benefits of Airbnb to consumers and hosts and the effects on hotels will likely become less pronounced.

Just as Airbnb is adding experience packages to its home-rental offerings, so too are hotels such as Marriott International. And maybe hotels could even find ways to alter their building spaces on the fly to accommodate the peaks and valleys of consumer demand.

"You could have rooms that quickly and dynamically change from hotel rooms into conference rooms. So you can have this flexible capacity of rooms that are available on New Year's Eve, but become conference spaces at other times," Farronato says. "It requires a whole new way of designing things. It's all worth thinking about."
Reproduced from Harvard Business Working Knowledge



Manmohan Singh donates 3,500 books from his personal library to his alma mater 04-12


India's former Prime Minister Dr Manmohan Singh has donated 3,500 books from his personal collection to his alma mater Panjab University (PU).



According to university authorities, arrangements will soon be made to transport the books, memorabilia, photographs, and paintings from New Delhi to the university campus.
As per the IANS report, the books and other objects will be kept in the Guru Teg Bahadur Bhawan on the university campus.

Here's what a professor in the Department of History told IANS:

"The 3,500 books and memorabilia, which include photographs and paintings, will be housed in Guru Teg Bahadur Bhawan. Until the place is ready for the installation, books and memorabilia will be kept in the main library," she said.
"It will be developed as a library where there would be a reading area where anybody can come and have a look at the material," she added.





Can the Minerva Model of Learning disrupt higher Education...04-14




Traditional universities — including Ivy League schools — fail to deliver the kind of learning that ensures employability. That perspective inspired Ben Nelson, founder and CEO of the six-year-old Minerva Schools in San Francisco. His goal is to reinvent higher education and to provide students with high-quality learning opportunities at a fraction of the cost of an undergraduate degree at an elite school. While tuition at top-tier universities in the U.S. can run more than $40,000 a year, Minerva charges $12,950 a year, according to its website. In a recent test, its students showed superior results compared to those at traditional universities, and the school has attracted a large number of applicants.

Minerva is a disruptor and the traditional university establishment needs to adapt to its model and perhaps improve on it, according to Jerry (Yoram) Wind, emeritus marketing professor at Wharton. Nelson, who was previously president of Snapfish, an online photo hosting and printing service, and Wind spoke to Knowledge@Wharton about why the higher education model needs to change, and how the Minerva model could help.

An edited transcript of the conversation follows.

Knowledge@Wharton: Jerry, where is the future of education headed?

Jerry Wind: The future is now. It has been here for a while, and with Minerva, Ben has recreated the university of the future. Ben, describe briefly the Minerva concept, and then go into the recent findings of the CLA report (Minerva’s Collegiate Learning Assessment test).

Ben Nelson: We refer to Minerva as having been built as an “intentional university.” Everything about the design of the institution, what we teach, how we teach and where we teach it is based on what we know, and through empirical evidence, is effective.

In what we teach, we are classical in our approach, even though we're [also] modern and progressive in the way we teach. For example, if you think about the purpose of a liberal arts education, or what the great American universities purport to teach, they will say 'We teach you how to think critically, how to problem-solve, how to think about the way the world works and to be global, and how to communicate effectively.'

“Universities … basically teach you academic subject matter and they hope you pick up all of the other stuff by accident.”
–Ben Nelson

When you actually look at how universities attempt to do it, they basically teach you academic subject matter and they hope you pick up all of the other stuff by accident.

We decided to have a curriculum that teaches these things, that breaks down critical thinking, creative thinking, effective interactions, and effective communications into component parts. [We wanted to make] sure that we don’t just teach them conceptually, and don’t just teach them in a context, but actually explain the concept and then have our students apply them actively from context to context to context.

Knowledge@Wharton: Could you share an example of how you do that?

Nelson: One aspect of critical thinking, for example, is evaluating claims. There are various ways of evaluating claims. Sometimes you use logic, sometimes you use reasoning, which is different than logic, sometimes you do statistical analysis which is different than the other two, and sometimes you just think of a counter example.

Now there are different [types] of critical thinking. One example: making a decision tradeoff. Should we go down Path A or Path B? The technique for making a decision tradeoff is perhaps thinking through the cost-benefit analysis, which is a type of critical thinking.

If you say 'I'm going to teach you critical thinking' and you just try to teach it as a thing, you will never succeed. [It is important to] go through it systematically and do the component parts – that's the first aspect.

The second aspect is that if you teach a person an idea, say the evaluation of claims, the mind gets trained in a particular context. When somebody makes a claim, let's say on an investment opportunity, or a political claim, the mind doesn't really transfer those skills from one field to another. This is one of the fundamental problems of transfer in education. The way that you teach for transfer is to provide exercises and applications in multiple fields.

How we teach is also radically different. The science of learning shows that the dissemination of information [through] lectures and test-based methodology simply doesn’t work. Six months after the end of a traditional lecture and test-based class, 90% of the material you were supposed to have learned is gone from your mind. In an active learning environment you struggle through information, and two years after the end of the class you retain 70%.

All of our classes, despite [being] small seminars with 15 to 19 students at a time, are done via live video online where there’s a camera pointed at every student’s face. The students are actively engaged with the materials, [and it is] not the professor lecturing — professors are not allowed to talk for more than four minutes at a time. The students get feedback on how they apply what they [learn].
“Six months after the end of a traditional lecture and test-based class, 90% of the material you were supposed to have learned is gone from your mind.”
–Ben Nelson

Lastly [it is about] where we teach. We have created a university that takes advantage of the best the world has to offer. Being a Penn graduate, I always gravitated towards the idea of the urban campus. Our students live in the heart of cities in residence halls together, and have a very strong community. They spend their first year in the heart of San Francisco, but over the next three years across six semesters, as a cohort, as a group, they will travel and live in six different countries. So in their second year they go to Seoul and Hyderabad, and then to Berlin and Buenos Aires, then London and Taipei, and come back to San Francisco for a month to manifest their education and graduate.
Wind: While the concept is appealing, does it work? Describe the CLA test, and then talk about the implications of [your approach].

Nelson: The Collegiate Learning Assessment is provided by a third party nonprofit that has been testing and assessing students’ progress on critical thinking, problem-solving, scientific reasoning and effective communication skills for many years. It’s been administered to hundreds of thousands of students across hundreds of universities. It is administered to students at the beginning of their first year and at the end of their fourth year, and so you can measure [the] progress of students.

We provided [our students] the first-year test just before they started their first class at the beginning of the year. But rather than waiting four years, we gave our students the fourth-year test at the end of their first year, eight months later. The results shocked us. Not only did our students after eight months have the highest composite score in the country compared to any other university that was assessing its students, the improvement they achieved was greater than what the CLA has seen any university accomplish over four years.

Knowledge@Wharton: What drove those results?

Nelson: The silly answer would be to say, ‘Oh we’re brilliant and we’re great, and look at how amazing what we do is.’ The fact of the matter is we’ve got a lot of room to grow and improve. These results in many ways are much more damning of the existing system than they are generating praise for our brilliance.

We have taken publicly available, scientifically published data on how the mind works. We've broken down the things that every university says it teaches or wants to teach, and merely spent time putting together a curriculum that does that, and we've offered it to students. We've just done what anybody who rationally approached creating a solution to a problem would do.

I would bet you that if you had 100 institutions or 100 groups of people that were to do the same thing we would have done from scratch, we would have probably been better than some of them, maybe most of them, but not all of them. There would be some that on their first try would be even better than [us].

Wind: This is the value of idealized design. As opposed to trying to fix the current educational system by adding another course or trying to create a cross-disciplinary course, [Minerva] reexamines the whole purpose of education.

They didn't go far enough, in that they are still within an academic context; they will probably relax the academic constraints, semesters and the like, and get even better results. But even within this academic context and its constraints, what they have done is amazing – the curriculum, the concept, and the way it's developed for the benefit of the learner, and not the benefit of the faculty.

The [first] implication is, if you had a choice and you wanted to go to a university now, where would you go? If you want really great education, go to Minerva; [but if] you want to network, go to one of the top five schools — Penn, Harvard, Princeton, Yale and MIT. Minerva offers probably a different network than the traditional ones because it is a network of people who are willing to do it.

Nelson: Last year, for our third class ever, we received 20,400 applications. That is more applicants than MIT or Dartmouth got. The network you get in a Wharton or Harvard or Yale or what-have-you is [of] a certain kind. It is overwhelmingly American, [with] 80% or 90% from the U.S., and usually from particular socioeconomic backgrounds. Even though there is some diversity, it’s heavily weighted [in favor of that profile].

The Minerva network is radically different because 80% of our students are not from the U.S. — they come from 61 countries. We received these 20,000 applications from 179 countries. The experience and the network you build as you travel and live as a resident in these seven countries is unparalleled. If you want a global footprint, that’s what we provide.

Wind: The current educational system does not work. Implication two is that [universities] have to realize that they are being disrupted. At this stage [it is on a] small scale, but if other universities start adopting it, it can [become] large scale. [Minerva is] the disruptor here, and the signal to the legacy universities is that their model does not work. Stop trying to fix it by adding another Band-Aid; try to rethink the educational system. And here you have a wonderful blueprint that works.

Nelson: We just wrote a book called Building the Intentional University, which is a blueprint for how other universities can create their own Minervas or reform in that sense. We are a residential university that grants undergraduate degrees with 120 credit hours, with majors and minors and electives and a general education curriculum. We are plug-and-play for universities. We offer potential salvation from disruption.

“The future is now. It has been here for a while, and with Minerva, Ben has recreated the university of the future.”
–Jerry (Yoram) Wind

What I have worried about is the other kind of disruptive force that can attack universities [and be] destructive, in the sense that with just a high school degree you can do a six-month boot camp and then get a six-figure job as a software programmer. We have put together an educational experience that enables university graduates to be better prepared than [with that] six-month boot camp. Because they are able to do higher-level problem solving, they are going to be the [software] architects as opposed to the programmers. They're going to be the ones that, in a world of Watson and artificial intelligence and outsourcing, are going to be much more future-proof.

Wind: An increasing number of people view employability as being critical, and a traditional university degree does not guarantee employability, [but] the new non-degree programs guarantee you a [job] position.

Knowledge@Wharton: Three or four years ago, a big potential disruptor was the so-called MOOC, or the Massive Open Online Course. A number of platforms came up [such as] Coursera, Udacity and EdX. It seemed like they were going to be disruptive, but that doesn’t seem to have happened. What happened with that so-called disruption and why did it fail?

Nelson: The jury is still somewhat out on that, and let me give you an example of what I think is happening on the surface. MIT had a master's program in supply chain logistics, and it cost $60,000 for a two-semester program. As an experiment, [they put the] first semester on MOOCs, and rather than charging $30,000 for it, [gave] it away for free. If you want to get credit for it, you pay $250 [and take] an exam; then, if you score well, you [go] to campus, do a one-semester supplement, pay $30,000, and get a master's degree.

This [halves] the cost of higher education for a master’s degree. Imagine if the Ivy League – or any university – [extended that to] all the courses they give academic credit for. Of the $250,000 that they are used to collecting and are reliant on [for each degree course, they] can only collect $100,000 because $150,000 is effectively given away for free. So far no university has an incentive to rock the boat too much on this. [However,] just because the disruption does not happen immediately doesn’t mean it won’t happen.

Wind: The concern is that, especially for the leading universities, it's an excuse not to innovate. They are saying, 'Look how innovative we are; we have MOOCs, or we offer classes on Coursera,' and basically the rest of the education stays exactly the same as it was before. Some of the findings suggest that less than 5% of the people who start ever finish the courses on Coursera or EdX. But there are some encouraging signs that if you add interaction to the traditional Coursera or EdX course, and if you apply some gamification principles to get people involved, you can increase those numbers significantly.

The advantage of this — with MIT, Stanford, Penn and other universities putting all of these courses online — is that the role of the faculty shifts toward that of a curator. This is the fundamental change that we have to see in education.

Knowledge@Wharton: [In addition to] a network, one other factor that the Ivy League universities offer is the brand. When you have this innovative model like Minerva, how do you establish a brand that is acceptable to students as well as employers?

Nelson: Minerva was built as a positive brand. When you meet somebody at Minerva you know that they have … been given systematic frameworks of analysis that they can apply effectively to the rest of the world. Our challenge is to propagate that brand, to get people aware of it. The good news is that the internet is a very good way of disseminating information. Brand building in today’s world doesn’t take centuries; it doesn’t even take decades.

Wind: The final word on branding always comes [from] the consumer. One, the best carriers of the brand, especially on the positive side, are the alumni. So the value of the degree, the value of the Minerva experience, is a function of how good the alumni are. Two, a lot [depends] on the employability of and demand for Minerva students.

Nelson: It’s too early to tell.

A Chink in Bacteria's Armor 04-22



Building the bacterial wall: The blue balls are wall-making proteins. The yellow represents a newly synthesized bacterial cell wall. The green color represents "scaffolding" proteins. Video: Janet Iwasa for Harvard Medical School.


The wall that surrounds bacteria to shield them from external assaults has long been a tantalizing target for drug therapies. Indeed, some of modern medicine’s most reliable antibiotics disarm harmful bacteria by disrupting the proteins that build their protective armor. 


For decades, scientists knew of only one wall-making protein family. Then, in 2016, a team of Harvard Medical School scientists discovered that a previously unsuspected family of proteins that regulate cell division and cell shape had a secret skill: building bacterial walls.

Now, in another scientific first described March 28 in Nature, members of the same research team have revealed the molecular building blocks—and a structural weak spot—of a key member of that family.
“Our latest findings reveal the molecular structure of RodA and identify targetable spots where new antibacterial drugs could bind and subvert its work,” said study senior investigator Andrew Kruse, associate professor of biological chemistry and molecular pharmacology at Harvard Medical School.
The newly profiled protein, RodA, belongs to a family collectively known as SEDS proteins, present in nearly all bacteria. SEDS proteins' near-ubiquity renders them ideal targets for the development of broad-spectrum antibiotics that disrupt their structure and function, effectively neutralizing a range of harmful bacteria.
A weak link
In their earlier work, the scientists showed that RodA builds the cellular wall by knitting together large sugar molecules with clusters of amino acids. Once constructed, the wall encircles the bacterium, keeping it structurally intact, while repelling toxins, drugs and viruses.
The latest findings, however, go a step further and pinpoint a potential weak link in the protein’s makeup.
Specifically, the protein’s molecular profile reveals structural features reminiscent of other proteins whose architecture Kruse has disassembled. Among them, the cell receptors for the neurotransmitters acetylcholine and adrenaline, which are successfully targeted by medications that boost or stem the levels of these nerve-signaling chemicals to treat a range of conditions, including cardiac and respiratory diseases.
One particular feature caught the scientists’ attention—a pocket-like cavity facing the outer surface of the protein. The size and shape of the cavity, along with the fact that it is accessible from the outside, make it a particularly appealing drug target, the researchers said.
“What makes us excited is that this protein has a fairly discrete pocket that looks like it could be easily and effectively targeted with a drug that binds to it and interferes with the protein’s ability to do its job,” said study co-senior author David Rudner, professor of microbiology and immunobiology at Harvard Medical School.
In a set of experiments, researchers altered the structure of RodA in two bacterial species—the textbook representatives of the two broad classes that make up most disease-causing bacteria. One of them was Escherichia coli, which belongs to a class of organisms with a double cell membrane known as gram-negative bacteria, so named for their reaction to a staining test used in microbiology. The other was Bacillus subtilis, a single-membrane organism that belongs to the so-called gram-positive bacteria.
When researchers induced even mild alterations to the structure of RodA’s cavity, the protein lost its ability to perform its work. E. coli and B. subtilis cells with disrupted RodA structure rapidly enlarged and became misshapen, eventually bursting and leaking their contents.
“A chemical compound—an inhibitor—that binds to this pocket would interfere with the protein’s ability to synthesize and maintain the bacterial wall,” Rudner said. “That would, in essence, crack the wall, weaken the cell and set off a cascade that eventually causes it to die.”
Additionally, because the protein is highly conserved across all bacterial species, the discovery of an inhibiting compound means that, at least in theory, a drug could work against many kinds of harmful bacteria.
“This highlights the beauty of super-basic scientific discovery,” said co-investigator Thomas Bernhardt, professor of microbiology and immunobiology at Harvard Medical School. “You get to the most fundamental level of things that are found across all species, and when something works in one of them, chances are it will work across the board.”
Solving for X
To determine RodA’s structure, scientists used a visualization technique known as X-ray crystallography, which reveals the molecular architecture of protein crystals based on a pattern of scattered X-ray beams. The technique requires two variables—the intensity of scattered X-rays and a so-called “phase angle,” a property related to the configuration of the atoms in a protein. The latter is measured indirectly, typically by using a closely related protein as a substitute to calculate the variable.
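For background, the relation between those two variables can be written compactly. This is the standard textbook formulation of the crystallographic "phase problem," not notation from the study itself:

```latex
% Each reflection (h,k,l) is measured only as an intensity, which fixes the
% amplitude of the structure factor but not its phase angle \varphi:
\[
  I(hkl) \propto |F(hkl)|^{2}, \qquad
  F(hkl) = |F(hkl)|\, e^{i\varphi(hkl)}
\]
% Reconstructing the electron density, i.e., the protein's shape, needs both:
\[
  \rho(x,y,z) = \frac{1}{V} \sum_{h,k,l} F(hkl)\, e^{-2\pi i (hx + ky + lz)}
\]
```

Because only the amplitudes are measured, the phases must be supplied from elsewhere, conventionally from a closely related structure.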
In this case, however, the team had on its hands a never-before-described protein with no known molecular siblings.
“In most cases you can use a related structure and bootstrap to a solution,” Kruse said. “In this case, we couldn’t do that. We had to predict what RodA looked like without any prior information about it.”
They needed a new way to solve for X.
In a creative twist, researchers turned to evolution and predictive analytics. Working with Debora Marks, assistant professor of systems biology at Harvard Medical School, they constructed a virtual model of RodA's folding pattern by analyzing the sequences of its closest evolutionary cousins.
The success of this "roundabout" approach, researchers said, circumvents a significant hurdle in the field of structural biology and can open the door to defining the structures of many more newly discovered proteins.
“These insights underscore the importance of creative crosspollination among scientists from multiple disciplines and departments,” said study first author Megan Sjodt, a research fellow in biological chemistry and molecular pharmacology at Harvard Medical School. “We believe our results set the stage for subsequent work toward the discovery and optimization of new classes of antibiotics.”
The work was supported by National Institutes of Health grant U19AI109764.
Co-investigators included Kelly Brock, Genevieve Dobihal, Patricia Rohs, Anna Green, Thomas Hopf, Alexander Meeske, Veerasak Srisuknimit, Daniel Kahne and Suzanne Walker, all from Harvard.

Silicon Valley is going back to an ancient technology: People 04-23




Failures of full automation have shown that humans are still better than robots at some tasks.



Tech companies have long been valued by investors for their ability to replace employees with technology. Now, alongside software and server farms, they are moving at a breakneck pace to find living, breathing human beings to staff their systems.

They’re doing so because of a high-profile series of failures of automation, which have prompted a wave of intense pressure from investors, the public, and governments.

Tesla’s highly automated production line failed to produce cars at the rate CEO Elon Musk promised, prompting questions about the electric-car maker’s solvency. Systems at Google’s YouTube failed to flag extremist and exploitative videos. Russian operatives have worked to influence elections using Facebook, whose systems separately created categories of users with labels such as “Jew hater” that it then allowed advertisers to target.

While companies such as Google and Facebook still insist that they’re just distribution platforms rather than content creators and bear limited, if any, responsibility for most of the content they host, they’re increasingly acknowledging they need to do something to curb abuses. In the short-term at least, that approach usually involves more humans.

"Humans are underrated," tweeted Musk, as the company struggles to ramp up production of its Model 3 sedan. Musk has blamed an overly automated production process. "We had this crazy, complex network of conveyor belts… And it was not working, so we got rid of that whole thing," he told CBS.

Meanwhile, Google and Facebook have been hiring thousands of people to monitor content and advertising on their platforms, amid backlash against their hosting of extremist videos and messages, videos depicting the exploitation of children, propaganda, and content created to manipulate electorates in the US and elsewhere.

Facebook CEO Mark Zuckerberg reiterated to US legislators last week that the company planned to double its security and content moderation workers to 20,000 people by the end of the year—an investment that he acknowledged would hurt its profitability.

YouTube CEO Susan Wojcicki in December said the Google-owned video site aimed to have 10,000 people working to find and combat content that violates its policies, a 25% increase according to BuzzFeed.

Artificial-intelligence experts say Zuckerberg and other tech executives are over-optimistic about the timeline for computers identifying things such as toxic speech, and point to existing systems that fail at that task. A new Barclays research report says that humans are better than robots at “sensorimotor skills” and “cognitive functionality,” meaning humans are less clumsy than robots and are better at making decisions factoring in context and in cases where there’s incomplete information. There are reasons to be confident that humans will retain some of those advantages for decades into the future.

But any surge in hiring by tech companies is unlikely to significantly offset the toll on employment from the current wave of automation. And the jobs that such companies are hiring for at scale—such as people to watch videos for offensive content—tend to require lower skills, and pay lower wages.




The dawn of precision medicine 04-23




Linnea Olson tells her story—of repeatedly facing death, then being saved by the latest precision therapy—articulately and thoughtfully, agreeing to discuss subjects that might otherwise be too personal, she says, because it could benefit other patients. She lives in an artist cooperative in Lowell, Massachusetts, in an industrial space, together with her possessions and artwork, which fill most of an expansive high-ceilinged room. Olson is tall, with close-cropped, wavy blonde hair, and dresses casually in faded blue jeans. Although she has an open, informal style, this is paired with a natural dignity and a deliberate manner of speaking.
“I had a young doctor who was very good,” she begins. “I presented with shortness of breath and a cough, and also some strange weakness in my upper body. And he ordered a chest x-ray.” Years later, she saw in her chart that he had written, “On the off chance that this young, non-smoking woman has a neoplasm”—the beginnings of a tumor in her left lung. But he didn’t mention that to her, and “he ended up getting killed on 9/11—he was on one of the planes that hit the towers.”
The national tragedy thus rippled into Olson’s life. Never suspecting that her symptoms could be caused by cancer, she spent the next several years seeking a diagnosis. A string of local doctors told her it was adult-onset asthma, hypochondria, then pneumonia. When antibiotics didn’t clear the pneumonia, a CT scan showed a five-centimeter mass in her left lung: an infection? Or cancer? It was the first time she had heard that word. The technicians told her that at 45, she was too young for that. But a biopsy confirmed the diagnosis. “In 2005, when you told someone they had lung cancer,” a doctor later told her, “you were basically saying you were sorry.” Her youngest son was seven at the time. Olson wanted to live.
Now, 13 years later, she is alive and healthy, a testament to the potential of precision medicine to extend lives. But like precision medicine itself, her story encapsulates the best and worst of what medicine can offer, as converging forces in genetics, data science, patient autonomy, health policy, and insurance reimbursement shape its future. There are miraculous therapies and potentially deadly side effects; tantalizing quests for cures that come at increasingly high costs; extraordinary advances in basic science, despite continuing challenges in linking genes implicated in disease to biological functions; inequities in patient care and clinical outcomes; and a growing involvement of patients in their own care, as they share experiences, emotions, and information with a global online community, and advocate for their own well-being.
Precision medicine is not really new. Doctors have always wanted to deliver increasingly personalized care. The current term describes a goal of delivering the right treatment to the right patient at the right time, based on the patient’s medical history, genome sequence, and even on information, gathered from wearable devices, about lifestyle, behaviors, or environmental exposures: healthcare delivered in an empiric way. When deployed at scale, this would, for example, allow doctors to compare their patient’s symptoms to the histories of similar patients who have been successfully treated in the past. Treatments can thus be tailored to particular subpopulations of patients. To get a sense of the promise of precision medicine—tantalizingly miraculous at times, yet still far from effective implementation—the best example may be cancer, which kills more than 595,000 Americans each year.

Patient 4

In some cases, cancer can be driven by a small number of genes—even a single gene—that can be identified and then targeted. Even in cancers with many mutations, genetic profiling makes it possible to unambiguously distinguish between tumor cells and healthy tissues. That is a great boon in a disease that essentially hijacks the patient’s own biology. Genome sequencing, by precisely defining the boundary between self and non-self, can even enable immunotherapies that kill cancer cells but not others. Still, state-of-the-art precision cancer medicine is something like the surgical airstrikes of the 1960s: vastly better than the carpet-bombing of chemotherapy, but not without risk of collateral damage.
In 2005, when Olson was diagnosed with lung cancer, surgery, chemotherapy, and radiation—so-called cut, poison, and burn therapies—were the frontline treatments. A friend’s husband, a surgeon, recommended that she go to Massachusetts General Hospital (MGH) for the lobectomy that would remove the lower lobe of her left lung. When she woke from surgery, an oncologist, Thomas Lynch, was standing at the foot of her bed. He was running a clinical trial of an experimental drug he’d helped develop, and she fit the profile of a patient who might benefit.
Lung cancer is rare before 45, and most common after 65: the average age of patients diagnosed with the disease in the United States is 70, and the cancers themselves are typically loaded with random mutations, caused by repeated, long-term exposures to airborne toxins, as might occur after a lifetime of smoking. But Olson was young and had never smoked. This meant that her cancer was likely being caused not by many mutated genes, but by a single “driver” mutation. There are now eight well-established driver mutations for the disease. Lynch hoped that Olson would have one called EGFR (epidermal growth factor receptor), the only one then known. But she didn’t.
Lynch explained to her that cancer outcomes traced a bell curve. At one end were those patients who did poorly. Most were in the middle. But at the other end were the outliers, those who lived a long time. "'Tell me about the outliers,'" she recalls asking him—"almost like it was a fairy tale." She was floundering, she says, as she faced post-surgical chemotherapy, dreading its cytotoxic effects. Lynch persuaded her not to give up. "We're going to take you to the brink of death," he told her, "but we're trying to cure you." She read Lance Armstrong's book, It's Not About the Bike, as she went through four rounds of treatment. "It is horrible," she says, looking back on it. But "I'd get on my little exercise bike and say, 'I am Lance Armstrong. I can do this.'"
The tumor was unchanged by the chemotherapy. As months passed, Lynch referred to the growing numbers of nodules in her lungs as “schmutz”—never as cancer. He was trying to keep her hope alive.
In 2008, her symptoms returned, and worsened. Her cancer had progressed to stage IV. In a last-ditch effort, Lynch put her on Tarceva, the targeted therapy for EGFR, anyway, "just in case the genetic test had missed something," he later explained. But as Olson recalls, "I experienced all of the side effects and none of the benefits." She asked him how long she had to live. "Three to five months," he told her. "Should I get my affairs in order?" she asked. "Yes," he said. In distress, she told a social worker to whom she had been referred, "I need you to help me learn how to die." "And instead," Olson says, "she's really helped me learn how to live."
It turned out that even though Olson didn’t have the EGFR mutation, genetic testing done when she started taking Tarceva revealed that she had a different single-driver mutation, ALK, for which a phase 1 clinical trial had just begun. Lynch asked if she wanted to participate in this effort to determine optimal dose, side effects, and efficacy. Patient 1, he told her, had appeared to respond to the therapy, but then died—in part because of it. Olson didn’t want to hasten her own death, but reasoned that doing nothing, she would soon die anyway. She signed on as Patient 4.
Within days, she felt better. The side effects were mild. At the seven-week mark, she saw Lynch to review scans of her lungs. What had looked like a blizzard was completely gone. “I went from accepting that I was going to die, to ‘Oh my God, I’m going to live a little while longer,’” says Olson. “It was like a fairy tale.” Lynch made it very clear that this did not represent a cure, and that there was nothing after this. Eventually, he told her, there would be secondary mutations. But she’d been given another chance.
Professor of medicine Alice Shaw, a physician-scientist at MGH who has been working on ALK and its secondary mutations for 10 years, has been Olson’s oncologist since 2009. Lung-cancer treatment has progressed substantially in the last decade, she says, so that molecular profiling of patient tumors is now standard care. Patients eligible for a targeted therapy skip chemotherapy.
EGFR, the first targetable oncogene (a gene with the potential to cause cancer), was discovered in lung cancer in 2004. "The EGFR gene is mutated in about 10 percent to 15 percent of lung-cancer patients in this country," Shaw says. Olson's ALK mutation (technically, a chromosomal rearrangement), discovered in lung cancer in 2007, is present in about 5 percent of patients. There are numerous driver mutations for this disease, seven of which can be turned off with new targeted therapies, which work for about 30 percent of U.S. lung-cancer patients—many of whom can return to their normal lives because the pills are fast-acting and don't cause as much collateral damage as chemotherapy.
That is something that should be considered, Shaw says, when weighing the costs of targeted drugs, which run about $15,000 a month for as long as the patient is responding. “Obviously, $180,000 a year is an enormous cost. The question is, how do you weigh these costs, in light of the life-saving benefits of these drugs?” Some of the newest treatments for lung cancer, such as immunotherapies (see “The Smartest Immunologists I Know,” below) are as expensive as targeted therapies, she reports. And traditional chemotherapy often keeps patients out of work, and sometimes leads to hospitalization—costly outcomes. By contrast, targeted therapies allowed Olson to live relatively normally and raise her youngest son, now 20 and an undergraduate at MIT.

Finding Five Unknown Variables

Miraculous as they are at their best, targeted therapies do not work forever. That’s because genomic instability is one of the defining features of cancer. “I went a full glorious year before I started to have some progression,” Olson recalls. At that time, in 2009, when the cancer began growing again, patients knew they would soon have to leave the ongoing trial. That could have been the end for Olson. But because she had no symptoms from the early progression, and felt well, she was permitted to stay on the experimental drug for almost three years. Then a second ALK inhibitor opened in a phase 1 clinical trial. Fortunately for Olson, the drug was active against ALK S1206Y, the resistance mechanism that had developed in her cancer’s ALK gene, and it bought her 15 more months (although she suffered gastrointestinal side effects as well as liver toxicity, for which she had to be briefly hospitalized). Her therapy has carried on this way, a continuing cascade of genetic analyses as the cancer adapts, and then a new therapy, just in time to save her. The alternative—standard chemotherapy and radiation—typically extends lung cancer patients’ lives by just three to six months.
The development of resistance is less a reflection of the efficacy of targeted therapeutics than of the cancer’s ability to evolve. Cancer cells proliferate through division, and mutate rapidly. If a single cancer cell among millions happens to be resistant to a particular therapy, that cell and its progeny eventually become dominant drivers of the patient’s disease. Shaw studies these mechanisms of resistance; once pathologists sequence tumors, the scientists can identify the mutations and develop models of them, she explains. Working with pharmaceutical companies, the researchers test newer drugs against these mutations to see if the therapies are active. Now that there are several inhibitors for EGFR and ALK mutations, Shaw says, she and her colleagues are beginning to explore combination therapies, hoping to stop the cancer before it becomes more complex in response to single-drug treatments.
Combination therapies are critical against cancer, agrees Peter Sorger, Krayer professor of systems biology and director of Harvard Medical School’s (HMS) Laboratory of Systems Pharmacology (see “Systematic Drug Discovery,”  July-August 2013, page 54). He and his postdoctoral fellow Adam Palmer find that many combination therapies are superior to single drugs across a wide range of solid tumors because of tumor heterogeneity. Heterogeneity arises from genetic differences among cells in a single patient and among tumors in different patients; it likely explains why a particular anti-cancer drug can be effective in some patients but ineffective in others with the same type of cancer.
In fact, a graph of patient responses traces a bell curve with a long tail: many patients respond only partially, but some do very well (they lie out on the tail). Combination therapies improve rates of success in patient populations (and clinical trials) in this view simply by increasing the odds that a patient will lie out on the tail. In other words, combination therapy overcomes ignorance of which drug will work best in a specific patient; this is true even when a targeted therapy is given to genetically selected populations.
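The arithmetic behind this view is simple: if each drug independently gives a patient some chance of a strong response, the combination only needs one of them to land. A minimal sketch, with purely hypothetical response rates:

```python
# chance that at least one drug in a combination works for a given patient,
# assuming independent drug action (response probabilities are hypothetical)
def combo_response(*single_drug_rates):
    miss = 1.0
    for p in single_drug_rates:
        miss *= 1 - p                 # probability that every drug fails
    return 1 - miss

print(combo_response(0.30))              # one drug:    0.30
print(combo_response(0.30, 0.25))        # two drugs:   0.475
print(combo_response(0.30, 0.25, 0.20))  # three drugs: 0.58
```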
Such bet-hedging is a case of the glass being half full, Sorger says: “existing combinations have taken untreatable disease in which a metastatic case means you die, to one in which a quarter or more of patients are doing well. At the same time, the large impact of unknown variables is the measure of how far we have to go in cancer pharmacology.”
How do we reconcile this statistical view of responsiveness to cancer therapy with the precise molecular experiments that Shaw and her colleagues are using to design combination therapies for cancers carrying EGFR, ALK, and other mutations? Sorger and Palmer propose that high variability in response to anti-cancer therapy arises because multiple mutations are involved—perhaps six or more in each cancer cell—many of which are unknown. “If we knew all the relevant genes determining drug response in a particular patient, we could be highly predictive, and able to tailor a therapy for each patient,” Sorger says. The studies Shaw has underway are necessary to make such prediction possible in the future. Moreover, in some cases there is evidence that combination therapies can be much more effective than the sum of their parts; there is currently no systematic way to find such combinations, but they are well worth pursuing. Both Sorger and Shaw agree that, as precision medicine improves and scientists identify the spectrum of mutations involved in drug response, it will be increasingly possible for physicians to tailor therapy to an individual patient’s needs.
Todd Golub, professor of pediatrics and director of the cancer program at the Broad Institute of MIT and Harvard, is part of an ambitious project to find those several targetable genes—and an estimated 10,000 more like them. The aim of cancer treatment, he says, ought to be the use of molecular analysis to make predictions about what the best therapy should be for each patient, for all types of cancer—the ultimate goal of personalized, precision medicine. He and his Broad colleagues are at work on the “cancer dependency map.” Their goal is to identify all the genes that are unique to cancers, on which any cancer depends for growth—the “Achilles heels” of the disease.
Their first challenge is to gather the broadest range of cancer-tissue samples they possibly can. Paired with this effort to collect patient information is a laboratory project to create model cancer cell lines and to test all FDA-approved drugs and drugs that are in clinical development—on the order of 5,000 compounds—against them. “You can’t do that in a patient,” notes Golub. Seeing which compounds are effective against these cancers allows researchers to identify those Achilles-heel genes. “That allows us to create a roadmap for drug developers, so that eventually, we will have a full medicine cabinet to make this concept work,” he explains. Of course there are challenges: some therapeutic targets are critical for normal cells, too. “But we are learning,” he adds, “that in some cases, [inhibiting] the function of a target 24/7 can be horribly toxic, but when therapies are used transiently, tumor cells die, and normal cells don’t.”
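In spirit, the screen is a large compound-by-cell-line viability matrix, and the interesting hits are compounds that kill the lines depending on a given gene while sparing normal cells. A toy sketch of that filtering logic (all values invented):

```python
# toy filter over a compound-by-cell-line viability matrix
# (fractions of cells surviving; all numbers invented for illustration)
viability = {
    "compound_A": {"tumor_line_1": 0.05, "tumor_line_2": 0.92, "normal": 0.95},
    "compound_B": {"tumor_line_1": 0.10, "tumor_line_2": 0.08, "normal": 0.11},
}

for name, v in viability.items():
    spares_normal = v["normal"] > 0.8
    kills_a_tumor_line = any(s < 0.2 for line, s in v.items() if line != "normal")
    verdict = "selective hit" if spares_normal and kills_a_tumor_line else "rejected"
    print(f"{name}: {verdict}")  # A hits a dependency; B is broadly toxic
```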
The Broad effort is at the beginning stages, with just 500 cancer cell lines, heavily biased toward European ancestry. The fact that whole ethnicities are missing is a measure of how far they have to go. “We’re not going to get there in one fell swoop,” Golub explains. “We’ll get there by keeping people alive longer and longer, until eventually, it becomes a numbers game where the goal is to eradicate all the tumor cells and leave none behind that have drug resistance mechanisms that allow them to escape.” With a complete cancer dependency map, and the molecular profile of a given cancer, physicians could “identify the five drugs predicted to be effective against that tumor. We would put together combinations of drugs that don’t share common susceptibilities to resistance, and unless you had a tumor the size of Manhattan,” there would be no way for the cancer to get around that combination. “We won’t get there during my career for most patients. But for the next generation, I think it is not crazy.”
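Golub’s “tumor the size of Manhattan” remark is at bottom a probability claim: if resistance to each drug requires its own rare mutation, the chance that any one cell dodges every drug in a combination shrinks multiplicatively. A sketch with hypothetical numbers:

```python
# expected number of cells pre-resistant to an entire drug combination,
# assuming independent resistance mutations (all rates hypothetical)
p_resist_one_drug = 1e-7   # per-cell chance of resistance to a single drug
tumor_cells = 1e9          # roughly a centimeter-scale tumor

for n_drugs in (1, 2, 5):
    expected = tumor_cells * p_resist_one_drug ** n_drugs
    print(f"{n_drugs} drug(s): ~{expected:.0e} pre-resistant cells expected")
# 1 drug: ~1e+02 cells, so resistance is near-certain; 5 drugs: ~1e-26, essentially zero
```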
What Golub is describing is a rational, systematic approach to building a complete arsenal of targeted drug therapies like those that have extended Linnea Olson’s life and the lives of many other patients. Instead of using them serially to extend life, though, he imagines combination therapies that would effect cures. But there is another approach that might yield results for some patients even sooner.

“The Smartest Immunologists I Know”

Immunotherapy is the maverick of cancer research and clinical care, a relatively new strategy in treatment with the potential to cure certain types of cancer now. Harnessing patients’ immune systems to fight cancer represents an approach radically different from that used in targeted drug therapy. There are three distinct techniques: training the immune system using personalized vaccines; reawakening immune cells by stimulating them to recognize cancers through the use of drugs; and engineering a patient’s T-cells outside the body so they will recognize cancer cells and then reinserting those T-cells in patients.
In what may turn out to be the ultimate precision medicine, married professors of medicine Catherine Wu, an oncologist at Dana-Farber Cancer Institute (DFCI), and Nir Hacohen, director of MGH’s Center for Cancer Immunology and co-director of the Broad Institute’s Center for Cell Circuits, have together created personalized cancer vaccines that train the immune system to recognize and destroy cancer cells. In a small clinical trial, they created personalized vaccines for each of six melanoma patients, and let their immune systems do the rest.
The process works by training T-cells, white blood cells that are the immune system’s weapons for identifying and destroying infected tissue, to recognize cancer. Instead of targeting driver mutations, as targeted therapies do, this approach teaches the immune system to recognize random mutations. As Hacohen explains, half of cancer tumors have defects in DNA repair, so tumors develop a lot of random mutations, and the mutated proteins are visible, on cell-surface receptors, to T-cells. “The fact that there is almost no overlap” in these mutations between patients, he explains, “is what makes this approach personalized.” Hacohen and Wu design the vaccines by first analyzing a patient’s immune system, then analyzing her tumor, and finally creating a vaccine that will stimulate her T-cells to bind to a set of perhaps 20 different mutated proteins on tumor-cell surfaces. The trick is to create a vaccine that mimics the mutated proteins. When the vaccine is injected into a patient, the immune system recognizes these foreign invaders and stimulates T-cells that proliferate, recognize, and attack those same mutated proteins on cancer cells. Normal cells, because they don’t have such mutations, are spared.
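Conceptually, the design step Wu and Hacohen describe is a ranking problem: from the tumor’s mutated peptides, choose the roughly 20 most likely to be displayed to T-cells. A heavily simplified sketch; real pipelines rely on HLA-binding predictors, and score_binding() below is a hypothetical stand-in, as are the peptide sequences:

```python
# toy neoantigen selection: rank tumor-specific mutated peptides by a
# predicted "visible to T-cells" score and keep the top candidates.
def score_binding(peptide: str) -> float:
    # hypothetical stand-in for a real HLA-binding prediction model
    return (sum(ord(aa) for aa in peptide) % 100) / 100

tumor_peptides = ["SIINFEKL", "GILGFVFTL", "KVAELVHFL", "NLVPMVATV"]  # placeholders
ranked = sorted(tumor_peptides, key=score_binding, reverse=True)
vaccine_targets = ranked[:20]  # the trial above used sets of roughly 20 per patient
print(vaccine_targets)
```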
In each case, radiology of these patients several years later shows no recurrence of disease. Hacohen is reluctant to generalize about the success rate based on such a small sample, but he does note that two other groups (one based at Washington University in St. Louis, one in Germany) have had similar success in trials of cancer vaccines.
Because this approach targets mutations, it is ideally suited for tumors such as smoker’s lung cancer, or melanoma, in which chronic exposure to carcinogens (UV light in the case of melanoma) has driven lots of mutations, creating a genetically noisy landscape. That is because the more genetically complex a tumor is, the more likely the immune system will recognize it as a foreign invader and try to eradicate it. Hacohen’s labs focus on basic immunology, genomics, and systems biology—what he terms “biological equations” that help distinguish cancer cells from healthy ones. Combining his three fields allows him to do the whole-body analysis necessary to distinguish healthy tissue from the foreign molecules on the surface of cancer cells that the immune system can recognize. But Hacohen is a pure researcher; he doesn’t see patients. Wu, an oncologist, does and can run FDA-approved trials with DFCI oncologists to test the vaccines in patients. The combined expertise of this husband-and-wife team is necessary to complete these extremely specialized therapies.
Because this type of therapy is not yet commercially available, the eventual market cost of creating custom vaccines is hard to estimate. At the moment, Hacohen explains, the sequencing of individual patients and their respective tumors costs about $5,000 each, but that price is dropping rapidly. Even the computation required to design a tailored vaccine is relatively limited. What does cost a great deal right now, he says, is manufacture of the resulting vaccine, largely because of all the safety mechanisms that must be satisfied before any custom therapy is deployed in a human patient. That engineering alone might cost upward of $100,000. But this price, too, could fall as personalized vaccine development becomes more widely practiced.
A second approach involves reawakening the immune system. In the same way that cancer evolves to resist drugs, it evolves to evade the body’s natural defenses. As cancer begins in a patient, the immune system targets and kills any tumor cells it sees—but left behind to proliferate are the cancer cells that evade the immune system. Immunology researchers like Fabyan professor of comparative pathology Arlene Sharpe have therefore been working to elucidate how cancer disguises itself. Sharpe, who is interim co-chair of the microbiology and immunology department at HMS, heads the cancer immunology program at the Dana-Farber Harvard Cancer Center and co-directs the Evergrande Center for Immunologic Diseases at HMS and Brigham and Women’s Hospital. She has collaborated with her husband, professor of medicine Gordon Freeman, a molecular biologist and DFCI researcher, to study those pathways.
A key mechanism for defeating cancer’s evasion of T-cell attacks is “checkpoint blockade therapy,” on which Sharpe and Freeman have done much of the basic research. This approach reawakens the immune system to the presence of tumor cells. The surfaces of cancer cells often display molecules that bind to inhibitory receptors, known as checkpoints, on T-cells. This binding stops the T-cells from attacking and killing the tumor.
In normal immune function, Sharpe explains, these inhibitors are critical because they are, in effect, dials that modulate the immune response, turning its sensitivity to foreign objects up or down. Autoimmune diseases such as type 1 diabetes, in which T-cells destroy the pancreas after mistaking it for a foreign invader, illustrate why these inhibitory mechanisms are so important biologically; they prevent the immune system from attacking healthy tissues. But cancer often cloaks itself in molecules that block the immune response. The result is that “the immune cycle often doesn’t work well in cancer patients,” says Sharpe. “Tumors are the smartest immunologists I know.”
But drugs can block these inhibitors, by targeting either the checkpoint receptors on T-cells or their binding partners on the surface of cancer cells. Then the immune system can suddenly “see” tumors, enabling it to target and destroy them. This T-cell-awakening therapy is now being combined with other types of cancer treatment, such as targeted therapies that focus on driver mutations; Hacohen and Wu have also used it in combination with personalized vaccines that focus on random mutations, in order to make the vaccines even more effective.
A third type of therapy involves re-engineering the immune system by deploying chimeric antigen receptors (CARs): synthesized molecules that redirect T-cells to specific targets. CAR-T therapy, developed at the University of Pennsylvania, has proven highly effective against leukemia, a blood cancer. Assistant professor of medicine Marcela Maus, a world-renowned expert in CAR-T therapies who was recruited from Penn and directs the cellular immunotherapy program at MGH, is working to develop such therapies to kill solid tumors.
CAR-T cells are engineered immune cells that recognize specific markers on the surface of cancer cells and attack them. The process involves removing T-cells from a patient, engineering them to target a particular type of cell, growing them in the lab, and then injecting billions of them into the patient. The upside of CAR-T therapies is the “unprecedented elimination of tumors in the majority of patients,” Hacohen explains, “with the downside of toxicity…. You’re killing billions of cells in the body in weeks,” a response that dwarfs anything the immune system could stage unaided. This can lead to “cytokine storms,” as huge numbers of cancer cells die almost simultaneously and have to be flushed from patients. Experts in this technique have developed methods for controlling these storms, but the high cost of the approach—as much as $500,000 per patient—has made it the poster child for the troubling economics of modern cancer care (see “Is Precision Medicine for Everyone?”).

Outliers No More

Cost is just one constraint on the aim of ensuring that the best therapies reach the largest possible number of patients. Professor of medicine Deborah Schrag, chief of the division of population sciences at DFCI, makes a distinction between a therapy’s efficacy in a lab or controlled setting such as a clinical trial, and its effectiveness in the population at large. It’s the difference between how well a treatment can work and how well it actually does work given real-world conditions. “If a dairy farmer from Maine can’t make it to twice daily radiation treatment in Boston because he has to milk his cows,” that changes the real-world effectiveness of the therapy. Participants in clinical trials are likely to take their medications twice a day exactly as prescribed, but in the routine care context, adherence is imperfect, and that contributes to the efficacy-effectiveness gap. (Key to tracking any intervention’s performance are electronic health records, and Schrag is among the leaders of a cancer data-science effort to develop standards for records used in cancer care; see “Toward a Personal Biomap.”) “Historians of medicine and some prominent skeptics look at the bottom line, and ask what is happening at the population level,” she explains. The reality is that for most patients, advanced lung cancer remains fatal. Leading-edge therapies such as targeted medicine have helped only a subset of the population. “Cancer medicine is the furthest ahead” in the use of genomic analysis to guide therapy, Schrag says, “but it still has a long way to go.”
But patients like Linnea Olson are no longer outliers. Alice Shaw, her oncologist, says Olson’s appearance on an ABC World News broadcast in 2009 made other lung-cancer patients realize that they ought to be genetically tested, too. One of those patients came to MGH, was treated by Shaw, and appeared on the same show the following year, and that led to another generation of patients realizing that they might have a treatable mutation, too. “Now they help each other,” she says. “This has allowed patients to gain access to therapies that they would never have known about otherwise, because even their doctors didn’t know about them. I have this whole tree of patients connected to each other through social media.” One MGH lung cancer patient recently climbed a peak above 20,000 feet in the Himalayas, and was featured in The New York Times. The comments from readers suggested that he must be “an outlier.” Not so, says Shaw: she has many patients who are performing incredible feats and living for years, now that targeted therapies are available. “These patients are not the rare outliers anymore.”
Olson is happy to have the company, but jokes that she needs to stay out front: “If I’m not, that means I’m dead,” she says, laughing. Now four years into her third targeted therapy without any apparent cancer progression, she has instead begun experiencing toxicity from the contrast agents used in the CT scans required every few weeks as part of clinical trials. “I figured out the other day that I have known I had cancer for 22.4 percent of my life,” she says, and she has had more than 150 CT scans. “That is a huge amount. But it is very easy to put into perspective quickly. I am so lucky to have these problems, because I am alive.” Olson still allows CT scans of her lungs, to which her particular metastatic cancer is confined, but not of her abdomen. That means “I’m non-compliant” in the trial, she says. “But I’ve already donated my body to science, and I want to live. Nobody expected any patient like me to live this long.”



This is the relationship between money and happiness 04-25


Can money buy you happiness? 


It’s a longstanding question that has many different answers, depending on who you ask.
Today’s chart approaches this fundamental question from a data-driven perspective, and it provides one potential solution: money does buy some happiness, but only to a limited extent.






Money and happiness

First, a thinking exercise.

Let’s say you have two hypothetical people: one of them is named Beff Jezos and he’s a billionaire, and the other is named Jill Smith and she has a more average net worth. Who do you think would be happier if their wealth was instantly doubled?
Beff might be happy that he’s got more in the bank, but materially his life is unlikely to change much – after all, he’s a billionaire. On the flip side, Jill also has more in the bank and is likely able to use those additional resources to provide better opportunities for her family, get out of debt, or improve her work-life balance.
These resources translate to real changes for Jill, potentially increasing her level of satisfaction with life.
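One standard way to formalize this intuition is diminishing marginal utility: if satisfaction grows roughly with the logarithm of wealth, the same windfall is worth far less to a billionaire. A toy calculation (log utility is an assumption for illustration, not a claim from the chart):

```python
# value of the same absolute windfall under log utility (an assumed model)
import math

def utility_gain(wealth, windfall):
    return math.log(wealth + windfall) - math.log(wealth)

print(utility_gain(50_000, 10_000))         # Jill:  ~0.18 utility units
print(utility_gain(1_000_000_000, 10_000))  # Beff: ~0.00001 utility units
```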
The data tells a similar story when we look at countries.

The data-driven approach



[Chart: GDP per capita vs. self-reported happiness (Cantril Ladder) by country. Source: World Bank]

In general, the chart shows that as a country’s wealth increases from $10k to $20k per person, it will likely slide up the happiness scale as well. For a doubling from $30k to $60k, the relationship still holds – but it tends to have far more variance. This variance is where things get interesting.
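The underlying trend is the classic log-linear one: each doubling of GDP per capita buys roughly the same step up the ladder, wherever you start. A sketch with an invented intercept and slope (a real fit would come from the chart’s World Bank data):

```python
# log-linear happiness model: ladder = a + b * log2(GDP per capita)
# a and b are invented for illustration, not fitted to the actual chart
import math

a, b = 0.5, 0.5  # hypothetical intercept and ladder points per doubling

def ladder(gdp_per_capita):
    return a + b * math.log2(gdp_per_capita)

print(ladder(20_000) - ladder(10_000))  # 0.5 points for the $10k -> $20k doubling
print(ladder(60_000) - ladder(30_000))  # 0.5 points again for $30k -> $60k
```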

Outlier regions

Some of the most obvious outliers can be found in Latin America and the Middle East:
In Latin America, people self-report that they are more satisfied than the trend between money and happiness would predict.
Costa Rica stands out in particular here, with a GDP per capita of $15,400 and a 7.14 rating on the Cantril Ladder (which is a measure of happiness). Whether it’s the country’s rugged coastlines or the local culture that does the trick, Costa Rica has higher happiness ratings than the U.S., Belgium, or Germany – all countries with far higher levels of wealth.
In the Middle East, the situation is mostly reversed. Countries like Saudi Arabia, Qatar, Iran, Iraq, Yemen, Turkey, and the U.A.E. are all on the other side of the trend line.

Outlier countries

Even within regions, there is plenty of variance.
We just mentioned the Middle East as a place where the wealth-happiness continuum doesn’t seem to hold up as well as it does in other places in the world.
Interestingly, in Qatar, which is actually the wealthiest country in the world on a per capita basis ($127k), things are even more out of whack. Qatar only scores a 6.37 on the Cantril Ladder, making it a big exception even within the context of the already-outlying Middle East. 



Nearby Saudi Arabia, U.A.E., and Oman are all poorer than Qatar per capita, yet they are happier places. Oman rates a 6.85 on the satisfaction scale, with less than one-third the wealth per capita of Qatar.

There are other outlier jurisdictions on the list as well: Thailand, Uzbekistan, and Pakistan are all significantly happier than the trend line (or their regional location) would project. Meanwhile, places like Hong Kong, Ireland, Singapore, and Luxembourg are less happy than wealth would predict.






An AI that can predict cell structures 05-12



Fluorescent-labeled cells used to train neural networks. Image: Allen Institute. 


New 3D models of living human cells generated by machine-learning algorithms are allowing scientists to understand the structure and organization of a cell's components from simple microscope images.

Why it matters: The tool developed by the Allen Institute for Cell Science could be used to better understand how cancer and other diseases affect cells or how a cell develops and its structure changes — important information for regenerative medicine.

“Each cell has billions of molecules that, fortunately for us, are organized into dozens of structures and compartments that serve specialized functions that help cells operate,” says Allen Institute’s Graham Johnson, who helped develop the new model.

What they did: The researchers used gene editing to label the nucleus, mitochondria and other structures inside live human induced pluripotent stem cells (iPSC) with fluorescent tags and took tens of thousands of images of the cells.

They then used those images to train a type of neural network known as a generative adversarial network (GAN). That yielded a model that can predict the most likely shape of the structures and where they are in cells, based on just the cell’s plasma membrane and nucleus.
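The task has the shape of a conditional, image-to-image GAN. Below is a heavily simplified, pix2pix-style sketch with random tensors standing in for microscope images; it illustrates the technique named above and is not the Allen Institute’s actual architecture or training code (PyTorch assumed):

```python
import torch
import torch.nn as nn

# generator: predicts a fluorescent-structure channel from membrane + nucleus channels
G = nn.Sequential(
    nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
# discriminator: judges (input, real-or-predicted structure) pairs
D = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, stride=2, padding=1),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Sigmoid(),
)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

x = torch.randn(4, 2, 64, 64)  # stand-in for membrane + nucleus channels
y = torch.randn(4, 1, 64, 64)  # stand-in for the labeled structure (ground truth)

for step in range(100):
    # train the discriminator to tell real pairs from generated ones
    fake = G(x).detach()
    d_loss = bce(D(torch.cat([x, y], 1)), torch.ones(4, 1)) + \
             bce(D(torch.cat([x, fake], 1)), torch.zeros(4, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # train the generator to fool the discriminator, plus an L1 term
    pred = G(x)
    g_loss = bce(D(torch.cat([x, pred], 1)), torch.ones(4, 1)) + \
             0.1 * nn.functional.l1_loss(pred, y)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```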

Using a different algorithm, they created a model that can take an image of a cell that hasn't been fluorescent-labeled — in which it's difficult to distinguish the cell's components ("it looks like static on an old TV set," Graham Johnson says) — and find the structures.

What they found: When they compared the predicted images to actual labeled ones, the Allen Institute researchers said, the two were nearly indistinguishable.

The advance: Gene editing and fluorescent dyes, often used to study cells, allow only a few components to be visualized at once and can be toxic, limiting how long researchers can observe a cell.

Plus, "knowledge gained from more expensive techniques or ones that take a while to do and do well can be inexpensively applied to everyone’s data," says the Allen Institute's Greg Johnson, who also worked on the tool. "This provides an opportunity to democratize science."

View at the original source

This “Smart Drug” Could Hack Your Brain Chemistry to Increase Your Intelligence 05-12



Qualia is a 42-ingredient ‘smart drug’ designed to provide users with immediate, noticeable uplift of their subjective experience within 20 minutes of taking it, as well as long-term benefits to their neurology and overall physiologic functioning.


The Science of Nootropics

Nootropics, broadly speaking, are substances that can safely enhance cognitive performance. They’re a group of (as yet unclassified) research chemicals, over-the-counter supplements, and a few prescription drugs, taken in various combinations—neither addictive nor harmful, and not laden with side effects—that are meant to improve your brain’s ability to think.
Right now, it’s not entirely clear how nootropics as a group work, for several reasons. How effective any one component of a nootropic supplement (or a stack) is depends on many factors, including the neurochemistry of the user, which is connected to genes, mood, sleep patterns, weight, and other characteristics.

However, there are some startups creating and selling nootropics that have research scientists on their teams, with the aim of offering reliable, proven cognitive enhancers. Qualia is one such nootropic. This 42-ingredient supplement stack is created by the Neurohacker Collective, a group that boasts an interdisciplinary research team including Sara Adães, who has a PhD in neuroscience, and Jon Wilkins, a Harvard PhD in biophysics.

Smart Drugs

Some of Qualia’s ingredients are found in other stacks: Noopept, for example, and Vitamin B complex are some of the usual suspects in nootropics. Green tea extract, L-Theanine, Taurine, and Ginkgo Biloba are also familiar to many users, although many of the other components might stray into the exotic for most of us. Mucuna Pruriens, for example, is a source of L-Dopa, which crosses the blood–brain barrier to increase concentrations of dopamine in the brain; L-Dopa is commonly used to treat dopamine-responsive dystonia and Parkinson’s disease.

The website says that the ‘smart drug’ is designed to provide users with “immediate, noticeable uplift of [their] subjective experience within 20 minutes of taking it, as well as long-term benefits to [their] neurology and overall physiologic functioning.” For people climbing their way up in Silicon Valley, it’s a small price to pay. What would you do with 10 percent more productivity, time, income, or intelligence?

For new medicines, turn to pioneers 05-21





Most transformative medicines originate in curiosity-driven science, evidence says....

Would we be wise to prioritize “shovel-ready” science over curiosity-driven, fundamental research programs? In the long term, would that set the stage for the discovery of more medicines?
To find solid answers to these questions, scientists at Harvard and the Novartis Institute for Biomedical Research (NIBR), publishing in Science Translational Medicine, looked deep into the discovery of drugs and showed that, in fact, fundamental research is “the best route to the generation of powerful new medicines.”

“The discoveries that lead to the creation of a new medicine do not usually originate in an experiment that sets out to make a drug. Rather, they have their origins in a study — or many studies — that seek to understand a biological or chemical process,” said Mark Fishman, one of three authors of the study. “And often many years pass, and much scientific evidence accumulates, before someone realizes that maybe this work holds relevance to a medical therapy. Only in hindsight does it seem obvious.”

Fishman is a professor in the Harvard Department of Stem Cell and Regenerative Biology, a faculty member of the Harvard Stem Cell Institute, and former president of NIBR. He is a consultant for Novartis and MPM Capital, and is on the board of directors of Semma Therapeutics and the scientific advisory board of Tenaya Therapeutics.

CRISPR-Cas9 is a good example of discovery biology that opened new opportunities in therapeutics. It started as a study of how bacteria resist infection by viruses. Scientists figured out how the tools that bacteria use to cut the DNA of an invading virus could be used to edit the human genome, and possibly to target genetic diseases directly.

The origins of CRISPR-Cas9 were not utilitarian, but those discoveries have the potential to open a new field of genomic medicine.


Blood pressure medication is another example of how fundamental discoveries can lead to transformative medicines.

People who suffer from high blood pressure often take drugs that act by blocking the angiotensin-converting enzyme. Those medicines would never have been created without the discovery of the role of renin (a renal extract) in regulating blood pressure in 1898, or without the discovery of angiotensin in 1939, or without the solid understanding of how the enzyme works, shown in 1956.

This work was not tied earlier to making pills for hypertension, mainly because hypertension was generally believed to be harmless until the 1950s, when studies showed its relationship to heart disease. Before then, the control of blood pressure was itself a fundamental science, beginning with Stephen Hales’ measurement of blood pressure in a horse in 1733.

The discovery of ACE inhibitors really reflects the convergence of two fields of fundamental, curiosity-driven discovery.

Yet some observers believe that projects that can demonstrate up front that they could produce something useful should take priority over projects that explore fundamental questions. Would there be many more medicines if academics focused more on programs with practical outcomes? How would that shift affect people in the future?

To find answers, Fishman and his colleagues investigated the many scientific and historical paths that have led to new drugs. The study they produced is a contemporary look at the evidence linking basic research to new medicines.

The authors used a list of the 28 drugs defined by other scientists as the “most transformative” medicines in the United States between 1985 and 2009. The group examined:
Whether the drug’s discovery began with an observation about the roots of disease;
Whether the biologist believed that it would be relevant to making a new medicine; and
How long it took to realize that.

To mitigate bias, the researchers repeatedly corroborated the assignment with outside experts.
They found that eight out of 10 of the medicines on their list led back to a fundamental discovery — or series of discoveries — without a clear path to a new drug.

The average time from discovery to new drug approval was 30 years, the majority of which was usually spent in academia, before pharmaceutical or biotechnology companies started the relevant drug development programs.

Fishman concluded, “We cannot predict which fundamental discovery will lead to a new drug. But I would say, from this work and my experiences both as a drug discoverer and a fundamental scientist, that the foundation for the next wave of great drugs is being set today by scientists driven by curiosity about the workings of nature.”

What industry and academic leaders say..

Leaders in biomedicine from industry, business, and academia warmly welcome this new body of evidence, as it supports the case for funding curiosity-driven, non-directed, fundamental research into the workings of life.

“This perspective on drug discovery reminds all of us that while many in both industry and academia have been advocating for a more rational approach to R&D, the scientific substrate we depend on results from a less than orderly process. The impact of basic research and sound science is often unpredictable and underestimated. With several telling examples, the authors illustrate how they can have a ripple effect through our field.”

– Jean-François Formela, M.D., Partner, Atlas Venture...

“The paper presents a compelling argument for investing in fundamental, curiosity-driven science. If it often takes decades to recognize when a new discovery should prompt a search for targeted therapeutics, we should continue to incentivize academic scientists to follow their nose and not their wallets.”

– George Daley, M.D., Ph.D., Dean of the Faculty of Medicine, Caroline Shields Walker Professor of Medicine, and Professor of Biological Chemistry and Molecular Pharmacology at Harvard Medical School

“There is a famous story of a drunk looking for his lost keys under a streetlight because the light is better there. As Mark reminds us, if we only look for cures where the light has already shone, we will make few if any new discoveries. Basic research shines a light into the dark corners of our understanding, and by that light we can find wonderful new things.”

— Laurie Glimcher, M.D., President and CEO of the Dana-Farber Cancer Institute and Richard and Susan Smith Professor of Medicine at Harvard Medical School

“The importance of fundamental discovery to advances in medicine has long been a central tenet of academic medicine, and it is wonderful to see that tenet supported by this historical analysis. For those of us committed to supporting this pipeline, it is a critical reminder that young scientists must be supported to pursue out-of-the-box questions and even new fields. In the end, that is one of the key social goods that a research university provides to future generations.”

— Katrina Armstrong, M.D., M.S.C.E., Physician-in-Chief, Department of Medicine, Massachusetts General Hospital

“Human genetics is powering important advances in translational medicine, opening new doors to treatments for both common and rare diseases at an increasingly rapid pace. Yet, these discoveries still require fundamental, basic scientific understanding into the drug targets’ mechanism of action. In this way, the potential of the science can be unlocked through a combination of curiosity, agility, and cross-functional collaboration to pursue novel therapeutic modalities like gene and cellular therapies, living biologics, and devices. This paper illustrates the value of following the science with an emphasis on practical outcomes and is highly relevant in today’s competitive biopharmaceutical environment, where much of the low-hanging fruit has already been harvested.”

– Andy Plump, M.D., Ph.D., Chief Medical and Scientific Officer, Takeda Pharmaceutical Co.

“Medicine depends on scientists asking questions, collectively and over generations, about how nature works. The evidence provided by Fishman and colleagues supports an already strong argument for continued and expanded funding of our nation’s primary source of fundamental science: the NIH and the NSF.”

– Douglas Melton, Ph.D., Xander University Professor at Harvard, Investigator of the Howard Hughes Medical Institute, and co-director of the Harvard Stem Cell Institute

“Just as we cannot translate a language we do not understand, translational medicine cannot exist without fundamental insights to be converted into effective therapies. In their excellent review, Fishman and his colleagues bring the factual evidence needed to enrich the current debate about the optimal use of public funding of biomedical research. The product of public research funding should be primarily fundamental knowledge. The product of industrial R&D should be primarily transformative products based on this knowledge.”

— Elias Zerhouni, M.D., President of Global R&D at Sanofi and former Director of the National Institutes of Health (2002–2008)

“Fundamental research is the driver of scientific knowledge. This paper demonstrates that fundamental research led to most of the transformative medicines approved by the FDA between 1985 and 2009. Because many genes and genetic pathways are evolutionarily conserved, discoveries made from studies of organisms that are highly tractable experimentally, such as yeasts, worms, and flies, have often led to and been integrated with findings from studies of more complex organisms to reveal the bases of human disease and identify novel therapeutic targets.”

– H. Robert Horvitz, Nobel Laureate; David H. Koch Professor, Member of the McGovern Institute for Brain Research and of the David H. Koch Institute for Integrative Cancer Research, and Howard Hughes Medical Institute Investigator at Massachusetts Institute of Technology

“This meticulous and important study of the origin of today’s most successful drugs finds convincingly that the path to discovery lies through untargeted fundamental research. The authors’ clear analysis is an effective counter to today’s restless investors, academic leaders, and philanthropists, whose impatience with academic discovery has itself become an impediment to the conquest of disease.”

— Marc Kirschner, John Franklin Enders University Professor, Department of Systems Biology, Harvard Medical School

“Some ask if there is a Return on Investment (ROI) in basic biomedical research. With transformative therapies as the ‘R,’ this work traces the path back to the starting ‘I,’ and repeatedly turns up untargeted academic discoveries — not infrequently, two or more that are unrelated to each other. Conclusion? A nation that wants the ‘R’ to keep coming must maintain, or better, step up the ‘I’: that is, funding for curiosity-driven, basic research.”

View at the original source

A Resolution Revolution: Single-cell Sequencing Techniques 05-28








Despite its promise, a lack of spatial-temporal context is one of the challenges to making the most of single-cell analysis techniques. For example, information on the location of cells is particularly important when looking at how a common form of early-stage breast cancer, called ductal carcinoma in situ (DCIS), progresses to a more invasive form, called invasive ductal carcinoma (IDC). “Exactly how DCIS invasion occurs genomically remains poorly understood,” said Nicholas Navin, Ph.D., associate professor of Genetics at the University of Texas MD Anderson Cancer Center. Navin is a pioneer in the field, having developed one of the first methods for single-cell DNA sequencing (scDNA-seq).

Cellular spatial data is critical for knowing whether tumor cells are DCIS or IDC. So, Navin developed topographical single-cell sequencing (TSCS). Navin and a team of researchers published their findings in February 2018 in Cell. “What we found was that, within the ducts, mutations had already occurred and had generated multiple clones and those clones migrated into the invasive areas,” Navin said.

Navin and his colleagues are also using single-cell techniques to study how triple-negative breast cancer becomes resistant to the standard form of treatment for the disease, neo-adjuvant chemotherapy. In that work, published in an April 2018 online issue of Cell, using scDNA-seq and scRNA-seq, Navin and his colleagues found that responses to chemotherapy were pre-existing, and thus adaptively selected. However, the expression of resistant genes was acquired by subsequent reprogramming as a result of chemotherapy. “Our data raise the possibility of therapeutic strategies to overcome chemoresistance by targeting pathways identified in this study,” Navin said.

Revealing Complexity

The authors of research published in 2017 in Genome Biology also identified lineage tracing as one of the technologies that will “likely have wide-ranging applications in mapping developmental and disease-progression trajectories.” In March 2018, researchers published an online study in Nature in which they combined single-cell analysis with a lineage tracing technique, called GESTALT (genome editing of synthetic target arrays for lineage tracing), to define cell type and location in the juvenile zebrafish brain.

The combined technique, called scGESTALT, uses CRISPR-Cas9 to perform the lineage tracing and single-cell RNA sequencing to extract the lineage records. Cas9-induced mutations accumulate in a CRISPR barcode incorporated into an animal’s genome. These mutations are passed on to daughter cells and their progenies over several generations and can be read via sequencing. This information has allowed researchers to build lineage trees. Using single-cell analysis, the team could then determine the diversity of cell types and their lineage relationships. Collectively, this work provided a snapshot of how cells and cell types diverge in lineages as the brain develops. “Single-cell analysis is providing us with a lot of information about small differences at cell type-specific levels, information that is missed when looking at the tissue-wide level,” said Bushra Raj, Ph.D., a postdoctoral fellow in Alex Schier’s lab at Harvard University and first author on the paper.
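The reconstruction relies on a simple property: a cell inherits every barcode edit its ancestors accumulated, so the nesting of edit sets recovers ancestry. A toy sketch (it assumes no recurrent or dropped edits, complications that real GESTALT data must handle):

```python
# toy lineage reconstruction from CRISPR barcode edits: each cell carries all
# of its ancestors' edits, so the largest proper subset identifies the parent
cells = {  # hypothetical cells and their accumulated barcode edits
    "progenitor": {"e1"},
    "neuron_a":   {"e1", "e2"},
    "neuron_b":   {"e1", "e2", "e4"},
    "glia_a":     {"e1", "e3"},
}

def parent(name):
    ancestors = [c for c in cells if cells[c] < cells[name]]  # proper subsets
    return max(ancestors, key=lambda c: len(cells[c]), default="root")

for name in cells:
    print(f"{name} <- {parent(name)}")
# progenitor <- root; neuron_a <- progenitor; neuron_b <- neuron_a; glia_a <- progenitor
```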

Raj’s collaborators included University of Washington’s Jay Shendure, Ph.D., and Harvard Medical School’s Allon Klein, Ph.D., pioneers in the field of single-cell analysis. The team sequenced 60,000 cells from the entire zebrafish brain across multiple animals. The researchers identified more than 100 cell types in the juvenile brain, including several neuronal types and subtypes in distinct regions, and dozens of marker genes. “What was unknown was the genetic markers for many of these cell types,” Raj explained. “This work is a stepping stone,” she added. “It’s easy to see how we might one day compare normal gene–expression maps of the brain and other organs to help characterize changes that occur in congenital disease or cancer.”

Raj credits single-cell analysis with accelerating the field of developmental biology.
“People have always wanted to work at the level of the cell, but the technology was lacking,” she said. “Now that we have all of these sequenced genomes, and now that we have these tools that allow us to compartmentalize individual cells, this seems like the best time to challenge ourselves as researchers to understand the nitty-gritty details we weren’t able to assay before.”

A gold leaf paint and ink depiction of the Plasmodium falciparum lifecycle by Alex Cagan.
Human disease-relevant scRNA-seq is not just for vertebrates. For example, a team of researchers at the Wellcome Sanger Institute are working on developing a Malaria Cell Atlas. Their goal is to use single-cell technology to produce gene activity profiles of individual malaria parasites throughout their complex lifecycle. “The sequencing data we get allows us to understand how the parasites are using their genomes,” said Adam Reid, Ph.D., a senior staff scientist at the Sanger. In March 2018, the team published the first part of the atlas, detailing its results for the blood stage of the Plasmodium lifecycle in mammals. Reid contends these results will change the fight against malaria. “Malaria research is a well-funded and very active area of research. We’ve managed to get quite a bit of understanding of how the parasite works. What single-cell analysis is doing is allowing us to better understand the parasite in populations. We thought they were all doing the same thing. But, now we can see they are behaving differently.”

The ability to amplify very small amounts of RNA was the key innovation for malaria researchers. “When I started doing transcriptome analysis 10 years ago, we needed to use about 5 micrograms of RNA. Now, we can use 5 picograms, 1 million times less,” Reid said. That innovation allows scientists like Reid to achieve unprecedented levels of resolution in their work. For Reid, increased resolution means there is hope that science will be able to reveal how malaria evades the immune system in humans and how the parasites develop resistance to drugs. Reid predicted the Atlas will serve as the underpinning for work by those developing malaria drugs and vaccines. “They will know where in the life cycle genes are used and where they are being expressed,” he said. Drug developers can then target those genes. The Atlas should be complete in the next two years, Reid added.
In the meantime, Reid and his colleagues are focused on moving their research from the lab to the field, particularly to Africa. “We want to look at these parasites in real people, in real settings, in real diseases states,” he explained. Having access to fresher samples is one reason to take the research into the field. “The closer we can get to the disease, the better chance we have of making an impact.” Reid anticipates that RNA-seq technology is on the verge of being portable enough to go into the field (see Preparing scRNA-seq for the Clinic & the Field). Everything from instrumentation to software is developing rapidly, he said. Reid also said that the methods used to understand the malaria parasite will likely be used to understand and create atlases for other disease vectors.

Path Ahead

It is clear to those using single-cell analysis in basic research that the path ahead includes using the techniques in the clinic. “As the technologies become more stable, there will be a lot of opportunities for clinical applications,” Navin said. These include early detection by sampling for cancer markers in urine, prostate fluid, and the like. It also includes non-invasive monitoring of rare circulating tumor cells, as well as personalizing treatment decisions using specific markers. These methods will be particularly useful in the case of samples that today would be labeled QNS, or ‘quantity not sufficient.’ “Even with QNS samples, these methods allow you to get high-quality datasets to guide treatment decisions.” 


