Articles on this Page
- 02/04/18--18:48: The Science of Making Good Decisions 01-05
- 02/10/18--06:16: National Employability Report 02-10
- 02/14/18--05:01: Turning workers into 'super workers' with robotic suits 02-14
- 02/15/18--19:08: How to Get People Addicted to a Good Habit 02-16
- 02/15/18--19:55: Developing Novel Drugs 02-16
- 02/15/18--20:06: Developing Novel Drugs 2 02-16
- 02/16/18--09:26: Vishal Sikka: Why AI Needs a Broader, More Realistic Approach 02-16
- 04/14/18--10:55: Can the Minerva Model of Learning disrupt higher Education...04-14
- 04/21/18--19:53: A Chink in Bacteria's Armor 04-22
- 04/22/18--19:01: Silicon Valley is going back to an ancient technology: People 04-23
- 04/23/18--08:58: The dawn of precision medicine 04-23
- 04/24/18--20:05: This is the relationship between money and happiness 04-25
- 05/11/18--20:50: An AI that can predict cell structures 05-12
- 05/20/18--19:49: For new medicines, turn to pioneers 05-21
- 05/27/18--21:10: A Resolution Revolution: Single-cell Sequencing Techniques 05-28
The world is full of disagreements. However, one thing that everyone can agree upon is that at some point in our lives we’ve all made a decision we regret. If someone tells you that they’ve never made a bad decision, most likely they’re either lying or have convinced themselves that their bad decision was good.
Life would be easier if we intrinsically knew how to make good decisions. We would find more success in our careers and personal lives. However, history is filled with bad decision-making. One study found that when doctors said they were “completely certain” about a patient’s diagnosis, they were wrong 40% of the time.
In 1975, the Eastman Kodak company held the majority share of the US film market. Kodak decided to hold off on sharing its development of the world’s first digital camera because it feared the device would destroy its film business. Then, in the 1980s, Kodak passed on becoming the official film of the 1984 Olympics. Fuji won that honor and used it to become a major player in the US marketplace. In 2012, Kodak filed for bankruptcy.
It’s not that people aren’t capable of making good decisions, but that they don’t use the correct decision-making methodology. Having sound gut instincts is great, but that’s only a start. Taking the time to define the problem, recognize how emotions affect decisions, learn how to utilize emotions for better decision making, and know when enough is enough are all parts of making good decisions.
Define the Problem
It used to be difficult to find information. There wasn’t the Internet, a 24/7 news media cycle, cellphones, or social media. You couldn’t just go online to research databases from your couch to find what you needed.
In today’s world, there’s a glut of information. It’s easy to become lost in that information. It’s easy to make mistakes, because all that extra information gets in the way of you seeing the real issue.
Malcolm Gladwell’s book Blink: The Power of Thinking Without Thinking shows how too much information can harm people. It makes the case that when doctors are attempting to diagnose patients, “extra information is more than useless. It’s harmful.” Doctors need to know the pertinent information for that patient, not every piece of data that exists on a particular diagnosis. Knowing too much overcomplicates and confuses an issue.
When making good decisions, it’s not about knowing as much information as you can; it’s about knowing the right information. Knowing the correct information helps you to identify the root problem, and solve it.
A quality of a good CEO is the ability to weed through the massive amount of information in the world and identify the pertinent information in a timely manner. This ability takes objectivity. A CEO cannot be so swayed by his emotions, his opinions, and his beliefs that he ignores the facts. However, all CEOs are human, and since all humans experience emotions, all decisions humans make are to some extent affected by emotions.
Emotions are an intrinsic part of being human. This means that no matter how hard we try to keep subjectivity out of decision-making, we will not succeed. What we can do is learn how to recognize emotions and use them to our advantage.
One of the major pitfalls of emotions is that we aren’t all that great at controlling our gut reactions. We’ve all regretted doing something in the heat of the moment. Our rational self vanished and we did or said something that was detrimental. This immediate and overpowering emotional response to some stimulus is known as an amygdala hijack.
In the professional world, an amygdala hijack could destroy a person’s career.
Take a look at Eliot Spitzer. He was the governor of New York until he resigned because of involvement in a prostitution ring. Spitzer had graduated from Princeton and Harvard Law School. Before becoming governor, he’d been an aggressive and successful prosecutor of corporate fraud and organized crime. But his emotions, his need for pleasure, overwhelmed his rational thinking. He was hijacked by his amygdala, the emotional center of the brain, made poor decisions, and lost everything.
An amygdala hijack doesn’t just affect the person experiencing the uncontrolled gut reaction. Emotions are contagious. They spread from person to person. If the CEO of a company is hijacked, his emotional outburst may influence his co-workers and subordinates. The company’s ability to make good decisions may decrease, as collaboration among employees deteriorates.
A person’s ability to control his emotions is central to making good decisions. However, focusing solely on logic (only paying attention to flow charts, market movement, risk versus reward, and statistical trends) can also lead to horrendous flaws in reasoning and poor decision making.
The Great Depression resulted from a variety of factors, including the 1929 stock market crash, the failure of over 9,000 banks in the 1930s, a vast reduction in consumer purchasing, high tariffs on foreign imports, and the drought of 1930, which helped turn the afflicted region into what became known as the “Dust Bowl.”
Similarly, the rapid decline of household consumption and housing construction contributed to the Great Recession. Beginning in the 1980s, a massive, unsustainable boom in consumer spending took hold while income growth slowed. People borrowed more and more money, indebting themselves through novel mortgage lending plans, until 2006, when interest rates rose, refinancing opportunities fell, lender credit dried up, and homeowners defaulted on their mortgages. Consumption stalled and American households cut spending sharply. The result was a steep decline in demand, and the Great Recession.
With both the Great Depression and Great Recession, the aftermaths led to realizations about widespread denial, greed, and lack of awareness. People, whether a bank or consumer, were busy wanting more, and therefore were blinded to the long-term implications of their theoretically logical solutions and to unrealized underlying emotions for making and having more. By understanding and embracing emotion, you can make better decisions.
Remaining calm is essential for good decision making. Once you realize what emotions are wrapped up in the problem you’re facing, you cannot let them overpower a well thought-out decision.
In 2009, over 61,000 tattoos were removed in the United States. Many of those tattoos had been the result of an emotional reaction. People didn’t think about whether or not they should get a tattoo. They let their short-term emotions take the reins and lost the ability to think clearly, instead of using their emotions to bolster their logical reasoning.
Some of the most successful decision makers are samurai and Special Forces soldiers. Why? Because they know how to stay calm and in control.
Samurai train as much mentally as they do physically because they believe that people should exhibit calmness both in battle and in everyday life. Situations should be approached with awareness and alertness, as well as an unbiased attitude.
Special Forces units are extremely selective. They look for individuals who are not only skilled and determined, but who are also emotionally stable. Navy SEALs use four techniques to increase recruits’ chances of passing their program and of making good decisions in the field.
Self-awareness is “having a clear perception of your personality, including strengths, weaknesses, thoughts, beliefs, motivation, and emotions.” Without self-awareness, you cannot empathize and without empathy, you cannot understand other people. Without understanding other people, you can’t identify what motivates them, how they’ll respond, and what opportunities exist. Without these things, you cannot make good decisions.
One very important emotion is empathy. Empathy is “the ability to sense other people’s emotions, coupled with the ability to imagine what someone else might be thinking or feeling.” To empathize, you must know yourself. You must possess self-awareness.
As the Harvard Business Review stated, “Executives who fail to develop self-awareness risk falling into an emotionally deadening routine that threatens their true selves. Indeed a reluctance to explore your inner landscape not only weakens your own motivation but can also corrode your ability to inspire others.”
Nobel Prize winner Daniel Kahneman developed a theory explaining why people don’t make 100% rational economic decisions. Kahneman stated that there are two overarching thought processes:
“System 1” is the “automatic, intuitive mind.” It carries out the majority of people’s everyday activities. It’s the “going with your gut,” and what you perceive to be correct. System 1 is efficient, but not always accurate.
“System 2” is the “controlled, deliberative, analytical mind.” It performs the functions that System 1 cannot. System 2 requires a great deal of focus, but can sometimes make up for System 1’s inaccuracy.
While System 1 doesn’t involve actively thinking about what you’re doing—you already know how to walk and make toast, so you don’t have to deliberately focus on those activities—System 2 requires you to produce certain thoughts that enable you to perform a particular function.
However, System 1 causes people to form snap judgments about others, because System 1 contains emotions and biases. When you meet someone, you form an opinion of them within the first few seconds. Say you shake someone’s hand and the handshake is strong, so you believe that the person you’re meeting is confident. This opinion may be correct, but oftentimes it isn’t. You need more information before forming an accurate opinion. This is where System 2 comes in. System 2 forces you to step back, slow down, and analyze the situation, so that you’re less likely to make a rash decision.
Follow this link to take a quiz from Kahneman’s book, Thinking, Fast and Slow, to see if you’re immune to logical inaccuracies: Thinking, fast or slow: can you do it?
Know When Enough Is Enough
After identifying the problem, recognizing emotions, and utilizing emotions, you have to know when to make a decision. In life, instances where decisions don’t need to be made are few and far between, and when a decision must be determined, there’s usually a timeline attached.
Striving to make the perfect decision will lead to stress and the feeling of being overwhelmed, and if you’re overwhelmed, your ability to make good decisions decreases drastically.
As James Waters, a former Deputy Director of Scheduling at the White House, stated: “Being able to make decisions when you know you have imperfect data is so critical.”
Waters was instructed that “A good decision now is better than a perfect decision in two days.” It’s a lesson he’s carried with him, and one that many analysts can’t believe.
In the business world, analysts play a huge role. Their job is to collect and analyze as much data as possible to produce the best result for a particular company. However, Waters encourages “people to make a decision with imperfect information.” He believes that the ability to make decisions with incomplete data is “really important for leaders to incorporate. It’s something that the White House has to do all the time. It’s great to analyze things but at some stage you’re just spinning your wheels.”
The key findings of the present study are as follows:
No significant improvement in employability in the last four years
We did our previous large-scale study of the employability of engineers in 2014. We had found that only 18.43% of engineers were employable for the software services sector, 3.21% for software products and 39.84% for a non-functional role such as Business Process Outsourcing.
Unfortunately, we see no substantial progress in these numbers. Today they stand at 17.91%, 3.67% and 40.57% respectively for IT Services, IT Products and Business Process Outsourcing. This is despite the fact that the number of engineering seats has not increased in the past year. We are not inferring that all initiatives for employability improvement have failed; there may well be pockets of excellence. However, the need of the hour is to find these pockets and scale them up to make an exponential impact on employability. This is crucial for India to continue its growth story and achieve the PM's vision of India becoming the human resource provider for the whole world.
Only 3.84% employable for startup software engineering jobs
Investment in and growth of technology startups is the new business story in India. Ratan Tata recently said that India today resembles Silicon Valley in the 1990s. To sustain this growth, we need candidates with higher technical caliber, an understanding of new products and requirements, and the attitude to work in a startup. With this in mind, we specifically captured employability for startup technology roles this time. Unfortunately, we find that only 3.84% of engineers qualify for a startup technology role. This is a big concern and will surely hamper the growth of startups in India. It may also dilute the market with a lot of low-quality products.
More aspiration to work for startups
Last year, we had found that 6% of students were interested in working for a startup. This year the figure is up by 33%, to 8%. Students from tier 1 colleges are the most motivated to work in startups, and males are strikingly more inclined than females to work with startups. While this is good news, there is still a long way to go, as only a handful of candidates (8%) are interested in working for startups.
Higher salary aspiration and higher salary for the same skill
This year, we find that students have higher salary aspirations. Last year the median salary aspiration was INR 310 thousand; it is now INR 340 thousand, and the market is also paying higher salaries. The median salary for the same skill was INR 282 thousand last year and is INR 313 thousand this year. This means that talent is getting more expensive, and we believe this is due to the huge demand for manpower in the technology sector and a lack of supply. However, it is important to note that this supply is artificially low: more than 25% of employable candidates are beyond the top 750 engineering colleges. Companies miss out on this pool of candidates, and to make sure that the war for talent doesn't send salaries out of control, we need better meritocratic matching of students with jobs.
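The year-over-year changes above can be sanity-checked with a quick calculation. This is a sketch using the medians quoted in the report, in INR thousands; the helper function is ours, not the report's:

```python
# Year-over-year change in the report's median figures (INR thousands).
def growth_pct(prev: float, curr: float) -> float:
    """Percentage change from prev to curr."""
    return (curr - prev) / prev * 100

aspired = growth_pct(310, 340)   # median salary aspiration: 310 -> 340
paid = growth_pct(282, 313)      # median salary for the same skill: 282 -> 313

print(f"aspiration grew {aspired:.1f}%")   # aspiration grew 9.7%
print(f"actual pay grew {paid:.1f}%")      # actual pay grew 11.0%
```

Pay for the same skill rose slightly faster than aspirations did, consistent with the report's claim that talent is getting more expensive.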
A version of this post originally appeared on LinkedIn. All information in this post is publicly accessible and does not use any private or confidential data. Opinions and views expressed are Ms. Urbanski's and do not necessarily represent LinkedIn Corporation.
Recently, Canada's Prime Minister, Justin Trudeau, surpassed 2 million followers on LinkedIn, a year after becoming an official Influencer and two months after being named one of LinkedIn’s Top Voices of 2017. The engagement he’s garnered with LinkedIn’s members proves that content related to the economy, government news and information, and politics has a place on our platform, in our newsfeed and with our members.
But as savvy as Trudeau’s social team was, they were new to using LinkedIn and experienced the same learning curve that any of us do when we first start using a new platform. This post shares what I have learned working with Trudeau’s team over the past year and how you can make the most out of using LinkedIn’s platform.
We saw the most success when Trudeau was posting every few days. Letting a couple of weeks pass between updates limits momentum and your ability to stay top-of-mind with your network. Social media channels have algorithms that reward high engagement, so you always want your voice to be circulating throughout your network through a mix of posts, comments, questions, likes, and shares.
Tag people or companies that you mention
We saw the biggest spike in followers and engagement when Trudeau promoted his meeting with Microsoft CEO Satya Nadella in May 2017 and tagged both him and Brad Smith in the post. We saw the same effect on his most recent post about meeting with Satya at the World Economic Forum. Tagging helps members discover who other people are and notifies the members that you mentioned them, increasing the chance that they will engage with the post and that it will be seen by their networks (as was the case when LinkedIn CEO Jeff Weiner and Satya Nadella liked and commented on Trudeau's posts, exposing them to their 7 million and 3 million followers, respectively).
Trudeau’s team loves to use relevant hashtags to promote their work on important issues or associate themselves with world events. Using hashtags makes your content discoverable and helps you to join popular conversations. Some great examples are #WEF18, #GEW2017, #GoNorth17 and #CPTPP.
Take the time to find strong images
Rich media and photographs perform better than generic images. In the early days of posting on LinkedIn, Trudeau would share links from his official press releases, which pulled in a generic image of Canada’s coat of arms. You can see a significant difference in engagement between those posts and the ones featuring high-quality images of him meeting with world leaders, speaking at an event or showcasing the cities he visits across Canada.
Use LinkedIn’s native video feature
People love video. They love to see who you truly are, what you’re doing and how you sound. It makes them feel like they are part of the moment or getting a sneak peek behind the scenes. Prime Minister Trudeau was officially the first world leader to use LinkedIn’s native video feature, which was released in May 2017, and the engagement was off the charts. In his video he spoke directly to his LinkedIn followers, thanking them for their engagement and asking what they want to hear more of. The team then used the feedback they received to guide some of the content they posted moving forward, based on the comments left by members and the topics they are most interested in.
Posting content on a professional platform like LinkedIn doesn't mean you have to be a robot. We see a huge difference in interest and engagement when Trudeau posts with an authentic voice — using our native video feature or showcasing his meetings across Canada and the world — versus scripted long-form posts like his official PMO Press Releases. For most members, you don't have the same intense public pressure weighing on your shoulders like our PM, so you have an even greater opportunity to showcase yourself as an individual. I know this part can sometimes be scary but don't worry, your network will do a great job of supporting you and making you feel good. Just try and see!
Customize your URL
Ever noticed the weird mix of numbers and letters following your name in your LinkedIn URL? You can easily edit this on the right-hand rail of your LinkedIn profile under the "Contact and Personal Info" section. Creating a custom URL allows your content to easily be found in online search tools and can be included in your email signature, on business cards or other resources to increase traffic to your LinkedIn page.
But, let's take things to the next level!
The success Trudeau and his team have seen in using LinkedIn has been amazing! The climb to more than 2 million followers has been fun to watch. But there are a few things that Prime Minister Trudeau could be doing more of (and maybe you can too!).
Engage with other people’s content
Engaging with other people's content by liking, sharing or commenting helps to expand the conversation and the relationships you have with your network. It also helps to grow your professional brand and increase your followers. Engagement actions help to bring an idea or topic to life and that's when the real energy of networking happens. You'd be amazed at how many new contacts I’ve made on LinkedIn by commenting on their posts or sharing their content.
Add all your business contacts
As you can tell through his updates, our Prime Minister has a very busy travel schedule where he meets with tons of world and business leaders. After these meetings, he should use LinkedIn to connect with these contacts and easily maintain their working relationship. Adding these connections to your network allows you to communicate news and updates at scale – which is a great efficiency tip for those who are as busy as the PM! (and even those who aren’t).
Follow other thought leaders
While Trudeau is one of the most followed world leaders on LinkedIn, there are many others posting great content related to some of today’s most important issues. Trudeau has a great opportunity to learn from what other thought leaders are doing, the type of content they’re posting, and the engagement results they see on various topics. Some of my favourite government thought leaders include Australian Prime Minister, Malcolm Turnbull; President of the French Republic, Emmanuel Macron; CIO for the Government of Canada, Alex Benay; Canada's Minister of Innovation, Science and Economic Development, Navdeep Bains; India's Prime Minister, Narendra Modi; Ontario Premier, Kathleen Wynne; and Mississauga's Mayor, Bonnie Crombie.
As a Canadian, I am grateful to have a Prime Minister who uses modern techniques to connect with people across our country and openly shares news and information. I believe that we can all benefit from a similar strategy to become better professionals and more effective leaders, advance our careers, achieve our goals, and collaborate with our peers.
The researchers were also mindful about which type of soap to use in the dispensers. Through pilot tests, they found that people preferred foam, for example. “They didn’t feel as clean when the soap wasn’t foamy,” Hussam says.
Importantly, the effects continued even after the households stopped receiving tickets and monitoring reports, suggesting that handwashing with soap was indeed a developable habit.
More importantly, the experiment resulted in healthier children in households that received a soap dispenser, with a 20 percent decrease in acute respiratory infections and a 30 to 40 percent decrease in loose stools on any given day, compared with children whose households did not have soap dispensers. Moreover, the children with soap dispensers ended up weighing more and even growing taller. “For an intervention of only eight months, that really surprised us,” Hussam says.
“Our results are consistent with the key predictions of the rational addiction model, expanding its relevance to settings beyond what are usually considered ‘addictive’ behaviors,” the researchers write.
In the incentives group, the promise of triple tickets didn’t affect behavior much, but, as Hussam notes, that may have been because the single tickets were already enough to get the children their most coveted prize: a school backpack. “Basically, we found that getting one ticket versus getting no tickets had huge effects, while going from one to three did little,” she says.
“Wherever we go, habits define much of what we do”
But in the monitoring group, handwashing rates increased significantly and immediately, not only for those who were monitored but also for those who were simply told to anticipate that their behavior would be tracked at a later date. “Simply knowing that handwashing will be more valuable in the future (because your behavior will be tracked so there’s a higher cost to shirking) makes people wash more today,” Hussam says.
This, Hussam hopes, is the primary takeaway of the study. While the experiment focused on a specific behavior in a specific area of India, the findings may prove valuable to anyone who is trying to develop a healthy addiction, whether it be an addiction to treating contaminated drinking water and using mosquito nets in the developing world, or an addiction to exercising every day and flossing every night in the developed world.
We analyze firms’ decisions to invest in incremental and radical innovation, focusing specifically on pharmaceutical research. We develop a new measure of drug novelty that is based on the chemical similarity between new drug candidates and existing drugs. We show that drug candidates that we identify as ex-ante novel are riskier investments, in the sense that they are subsequently less likely to be approved by the FDA.
However, conditional on approval, novel candidates are, on average, more valuable: they are more clinically effective, have higher patent citations, and lead to more revenue and higher stock market value. Using variation in the expansion of Medicare prescription drug coverage, we show that firms respond to a plausibly exogenous cash flow shock by developing more molecularly novel drug compounds, as opposed to more so-called “me-too” drugs. This pattern suggests that, on the margin, firms perceive novel drugs to be more valuable ex-ante investments, but that financial frictions may hinder their willingness to invest in these riskier candidates.
Over the past 40 years, the greatest gains in life expectancy in developed countries have come from the development of new therapies to treat conditions such as heart disease, cancer, and vascular disease.
At the same time, the development of new, and often incremental, drug therapies has played a large role in driving up health care costs, with critics frequently questioning the true innovativeness of expensive new treatments (Naci, Carter, and Mossialos, 2015). This paper contributes to our understanding of drug investment decisions by developing a measure of drug novelty and subsequently exploring the economic tradeoffs involved in the decision to develop novel drugs.
Measuring the amount of innovation in the pharmaceutical industry is challenging. Indeed, critics argue that “pharmaceutical research and development turns out mostly minor variations on existing drugs, and most new drugs are not superior on clinical measures,” making it difficult to use simple drug counts as a measure of innovation (Light and Lexchin, 2012). To overcome this challenge, we construct a new measure of drug novelty for small molecule drugs, which is based on the molecular similarity of the drug with prior drug candidates. Thus, our first contribution is to develop a new measure of pharmaceutical innovation.
We define a novel drug candidate as one that is molecularly distinct from previously tested candidates. Specifically, we build upon research in modern pharmaceutical chemistry to compute a pair-wise chemical distance (similarity) between a given drug candidate and any prior candidates in our data. This similarity metric is known as a “Tanimoto score” or “Jaccard coefficient,” and captures the extent to which two molecules share common chemical substructures. We aggregate these pairwise distance scores to identify the maximum similarity of a new drug candidate to all prior candidates. Drugs that are sufficiently different from their closest counterparts are novel according to our measure. Since our metric is based on molecular properties observed at the time of a drug candidate’s initial development, it improves upon existing novelty measures by not conflating ex-ante novelty with ex-post measures of success, such as receiving priority FDA review.
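The similarity computation described above can be sketched in a few lines, assuming each drug candidate has already been reduced to a set of substructure fingerprint bits. The fingerprints below are invented for illustration; real implementations use a chemistry toolkit to generate them from molecular structures:

```python
# Minimal sketch of the Tanimoto (Jaccard) similarity measure, with each
# drug represented as a set of chemical substructure fingerprint bits.
# The fingerprints below are made up for illustration, not real molecules.

def tanimoto(a: set, b: set) -> float:
    """Jaccard coefficient: shared substructures / total distinct substructures."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def max_similarity(candidate: set, prior_candidates: list) -> float:
    """Similarity of a new candidate to its closest previously tested drug.
    The novelty measure is low when this value is high."""
    return max((tanimoto(candidate, p) for p in prior_candidates), default=0.0)

# Hypothetical fingerprints (sets of substructure identifiers).
prior = [{1, 2, 3, 4, 5}, {2, 3, 6, 7}]
me_too = {1, 2, 3, 4, 6}   # shares most substructures with a prior drug
novel = {8, 9, 10, 11}     # shares none

print(round(max_similarity(me_too, prior), 2))  # 0.67
print(round(max_similarity(novel, prior), 2))   # 0.0
```

Under this sketch, a candidate whose maximum similarity to prior drugs is high (for instance, above the 0.8 level the paper discusses) would count as a “me-too” drug, while a low maximum similarity marks it as novel.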
In the United States, the sharpest decline in death rates over the period 1981 to 2001 came from the reduction in the incidence of heart disease (see Life Tables for the United States Social Security Area 1900-2100, https://www.ssa.gov/oact/NOTES/as120/LifeTables_Body.html; see also Lichtenberg (2013), which estimates mortality improvements associated with pharmaceuticals). One of the more vocal critics is Marcia Angell, a former editor of the New England Journal of Medicine. She argues that pharmaceutical firms increasingly concentrate their research on variations of top-selling drugs already on the market, sometimes called “me-too” drugs.
She concludes: “There is very little innovative research in the modern pharmaceutical industry, despite its claims to the contrary” (http://bostonreview.net/angell-big-pharma-bad-medicine). Indeed, empirical evidence appears to be consistent with this view; Naci et al. (2015) survey a variety of studies that show a declining clinical benefit of new drugs. Small molecule drugs, synthesized using chemical methods, constitute over 80% of modern drug candidates (Otto, Santagostino, and Schrader, 2014). We discuss larger drugs based on biological products in Section 3.6.
Our novelty measure based on molecular similarity has sensible properties. Pairs of drug candidates classified as more similar are more likely to perform the same function, that is, to share the same indication (disease) or target-action (mechanism). Further, drugs we classify as more novel are more likely to be the first therapy of their kind. In terms of secular trends, our novelty measure indicates a decline in the innovativeness of small molecule drugs: both the number and the proportion of novel drug candidates declined over the 1999 to 2014 period. Across our sample of drug candidates, over 15% of newly developed candidates have a similarity score above 0.8, meaning that they share more than 80% of their chemical substructures with a previously developed drug.
We next examine the economic characteristics of novel drugs in order to better understand the tradeoffs that firms face when deciding how to allocate their R&D resources. We begin by exploring how the novelty of a drug candidate relates to its private and social returns from an investment standpoint. Since measuring a drug’s value is challenging, we rely on several metrics. First, we examine drug effectiveness as measured by the French healthcare system’s assessments of clinical value-added, following Kyle and Williams (2017).
Since this measure is only available for a subset of approved drugs, we also examine the relationship between molecular novelty and the number of citations to a drug's underlying patents, which the innovation literature has long argued is related to estimates of economic and scientific value (see, e.g., Hall, Jaffe, and Trajtenberg, 2005). We also use drug revenues as a more direct proxy for economic value. However, since mark-ups may vary systematically between novel and "me-too" drugs—that is, drugs that are extremely similar to existing drugs—we also rely on estimates of their contribution to firm stock market values. Specifically, we follow Kogan, Papanikolaou, Seru, and Stoffman (2017) and examine the relationship between a drug's molecular novelty and the change in its firm's market valuation following either FDA approval or the granting of its key underlying patents.
Conditional on being approved by the FDA, novel drugs are on average more valuable. Specifically, relative to drugs entering development in the same quarter that treat the same disease (indication), a one-standard-deviation increase in our measure of novelty is associated with a 33 percent increase in the likelihood that a drug is classified as "highly important" by the French healthcare system; a 10 to 33 percent increase in the number of citations for associated patents; a 15 to 35 percent increase in drug revenues; and a 2 to 8 percent increase in firm valuations. To benchmark what this means, we note that the chemical structures for Mevacor and Zocor, depicted in Figure 1, share an 82% overlap.
However, novel drugs are also riskier investments, in that they are less likely to receive regulatory approval. Relative to comparable drugs, a one-standard deviation increase in novelty is associated with a 29 percent decrease in the likelihood that it is approved by the FDA. Thus, novel drugs are less likely to be approved by the FDA, but conditional on approval, they are on average more valuable.
To assess how ﬁrms view this tradeoﬀ between risk and reward at the margin, we next examine how they respond to a positive shock to their (current or expected future) cashﬂows. Speciﬁcally, if ﬁrms that experience a cashﬂow shock develop more novel—rather than molecularly derivative—drugs, then this pattern would suggest that ﬁrms value novelty more on the margin.
Here, we note that we are implicitly assuming that treated firms have a similar set of drug development opportunities as control firms, and, moreover, that financial frictions limit firms' ability to develop new drug candidates. Indeed, if firms face no financing frictions, then, holding investment opportunities constant, cashflow shocks should not impact their development decisions. However, both theory and existing empirical evidence suggest that a firm's cost of internal capital can be lower than its cost of external funds.5 In this case, an increase in cashflows may lead firms to develop more or different drugs by increasing the amount of internal funds available for drug development. Even if this increase in cashflows occurs with some delay, firms might choose to respond today, either because it increases the firm's net worth, and hence lowers its effective risk aversion (see, e.g., Froot, Scharfstein, and Stein, 1993), or because this anticipated increase in profitability relaxes constraints today.
We construct shocks to expected firm cashflows using the introduction of Medicare Part D, which expanded US prescription drug coverage for the elderly. This policy change differentially increased profits for firms with more drugs that target conditions common among the elderly (Friedman, 2009). However, variation in the share of elderly customers alone does not necessarily enable us to identify the impact of increased cashflows. This is because the expansion of Medicare impacts not only the profitability of the firm's existing assets but also the value of its future investment opportunities.
For a theoretical argument, see Myers and Majluf (1984). Consistent with theory, several studies have documented that ﬁnancing frictions play a role in ﬁrm investment and hiring decisions. Recent work on this topic examines the response of physical investment (for instance, Lin and Paravisini, 2013; Almeida, Campello, Laranjeira, and Weisbenner, 2011; Frydman, Hilt, and Zhou, 2015); employment decisions (Benmelech, Bergman, and Seru, 2011; Chodorow-Reich, 2014; Duygan-Bump, Levkov, and Montoriol-Garriga, 2015; Benmelech, Frydman, and Papanikolaou, 2017); and investments in R&D (see e.g. Bond, Harhoﬀ, and van Reenen, 2005; Brown, Fazzari, and Petersen, 2009; Hall and Lerner, 2010; Nanda and Nicholas, 2014; Kerr and Nanda, 2015). These frictions may be particularly severe in the case of R&D: Howell (2017) shows that even relatively modest subsidies to R&D can have a dramatic impact on ex-post outcomes.
To isolate the causal impact of cash ﬂows on development decisions, we exploit a second source of variation: remaining drug exclusivity (patent life plus additional exclusivity granted by the FDA). Even among ﬁrms with the same focus on the elderly, those with more time to enjoy monopoly rights on their products are likely to generate greater proﬁts.
With these two dimensions of variation—elderly share and remaining exclusivity—we can better control for confounders arising from both individual dimensions. For example, firms with more existing drugs for the elderly may differentially see a greater increase in investment opportunities as a result of Part D, even absent any changes to cash flow.
Meanwhile, ﬁrms with longer remaining exclusivity periods on their products may have diﬀerent development strategies than ﬁrms whose drugs face imminent competition, again, even absent changes to cash ﬂows. Our strategy thus compares ﬁrms with the same share of drugs sold to the elderly and the same remaining exclusivity periods across their overall drug portfolio, but that diﬀer in how their remaining patent exclusivity is distributed across drugs of varying elder shares. This strategy allows us to identify diﬀerences in expected cash ﬂow among ﬁrms with similar investment opportunities, and at similar points in their overall product lifecycle.
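The two-dimensional identification strategy above can be sketched in a few lines. The exposure measure below, an exclusivity-weighted elderly share, and all field names and numbers are hypothetical illustrations, not the paper's actual variable construction:

```python
# Hypothetical sketch: Part D "treatment" exposure as the exclusivity-
# weighted average elderly share of a firm's drug portfolio. Field names
# and numbers are illustrative, not the paper's definitions.

def part_d_exposure(drugs):
    """drugs: list of dicts with 'elderly_share' (0-1) and remaining
    'exclusivity_years'. Higher values proxy for a larger expected
    cash-flow shock from Medicare Part D."""
    total_exclusivity = sum(d["exclusivity_years"] for d in drugs)
    if total_exclusivity == 0:
        return 0.0
    return sum(d["elderly_share"] * d["exclusivity_years"]
               for d in drugs) / total_exclusivity

# Two firms with the same average elderly share (0.5) and the same total
# remaining exclusivity (12 years), distributed differently across drugs:
firm_a = [{"elderly_share": 0.9, "exclusivity_years": 10},
          {"elderly_share": 0.1, "exclusivity_years": 2}]
firm_b = [{"elderly_share": 0.9, "exclusivity_years": 2},
          {"elderly_share": 0.1, "exclusivity_years": 10}]
print(round(part_d_exposure(firm_a), 2))  # 0.77: long exclusivity on elderly drugs
print(round(part_d_exposure(firm_b), 2))  # 0.23: elderly drugs near expiry
```

The point of the example: both firms look identical on each dimension separately, yet firm_a expects a much larger cash-flow boost from Part D because its monopoly rights are concentrated in elderly-heavy drugs.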
We ﬁnd that treated ﬁrms develop more new drug candidates. Importantly, this eﬀect is driven by an increase in the number of chemically novel candidates, as opposed to “me-too” candidates. Further, these new candidates are aimed at a variety of conditions, not simply ones with a high share of elderly patients, implying that our identiﬁcation strategy is at least partially successful in isolating a shock to cash ﬂows, and not simply picking up an increase in investment opportunities for high elderly share drugs.
In addition, we ﬁnd some evidence that ﬁrm managers have a preference for diversiﬁcation. The marginal drug candidates that treated ﬁrms pursue often include drugs that focus on diﬀerent diseases, or operate using a diﬀerent mechanism (target), relative to the drugs that the ﬁrm has previously developed. These ﬁndings suggest that ﬁrms use marginal increases in cash to diversify their portfolios and undertake more exploratory development strategies, a fact consistent with models of investment with ﬁnancial frictions (Froot et al., 1993), or poorly diversiﬁed managers (Smith and Stulz, 1985).
Finally, our point estimates imply sensible returns to R&D. A one-standard-deviation increase in Part D exposure leads to an 11 percent increase in subsequent drug development, relative to less exposed firms. For the subset of firms for which we are able to identify cash flow, this translates into an elasticity of the number of drug candidates with respect to R&D expenditure of about 0.75.
We obtain a higher elasticity for the most novel drugs (1.01 to 1.59) and a lower elasticity for the most similar drugs (0.02 to 0.31). For comparison, estimates of the elasticity of output with respect to demand (or cash ﬂow) shocks in the innovation literature range from 0.3 to 4 (Henderson and Cockburn, 1996; Acemoglu and Linn, 2004; Azoulay, Graﬀ-Zivin, Li, and Sampat, 2016; Blume-Kohout and Sood, 2013; Dranove, Garthwaite, and Hermosilla, 2014).
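As a quick check on what these magnitudes mean, an elasticity can be read as a ratio of log changes in output and input; the numbers below are invented to match the roughly 0.75 figure in the text, not taken from the paper's data:

```python
# Illustrative arithmetic only: elasticity as the ratio of log changes
# in output (drug candidates) and input (R&D expenditure).
import math

def elasticity(q0, q1, x0, x1):
    """Log-difference elasticity of output q with respect to input x."""
    return math.log(q1 / q0) / math.log(x1 / x0)

# An 11% rise in candidates (8 -> 8.88) alongside a ~14.9% rise in R&D
# spending (100 -> 114.93) implies an elasticity of roughly 0.75.
print(round(elasticity(8, 8.88, 100, 114.93), 2))
```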
Our results suggest that ﬁnancial frictions likely play a role in limiting the development of novel drug candidates. The ability to observe the returns associated with individual projects is an important advantage of our setting that allows us to make a distinct contribution to the literature studying the impact of ﬁnancial frictions on ﬁrm investment decisions. Existing studies typically observe the response of investment (or hiring) aggregated at the level of individual ﬁrms or geographic locations.
By contrast, our setting allows us to observe the risk and return of the marginal project being undertaken as a result of relaxing financial constraints, and hence allows us to infer the type of investments that may be more susceptible to financing frictions. We find that relaxing financing constraints leads to more innovation, both at the extensive margin (i.e., more drug candidates) and at the intensive margin (i.e., more novel drugs). Given that novel drugs are less likely to be approved by the FDA, the findings in our paper echo those in Metrick and Nicholson (2009), who document that firms that score higher in terms of a Kaplan-Zingales index of financial constraints are more likely to develop drugs that pass FDA approval.
By providing a new measure of novelty, our work contributes to the literature focusing on the measurement and determinants of innovation. Our novelty measure is based on the notion of chemical similarity (Johnson and Maggiora, 1990), which is widely used in the process of pharmaceutical discovery.
Chemists use molecular similarity calculations to help them search chemical space, build libraries for drug screening (Wawer, Li, Gustafsdottir, Ljosa, Bodycombe, Marton, Sokolnicki, Bray, Kemp, Winchester, Taylor, Grant, Hon, Duvall, Wilson, Bittker, Dančík, Narayan, Subramanian, Winckler, Golub, Carpenter, Shamji, Schreiber, and Clemons, 2014), quantify the "drug-like" properties of a compound (Bickerton, Paolini, Besnard, Muresan, and Hopkins, 2012), and expand medicinal chemistry techniques (Maggiora, Vogt, Stumpfe, and Bajorath, 2014). In parallel work, Pye, Bertin, Lokey, Gerwick, and Linington (2017) use chemical similarity measures to measure novelty and productivity in the discovery of natural products.
Our measure of innovation is based on ex-ante information—the similarity of a drug’s molecular structure to prior drugs—and therefore avoids some of the truncation issues associated with patent citations (Hall et al., 2005). Further, since our measure is based only on ex-ante data, it does not conﬂate the ex-ante novelty of an idea with measures of ex-post success or of market size. By contrast, existing work typically measures “major” innovations using metrics based on ex-post successful outcomes, which may also be related to market size.
Examples include whether a drug candidate gets FDA Priority Review status (Dranove et al., 2014), or whether a drug has highly-cited patents (Henderson and Cockburn, 1996). A potential concern with these types of measures is that a firm will be credited with pursuing novel drug candidates only if these candidates succeed and not when—as is true in the vast majority of cases—they fail. Similarly, outcomes such as whether a drug is first in class or is an FDA orphan drug (Dranove et al., 2014; DiMasi and Faden, 2011; Lanthier, Miller, Nardinelli, and Woodcock, 2013; DiMasi and Paquette, 2004) may conflate market size with novelty and may fail to measure novelty of candidates within a particular class.
For example, it is easier to be the ﬁrst candidate to treat a rare condition than a common condition because fewer ﬁrms have incentives to develop treatments for the former. Further, measuring novelty as ﬁrst in class will label all subsequent treatments in an area as incremental, even if they are indeed novel.
Our paper also relates to work that examines how regulatory policies and market conditions distort the direction of drug development eﬀorts (Budish, Roin, and Williams, 2015); and how changes in market demand aﬀect innovation in the pharmaceutical sector (Acemoglu and Linn, 2004; Blume-Kohout and Sood, 2013; Dranove et al., 2014). Similar to us, Blume-Kohout and Sood (2013) and Dranove et al. (2014) exploit the passage of Medicare Part D, and ﬁnd more innovation in markets that receive a greater demand shock (drugs targeted to the elderly).
Even though we use the same policy shock, our work additionally exploits diﬀerences in drug exclusivity for speciﬁc drugs to identify the eﬀect of cash ﬂow shocks separately from changes in product demand that may increase ﬁrm investment opportunities. Indeed, we ﬁnd that treated ﬁrms invest in new drugs across diﬀerent categories—as opposed to those that only target the elderly—strongly suggesting that our identiﬁcation strategy eﬀectively isolates cash ﬂow shocks from improvements in investment opportunities.
Last, our measure of novelty can help shed light on several debates in the innovation literature. For instance, Jones (2010); Bloom, Jones, Reenen, and Webb (2017) argue for the presence of decreasing returns to innovation. Consistent with this view, we ﬁnd that drug novelty has decreased over time. An important caveat is that our novelty measure cannot be computed for biologics, which represent a vibrant research area.
The concept of artificial intelligence (AI), or the ability of machines to perform tasks that typically require human-like understanding, has been around for more than 60 years. But the buzz around AI now is louder and shriller than ever. With the computing power of machines increasing exponentially and staggering amounts of data available, AI seems to be on the brink of revolutionizing various industries and, indeed, the way we lead our lives.
Vishal Sikka until last summer was the CEO of Infosys, an Indian information technology services firm, and before that a member of the executive board at SAP, a German software firm, where he led all products and drove innovation for the firm. India Today magazine named him among the top 50 most powerful Indians in 2017. Sikka is now working on his next venture exploring the breakthroughs that AI can bring and ways in which AI can help elevate humanity.
Sikka says he is passionate about building technology that amplifies human potential. He expects that the current wave of AI will “produce a tremendous number of applications and have a huge impact.” He also believes that this “hype cycle will die” and “make way for a more thoughtful, broader approach.”
In a conversation with Knowledge@Wharton, Sikka, who describes himself as a “lifelong student of AI,” discusses the current hype around AI, the bottlenecks it faces, and other nuances.
Knowledge@Wharton: Artificial intelligence (AI) has been around for more than 60 years. Why has interest in the field picked up in the last few years?
Vishal Sikka: I have been a lifelong student of AI. I met [AI pioneer and cognitive scientist] Marvin Minsky when I was about 20 years old. I’ve been studying this field ever since. I did my Ph.D. in AI. John McCarthy, the father of AI, was the head of my qualifying exam committee.
The field of AI goes back to 1956 when John, Marvin, Allen Newell, Herbert Simon and a few others organized a summer workshop at Dartmouth. John came up with the name “AI” and Marvin gave its first definition. Over the first 50 years, there were hills and valleys in the AI journey. The progress was multifaceted. It was multidimensional. Marvin wrote a wonderful book in 1986 called The Society of Mind. What has happened in the last 10 years, especially since 2012, is that there has been a tremendous interest in one particular set of techniques. These are based on what are called “deep neural networks.”
Neural networks themselves have been around for a long time. In fact, Marvin’s thesis was on a part of neural networks in the early 1950s. But in the last 20 years or so, these neural network-based techniques have become extraordinarily popular and powerful for a couple of reasons.
First, if I can step back for a second, the idea of neural networks is that you create a network that resembles the human or the biological neural networks.
This idea has been around for more than 70 years. However, in 1986 a breakthrough happened thanks to a professor in Canada, Geoff Hinton. His technique of backpropagation (a supervised learning method used to train neural networks by adjusting the weights and the biases of each neuron) created a lot of excitement, and a great book, Parallel Distributed Processing, by David Rumelhart and James McClelland, together with Hinton, moved the field of neural net-related “connectionist” AI forward. But still, back then, AI was quite multifaceted.
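The backpropagation idea described in the parenthetical above can be shown in miniature: propagate the prediction error backwards through each layer and nudge every weight against its gradient. This is a toy sketch on the classic XOR problem, not how modern deep networks are trained at scale:

```python
# Toy backpropagation sketch: one hidden layer, sigmoid activations,
# full-batch gradient descent on XOR. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # inputs
y = np.array([[0.], [1.], [1.], [0.]])                  # XOR targets

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)  # input -> hidden weights
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)  # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    d_out = (out - y) * out * (1 - out)      # backward pass: output error
    d_h = (d_out @ W2.T) * h * (1 - h)       # ...propagated to hidden layer
    W2 -= h.T @ d_out; b2 -= d_out.sum(0)    # gradient-descent updates
    W1 -= X.T @ d_h;   b1 -= d_h.sum(0)

print(losses[0] > losses[-1])  # the error shrinks as the weights adjust
```

After training, the outputs typically approach the target pattern [0, 1, 1, 0]; the essential point is that the weight adjustments at every layer are driven by errors propagated back from the output.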
Second, in the last five years, one of Hinton’s groups invented a technique called “deep learning” or “deep neural networks.” There isn’t anything particularly deep about it other than the fact that the networks have many layers, and they are massive. This has happened because of two things. One, computers have become extraordinarily powerful. With Moore’s law, every two years, more or less, we have seen doubling of price performance in computing. Those effects are becoming dramatic and much more visible now. Computers today are tens of thousands of times more powerful than they were when I first worked on neural networks in the early 1990s.
“The hype we see around AI today will pass and make way for a more thoughtful and realistic approach.”
The second thing is that big cloud companies like Google, Facebook, Alibaba, Baidu and others have massive amounts of data, absolutely staggering amounts of data, that they can use to train neural networks. The combination of deep learning, together with these two phenomena, has created this new hype cycle, this new interest in AI.
But AI has seen many hype cycles over the last six decades. This time around, there is a lot of excitement, but the progress is still very narrow and asymmetric. It’s not multifaceted. My feeling is that this hype cycle will produce great applications and have a big impact and wonderful things will be done. But this hype cycle will die and a few years later another hype cycle will come along, and then we’ll have more breakthroughs around broader kinds of AI and more general approaches. The hype we see around AI today will pass and make way for a more thoughtful and realistic approach.
Knowledge@Wharton: What do you see as the most significant breakthroughs in AI? How far along are we in AI development?
Sikka: If you look at the success of deep neural networks or of reinforcement learning, we have produced some amazing applications. My friend [and computer science professor] Stuart Russell characterizes these as “one-second tasks.” These are tasks that people can perform in one second. For instance, identifying a cat in an image, checking if there’s an obstacle on the road, confirming if the information in a credit or loan application is correct, and so on.
With the advances in techniques — the neural network-based techniques, the reinforcement learning techniques — as well as the advances in computing and the availability of large amounts of data, computers can already do many one-second tasks better than people. We get alarmed by this because AI systems are surpassing human performance even in sophisticated jobs like radiology or law — jobs that we typically associate with large amounts of human training. But I don't see it as alarming at all. It will have an impact in different ways on the workforce, but I see that as a kind of great awakening.
But, to answer your question, we already have the ability to apply these techniques and build applications where a system can learn to conduct tasks in a well-defined domain. When you think about the enterprise in the business world, these applications will have tremendous impact and value.
Knowledge@Wharton: In one of your talks, you referred to new ways that fraud could be detected by using AI. Could you explain that?
Sikka: You find fraud by connecting the dots across many dimensions. Already we can build systems that can identify fraud far better than people by themselves can. Depending on the risk tolerance of the enterprise, these systems can either assist senior people whose judgment ultimately prevails, or, the systems just take over the task. Either way, fraud detection is a great example of the kinds of things that we can do with reinforcement learning, with deep neural networks, and so on.
Another example is anything that requires visual identification. For instance, looking at pictures and identifying damages, or identifying intrusions. In the medical domain, it could be looking at radiology, looking at skin cancer identifications, things like that. There are some amazing examples of systems that have done way better than people at many of these tasks. Other examples include security surveillance, or analyzing damage for insurance companies, or conducting specific tasks like processing loans, job applications or account openings. All these are areas where we can apply these techniques. Of course, these applications still have to be built. We are in the early stages of building these kinds of applications, but the technology is already there, in these narrow domains, to have a great impact.
Knowledge@Wharton: What do you expect will be the most significant trends in AI technology and fundamental research in the next 10 years? What will drive these developments?
Sikka: It is human nature to continue what has worked, so lots of money is flowing into ongoing aspects of AI. On the chip side, in addition to Nvidia, Intel, Qualcomm and the like, Google, Huawei and many startups are building their own AI processors, and all of this is becoming available in cloud platforms. There is tons of work happening in incrementally advancing the core software technologies that sit on top of this infrastructure, like TensorFlow, Caffe, etc., which are still in the early stages of maturity. And this will of course continue.
But beyond this, my sense is that there are going to be three different fronts of development. One will be in building applications of these technologies. There is going to be a massive set of opportunities around bringing different applications in different domains to the businesses and to consumers, to help improve things. We are still woefully early on this front. That is going to be one big thing that will happen in the next five to 10 years. We will see applications in all kinds of areas, and there will be application-oriented breakthroughs.
“The development of AI is asymmetric.”
Two, from a technology perspective, there will be a realization that while the technology that we have currently is exciting, there is still a long way to go in building more sophisticated behavior, building more general behavior. We are nowhere close to building what Marvin [Minsky] called the “society of mind.” In 1991, he said in a paper that these symbolic techniques will come together with the connectionist techniques, and we would see the benefits of both. That has not happened yet.
John [McCarthy] used to say that machine learning systems should understand the reality behind the appearance, not just the appearance.
I expect that more general kinds of techniques will be developed and we will see progress towards more ensemble approaches, broader, more resilient, more general-purpose approaches. My own Ph.D. thesis was along these lines, on integrating many specialists/narrow experts into a symbolic general-purpose reasoning system. I am thinking about and working on these ideas and am very excited about it.
The third area — and I wish that there is more progress on this front — is a broader awareness, broader education around AI. I see that as a tremendous challenge facing us. The development of AI is asymmetric. A few companies have disproportionate access to data and to the AI experts. There is just a massive amount of hype, myth and noise around AI. We need to broaden the base, to bring the awareness of AI and the awareness of technology to large numbers of people. This is a problem of scaling the educational infrastructure.
Knowledge@Wharton: Picking up on what you said about AI development being asymmetric, which industries do you think are best positioned for AI adoption over the next decade?
Sikka: Manufacturing is an obvious example because of the great advances in robotics, in how robots perceive their environments, reason about them, and effect increasingly fine control over them. There is going to be a great amount of progress in anything that involves transportation, though I don't think we are close to autonomy in driving yet, because there are some structural problems that have to be solved.
Health care is going to be transformed because of AI, both the practice and the quality of health care: the way we develop medicines (protein binding is a great use case for deep learning), personalized medicines, personalization of care, and so on. There will be tremendous improvement in financial services, where in addition to AI, decentralized/p2p technologies like blockchain will have a huge impact. Education, as an industry, will go through another round of significant change.
There are many industries that will go through a massive transformation because of AI. In any business there will be areas where AI will help to renew the existing business, improve efficiency, improve productivity, dramatically improve agility and the speed at which we can conduct our business, connect the dots, and so forth. But there will also be opportunities around completely new breakthrough technologies that are possible because of these applications — things that we currently can’t foresee.
The point about asymmetry is a broader issue: the fact that a relatively small number of companies have access to the relatively small pool of talented people and to massive amounts of data and computing, and therefore the development of AI is very disproportionate. I think that is something that needs to be addressed seriously.
Knowledge@Wharton: How do you address that? Education is one way, of course. Beyond that, is there anything else that can be done?
Sikka: I find it extraordinary that in the traditional industries, for example in construction, you can walk into any building and see the plans of that building, see how the building is constructed and what the structure is like. If there is a problem, if something goes wrong in a building, we know exactly how to diagnose it, how to identify what went wrong. It’s the same with airplanes, with cars, with most complex systems.
“The compartmentalization of data and broader access to it has to be fixed.”
But when it comes to AI, when it comes to software systems, we are woefully behind. I find it astounding that we have extremely critical and extremely important services in our lives where we seem to be okay with not being able to tell what happened when the service fails or betrays our trust in some way. This is something that has to be fixed. The compartmentalization of data and broader access to it has to be fixed. This is something that the government will have to step in and address. The European governments are further ahead on this than other countries. I was surprised to see that the EU’s decision on demanding explainability of AI systems has seen some resistance, including here in the valley.
I think it behooves us to improve the state of the art, develop better technologies, more articulate technologies, and even look back on history to see work that has already been done, to see how we can build explainable and articulate AI, make technology work together with people, to share contexts and information between machines and people, to enable a great synthesis, and not impenetrable black boxes.
But the point on accessibility goes beyond this. There simply aren’t enough people who know these techniques. China’s Tencent sponsored some research recently which showed that there are basically some 300,000 machine learning engineers worldwide, whereas millions are needed. And how are we addressing this? Of course there is good work going on in online education and classes on Udacity, Coursera, and others. My friend [Udacity co-founder] Sebastian Thrun started a wonderful class on autonomous driving that has thousands of students. But it is not nearly enough.
And so the big tech companies are building “AutoML” tools, or machine learning for machine learning, to make the underlying techniques more accessible. But we have to see that in doing so, we don’t make them even more opaque to people. Simplifying the use of systems should lead to more tinkering, more making and experimentation. Marvin [Minsky] used to say that we don’t really learn something until we’ve learnt it in more than one way. I think we need to do much more on both making the technology easier to access, so more people have access to it, and we demystify it, but also in making the systems built with these technologies more articulate and more transparent.
Knowledge@Wharton: What do you believe are some of the biggest bottlenecks hampering the growth of AI, and in what fields do you expect there will be breakthroughs?
Sikka: As I mentioned earlier, research and availability of talent is still quite lopsided. But there is another way in which the current state of AI is lopsided or bottlenecked. If you look at the way our brains are constructed, they are highly resilient. We are not only fraud identification machines. We are not only obstacle detection and avoidance machines. We are much broader machines. I can have this conversation with you while also driving a car and thinking about what I have to do next and whether I’m feeling thirsty or not, and so forth.
This requires certain fundamental breakthroughs that still have not happened. The state of AI today is such that there is a gold rush around a particular set of techniques. We need to develop some of the more broad-based, more general techniques as well: more ensemble techniques that bring in reasoning, articulation, and so on.
For example, if you go to Google or [Amazon's virtual assistant] Alexa or any one of these services out there and ask them, "How tall was the President of the United States when Barack Obama was born?" none of these services can answer it, even though they all know the answers to the three underlying questions. But a 5-year-old can. The basic ability to explicitly reason about things is an area where tremendous work has been done over many decades, but it seems largely lost on AI research today. There are some signs that this area is developing, but it is still very early. There is a lot more work that needs to be done. I, myself, am working on some of these fundamental problems.
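The explicit chaining Sikka describes can be sketched in a few lines: each sub-question is trivial against a knowledge base, and the composed question is answered by feeding one answer into the next lookup. The tiny hand-made knowledge base below is purely illustrative:

```python
# Illustrative sketch of explicit multi-step reasoning: answer a composed
# question by chaining three simple lookups. The knowledge base is a
# hand-made toy for this example only.
KB = {
    "birth_year": {"Barack Obama": 1961},
    "president_in": {1961: "John F. Kennedy"},
    "height_cm": {"John F. Kennedy": 183},
}

def president_height_at_birth(person):
    year = KB["birth_year"][person]       # 1: when was the person born?
    president = KB["president_in"][year]  # 2: who was president then?
    return KB["height_cm"][president]     # 3: how tall was that president?

print(president_height_at_birth("Barack Obama"))  # 183
```

A system that can only map a whole question to an answer it has seen before cannot do this; a system that represents the intermediate facts explicitly composes them for free.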
Knowledge@Wharton: You talked about the disproportionate and lopsided nature of resource allocation. Which sectors of AI are getting the most investment today? How do you expect that to evolve over the next decade? What do traditional industries need to do to exploit these trends and adapt to transformation?
Sikka: There’s a lot of interest in autonomous driving. There is also a lot of interest in health care. Enterprise AI should start to pick up. So there are several areas of interest but they are quite lumpy and clustered in a few areas. It reminds me of the parable of the guy who lost his keys in the dark and looks for them underneath a lamp because that’s where the light was.
But I don’t want to make light of what is happening. There are a large number of very serious people also working in these areas, but generally it is quite lopsided. From an investment point of view, it is all around automating and simplifying and improving existing processes. There are a few developments around bringing AI to completely new things, or doing things in new ways, breakthrough ways, but there is a disproportionate usage of AI for efficiency improvements and automation of existing businesses and we need to do more on the human-AI experience, of AI amplifying people’s work.
“There simply aren’t enough people who know these techniques.”
If you look at companies like Uber or Didi [China’s ride-sharing service] or Apple and Google, they are aware of what is going on with their consumers more or less in real time. For instance, Didi knows every meter of every car ride taken by every consumer, in real time. It’s the same with Uber. And in China, even in physical retail, as I mentioned earlier, Alibaba is showing that real-time connection to customers and integration of physical and digital experiences can be done very well.
But in the traditional world, in the consumer packaged goods (CPG) industry or in banking, telecom or retail, where customer contact is necessary, businesses are quite disconnected from what the true end-user is doing. It is not real time. It is not large-scale. Typically, CPG companies still analyze data that is several months old. Some CPG companies still get DVDs from behavioral aggregators three months later.
I think an awareness of that [lag] is building in businesses. Many of my friends who are CEOs of large companies in the CPG world, in banking, pharmaceuticals and telecom, are trying to now embrace new technology platforms that bring these next generation technologies to life. But beyond embracing technology, and deploying a few next-generation applications, my sense is, the traditional companies really need to think of themselves as technology companies.
My wife Vandana started and built up the Infosys Foundation in the U.S., and her main passion is computer science education. [She left the foundation in 2017.] She found this amazing statistic that in the dark ages some 6% of the world’s population could read and write, but if you think about computing as the new literacy, today some half a percent of the world’s population can program a computer.
We are finally approaching 90% literacy in the world, and of course we are not all writers or poets or journalists, but we all know how to write and to read, and it has to be the same way with computing and digital technologies, and especially now with AI, which is as big a shift for us as computing itself.
So businesses need to reorient themselves from “I am an X company” to “I am a technology company that happens to be in X.” Because if we don’t, we may be vulnerable to a tech company that better sees, executes and scales on that X, as we have already seen in many industries. The iPhone isn’t so much a phone as it is a computer in the shape of a phone. The Apple Watch isn’t a watch, but a computer, a smart computing service, in the shape of a watch. The Tesla is not so much an electric car as a computer, an intelligent, connected computing service, in the shape of a car. So simply making your car electric is not enough.
“The iPhone isn’t so much a phone as it is a computer in the shape of a phone.”
Too often companies don’t transform, and they become irrelevant. They may not die immediately. Indeed large, successful, complex structures often outlive us humans, and die long slow deaths, but they lose their relevance to the new very quickly. Transformations are difficult. One has to let go of the past, of what we have known, and embrace something completely new, alien to us. As my friend and teacher [renowned computer scientist] Alan Kay said, “We only make progress by going differently than we believe.” And of course we have to do this as individuals as well. We have to continually learn and renew our skills, our perspectives on the world.
Knowledge@Wharton: How should companies measure the return on investment (ROI) in AI? Should they think about these investments in the same way as other IT investments or is there a difference?
Sikka: First of all, it is good that we are applying AI to things where we already know the ROI. I was talking to a friend recently, and he said, “In this particular part of my business, I have 50,000 people. I could do this work with one-fourth the people, at even better efficiency.” In such a situation, the ROI is clear. In financial services, one area that has become exciting is active trading in asset management. People have started applying AI here. One hedge fund wrote about the remarkable results it got by applying AI.
A start-up in China does the entire management of investments through AI. There are no people involved and the company delivers breakthrough results.
So, that’s one way. Applying AI to areas where the ROI is clear, where we know how much better the process can become, how much cheaper, how much faster, how much better throughput, how much more accurate, and so on. But again this is all based on the known, the past. We have to think beyond that, more broadly than that. We have to think about AI as becoming an augmentation for every one of our decisions, every one of the questions that we ask, and have that fed by data and analyzed in real time. Instead of doing generalizations or approximations, we must insist on AI amplifying all of our decisions. We must bring AI to areas where we don’t yet have ROIs clearly identified or clearly understood. We must build ROIs on the fly.
Knowledge@Wharton: How does investment in AI in the U.S. compare with China and other parts of the world? What are the relative strengths and weaknesses of the U.S. and Chinese approaches to AI development?
Sikka: I’m very impressed by how China is approaching this. It is a national priority for the country. The government is very serious about broad-based AI development, skill development and building AI applications. They have defined clear goals in terms of the size of the economy, the number of people, and the leadership position. They actively recruit [AI experts]. The big Chinese technology companies are [attracting] U.S.-based, Chinese-origin scientists, researchers and experts who are moving back there.
In many ways, they are already the leaders in building applications of AI technology, and they are doing leading work in the technology itself as well. When you think about AI technology or research, the U.S. and many European universities and countries are still ahead. But in terms of large-scale applications of AI, I would argue that China is already ahead of everybody else in the world. The sophistication of their applications, the scale, and the complex conditions in which they apply them are simply extraordinary. Another dimension of that is adoption. The adoption of AI technology and modern technology in China, especially in rural areas, is staggering.
Knowledge@Wharton: Could you give a couple of examples of what impressed you most?
Sikka: Look at the payments space — at Alipay, WeChat Pay or other forms of payments from companies like Ping An Insurance, as well as Alibaba and Tencent. It’s amazing. Shops in rural China don’t take cash. They don’t take credit cards. They only do payments on WeChat Pay or on Alipay or others like that. You don’t see this anywhere else in the world at nearly the same scale.
Bike rentals are another example. In the past year, there has been an extraordinary development in China around bicycles.
When you walk into a Chinese city, you see tens of thousands of bicycles across the landscape — yellow ones, orange ones, blue ones. When you look at these bicycles, you think, “This is a smart bicycle.” It is another example of an intelligent, connected computing service in the shape of a bicycle. You just have to wave your phone at it with your Baidu account or your Alibaba account or something like that and you can ride the bike. It has GPS. It is fully connected. It has all kinds of sensors inside it. When you get to your destination, you can leave the bike there and carry on with whatever you need to do. Already in the last nine months, this has had a huge impact on traffic.
“The adoption of AI technology and modern technology in China, especially in rural areas, is staggering.”
If you walk into any of Alibaba’s Hema supermarkets in Beijing and Shanghai (I think they have around 20 of these already, teeming with people), you will see they are far ahead of any retail experience we see today in the U.S., including at Whole Foods. The entire store is integrated with mobile experiences, so you can wave your phone at any product on the shelf and get a complete online experience. There is no checkout; the whole experience is mobile and automated, although there are plenty of people there to help customers. The store is also a warehouse; in fact, it serves some 70% of demand from local online customers, and fulfills that demand in less than an hour.
My friend ordered a live fish from the store for dinner, and that particular fish, the one he had picked on his phone, was delivered 39 minutes later. Tencent has now invested in a supermarket company. And JD has its own stores. So this is rapidly evolving. It would be wonderful to see convenience like this in every supermarket around the world in the next few years.
A more recent example is battery chargers. All across China, there are little kiosks with chargers inside. You can open the kiosk by waving your phone at it, pick up a charger, charge your phone for a couple of hours, and then drop it off at another kiosk wherever you are. What I find impressive is not that somebody came up with the idea of sharing based on connected phone chargers, but how rapidly the idea has been adopted in the country and how quickly the landscape has adapted itself to assimilate this new idea. The rate at which the generation [of ideas] happens, gets diffused into the society, matures and becomes a part of the fabric is astounding. I don’t think people outside of China appreciate the magnitude of what is going on.
When you walk around Shenzhen, you can see the incredible advances in manufacturing, electronic device manufacturing, drones and things like that. I was there a few weeks ago. I saw a drone that is smaller than the tip of your finger. At the same time, I saw a demo of a swarm of a thousand or so drones which can carry massive loads collectively. So it is quite impressive how broadly the advance of AI is being embraced in China.
“The act of innovating is the act of seeing something that is not there.”
At the other end of the spectrum, I would say that in Europe, especially in Germany, the government is much more rigorous and thoughtful about the implications of these technologies. From a broader, regulatory and governmental perspective, they seem to be doing a wonderful job. Henning Kagermann, who was my boss at SAP for many years, recently shared with me a report from the ethics commission on automated and connected driving. The thoughtfulness and rigor with which they are thinking about this is worth emulating. Many countries, especially the U.S., would be well served to embrace those ideas.
Knowledge@Wharton: How does the approach of companies like Apple, Facebook, Google, Microsoft and Amazon towards AI differ from that of Chinese companies like Alibaba, Baidu, or Tencent?
Sikka: I think there is a lot of similarity, and the similarities outweigh the differences. And of course, they’re all connected with each other. Tencent and Baidu both have advanced labs in Silicon Valley. And so does Alibaba. JD, which is a large e-commerce company in China, recently announced a partnership around AI with Stanford. There’s a lot of sharing and also competitive aspects within these companies.
There are some differences. The U.S. companies are interested in certain U.S.-specific or more international aspects of things. The Chinese companies focus a lot on the domestic market within China. In many ways, the Chinese market offers challenges and circumstances that are even more sophisticated than the ones in the U.S. But I wouldn’t say that there is anything particularly different between these companies.
If you look at Amazon and Microsoft and Google, their advances, when it comes to bringing their platforms to the enterprise, are further ahead than the Chinese companies. Alibaba and Tencent have both announced ambitions to bring their platform to the enterprise. I would say that in this regard, the U.S. companies are further ahead. But otherwise, they are all doing extraordinary work. The bigger issue in my mind is the gap between all of them and the rest of the companies.
Knowledge@Wharton: Where does India stand in all of this? India has quite a lot of strengths in the IT area, and because of demonetization there has been a strong push towards digitization. Do you see India playing any significant role here?
Sikka: India is at a critical, unique juncture. If you look at it from the perspective of the big U.S. companies or the big Chinese companies, India is by far their largest market. We have a massive population and a relatively large amount of wealth. So there is a lot of interest from all these companies, and consequently their countries, in India and in developing the market there. If that happens, then of course the companies will benefit. But it would also be a lost opportunity for India to do its own development by educating its workforce in these areas.
One of the largest populations that could be affected by the impact of AI in the near-term is going to be in India. The impact of automation in the IT services world, or broadly in the services world, will be huge from an employment perspective. If you look at the growth that is happening everywhere, especially in India, some people call it “jobless growth.” It’s not jobless. It’s that companies grow their revenues disproportionately compared to the growth in the number of employees.
“Finding the problem, identifying the innovation — that will be the human frontier.”
There is a gap emerging in the employment world. Unless we fix the education problem, it is going to have a huge impact on the workforce. Some of this is already happening. One of the things I used to find astounding in Bangalore was that a lot of people with engineering degrees do freelance jobs like driving Uber and Ola cabs. And yet we have tremendous potential.
The value of education is central to us in India, and we have a large, young, generation of highly inspired youngsters ready to embrace and shape the future, who are increasingly entrepreneurial in their outlook. So we have to build on foundations like the “India stack,” we have to build our own technological strengths, from research and core technology to applications and services. And a redoubling of the focus on education, on training massive numbers of people on technologies of the future, is absolutely critical.
So, in India, we are at this critical juncture, where on one hand there is a massive opportunity to show a great way forward, and help AI be a great amplifier for our creativity, imagination, productivity, indeed for our humanity. On the other hand, if we don’t do these things, we could be victims of these disruptions.
Knowledge@Wharton: How should countries reform their education programs to prepare young people for a future shaped by AI?
Sikka: India’s Prime Minister Narendra Modi has talked about this a lot. He is passionate about this idea of job creators, not just job seekers, and about a broad culture of entrepreneurship.
I’m an optimist. I’m an entrepreneur. I like to see the opportunity in what we have, even though there are some serious issues when it comes to the future of the workforce. My own sense is that in the time of AI, the right way forward for us is to become more evolved, more enlightened, more aware, more educated, and to unleash our imagination, to unleash our creativity.
John McCarthy was a great teacher in my life. He used to say that articulating a problem is half its solution. I believe that in our lifetime, certainly in our children’s lifetime, we will see AI technology advance to the point where any task, any activity, any job, any work that can be precisely formulated and precisely articulated, will be done automatically, far better than we can do with our senses and our muscles. However, articulating the problem, finding the problem, identifying the innovation — that will be the human frontier. It is the act of seeing something that is not there. The act of exercising our creativity. And then, using AI to become a great amplifier, to help us achieve our imagination, our vision. I think that is the great calling of our time. That is my great calling.
Five or six hundred million years ago, there was this unusual event that happened geologically. It was called the Cambrian explosion. It was the greatest creation of life in the history of our planet. Before that, the Earth was basically covered by water. Land had started to emerge, and oxygen had started to emerge. Life, as it existed at that point, was very primitive. People wondered, “How did the Cambrian explosion happen? How did all these different life forms show up in a relatively small period of time?”
What happened was that the availability of oxygen, the availability of land, and the availability of light as a provider of life, as a provider of living, created a situation which formed all these species that had the ability to see. They all came out of the dark, out of the water, onto the land, into the air, where opportunities were much more plentiful, where they could all grow, they could all thrive. People wonder, “What were they looking for?” It turns out they were looking for light. The Cambrian explosion was about all these species looking for light.
When I think about the future, about the time in front of us, I see another Cambrian explosion. The act of innovating is the act of seeing something that is not there. Our eyes are programmed by nature to see what is there. We are not programmed to see what is not there. But when you think about innovation, when you think about making something new, everything that has ever been innovated was somebody seeing something that was not there.
I think the act of seeing something that is not there is in all of us. We can all be trained to see what is not there. It is not only a Steve Jobs or a Mark Zuckerberg or a Thomas Edison or an Albert Einstein who can see something that is not there. I think we can all see some things that are not there. To Vandana’s statistic, we should strive to see a billion entrepreneurs out there. A billion-plus computer literate people who can work with, even build, systems that use AI techniques, and who can switch their perspective from making a living to making a life.
When I was at Infosys, we trained 150,000 people on design thinking for this reason: To get people to become innovators. In our lifetime, all the mechanical, mechanizable, repeatable things are going to be done way better by machines. Therefore, the great frontier for us will be to innovate, to find things that are not there. I think that will be a new kind of Cambrian explosion. If we don’t do that, humanity will probably end.
Paul MacCready, one of my heroes and a pioneer in aerospace engineering, once said that if we don’t become creative, a silicon life form will likely succeed us. I believe that it is in us to refer back to our spirituality, to refer back to our creativity, our imagination, and to have AI amplify that. I think this is what Marvin [Minsky] and John [McCarthy] were after and it behooves us to transcend the technology. And we can do that. It is going to be tough. It is going to require a lot of work. But it can be done. As I look at the future, I am personally extremely excited about doing something in that area, something that fundamentally improves the world.
Behavioral science has become a hot topic in companies and organizations trying to address the biases that drive day-to-day decisions and actions.
Although humans are known to be irrational, they are at least irrational in predictable ways. In this episode of the McKinsey Podcast, partner Julia Sperling, consultant Magdalena Smith, and consultant Anna Güntner speak with McKinsey Publishing’s Tim Dickson about how companies can use behavioral science to address unconscious bias and instincts and manage the irrational mind. Employing techniques such as “nudging” and different debiasing methods, executives can change people’s behavior—and have a positive effect on business—without restricting what people are able to do.
Hotels enjoy their highest profits when rooms are most in demand, like during holidays and big events. Unfortunately for them, Airbnb is taking away some of that pricing power, according to new research by Chiara Farronato and Andrey Fradkin.
Airbnb makes additional rooms available in the country's hottest travel spots during peak periods, when hotel rooms often sell out and rates skyrocket, a new study shows.
"You might find a Fifth Avenue apartment or a place by the beach at a more reasonable price than you would if Airbnb wasn't an option"
"When the pope comes to Philly, and hotel prices are $200, it becomes worth your while to put your spare room out for rent"
Traditional universities — including Ivy League schools — fail to deliver the kind of learning that ensures employability. That perspective inspired Ben Nelson, founder and CEO of the six-year-old Minerva Schools in San Francisco. His goal is to reinvent higher education and to provide students with high-quality learning opportunities at a fraction of the cost of an undergraduate degree at an elite school. While tuition at top-tier universities in the U.S. can run more than $40,000 a year, Minerva charges $12,950 a year, according to its website. In a recent test, its students showed superior results compared with those at traditional universities, and the school has attracted a large number of applicants.
Minerva is a disruptor and the traditional university establishment needs to adapt to its model and perhaps improve on it, according to Jerry (Yoram) Wind, emeritus marketing professor at Wharton. Nelson, who was previously president of Snapfish, an online photo hosting and printing service, and Wind spoke to Knowledge@Wharton about why the higher education model needs to change, and how the Minerva model could help.
An edited transcript of the conversation follows.
Knowledge@Wharton: Jerry, where is the future of education headed?
Jerry Wind: The future is now. It has been here for a while, and with Minerva, Ben has recreated the university of the future. Ben, describe briefly the Minerva concept, and then go into the recent findings of the CLA report (Minerva’s Collegiate Learning Assessment test).
Ben Nelson: We refer to Minerva as having been built as an “intentional university.” Everything about the design of the institution, what we teach, how we teach and where we teach it is based on what we know, and through empirical evidence, is effective.
In what we teach, we are classical in our approach, even though we’re [also] modern and progressive in the way we teach. For example, if you think about the purpose of a liberal arts education, or what the great American universities purport to teach, they will say, ‘We teach you how to think critically, how to problem-solve, how to think about the way the world works and to be global, and how to communicate effectively.’
“Universities … basically teach you academic subject matter and they hope you pick up all of the other stuff by accident.”
When you actually look at how universities attempt to do it, they basically teach you academic subject matter and they hope you pick up all of the other stuff by accident.
We decided to have a curriculum that teaches these things, that breaks down critical thinking, creative thinking, effective interactions, and effective communications into component parts. [We wanted to make] sure that we don’t just teach them conceptually, and don’t just teach them in a context, but actually explain the concept and then have our students apply them actively from context to context to context.
Knowledge@Wharton: Could you share an example of how you do that?
Nelson: One aspect of critical thinking, for example, is evaluating claims. There are various ways of evaluating claims. Sometimes you use logic, sometimes you use reasoning, which is different than logic, sometimes you do statistical analysis which is different than the other two, and sometimes you just think of a counter example.
Now there are different [types] of critical thinking. One example: making a decision tradeoff. Should we go down Path A or Path B? The technique for making a decision tradeoff is perhaps thinking through the cost-benefit analysis, which is a type of critical thinking.
If you say, ‘I’m going to teach you critical thinking,’ and you just try to teach it as a single thing, you will never succeed. [It is important to] go through it systematically and do the component parts – that’s the first aspect.
The second aspect is that if you teach a person an idea, say evaluating claims, the mind gets trained in a particular context. When somebody makes a claim, let’s say about an investment opportunity, or a political claim, the mind doesn’t readily transfer those skills from one field to another. This is one of the fundamental problems of transferable education. The way you teach for transfer is to provide exercises and applications in multiple fields.
How we teach is also radically different. The science of learning shows that the dissemination of information [through] lectures and test-based methodology simply doesn’t work. Six months after the end of a traditional lecture and test-based class, 90% of the material you were supposed to have learned is gone from your mind. In an active learning environment you struggle through information, and two years after the end of the class you retain 70%.
All of our classes, despite [being] small seminars with 15 to 19 students at a time, are done via live video online where there’s a camera pointed at every student’s face. The students are actively engaged with the materials, [and it is] not the professor lecturing — professors are not allowed to talk for more than four minutes at a time. The students get feedback on how they apply what they [learn].
“Six months after the end of a traditional lecture and test-based class, 90% of the material you were supposed to have learned is gone from your mind.”
Lastly [it is about] where we teach. We have created a university that takes advantage of the best the world has to offer. Being a Penn graduate, I always gravitated towards the idea of the urban campus. Our students live in the heart of cities in residence halls together, and have a very strong community. They spend their first year in the heart of San Francisco, but over the next three years across six semesters, as a cohort, as a group, they will travel and live in six different countries. So in their second year they go to Seoul and Hyderabad, and then to Berlin and Buenos Aires, then London and Taipei, and come back to San Francisco for a month to manifest their education and graduate.
Wind: While the concept is appealing, does it work? Describe the CLA test, and then talk about the implications of [your approach].
Nelson: The Collegiate Learning Assessment is provided by a third party nonprofit that has been testing and assessing students’ progress on critical thinking, problem-solving, scientific reasoning and effective communication skills for many years. It’s been administered to hundreds of thousands of students across hundreds of universities. It is administered to students at the beginning of their first year and at the end of their fourth year, and so you can measure [the] progress of students.
We provided [our students] the first-year test just before they started the first class at the beginning of the year. But rather than waiting four years, we gave our students the fourth-year test at the end of their first year, eight months later. The results shocked us. Not only did our students after eight months have the highest composite score in the country compared to any other university assessing its students, the improvement they achieved was greater than any the CLA has seen a university accomplish over four years.
Knowledge@Wharton: What drove those results?
Nelson: The silly answer would be to say, ‘Oh we’re brilliant and we’re great, and look at how amazing what we do is.’ The fact of the matter is we’ve got a lot of room to grow and improve. These results in many ways are much more damning of the existing system than they are generating praise for our brilliance.
We have taken publicly available, scientifically published data on how the mind works. We’ve broken down the things that every university says it teaches or wants to teach, and merely spent time putting together a curriculum that does that, and we’ve offered it to students. We’ve just done what anybody who rationally approached creating a solution to this problem would do.
I would bet you that if you had 100 institutions or 100 groups of people that were to do the same thing we would have done from scratch, we would have probably been better than some of them, maybe most of them, but not all of them. There would be some that on their first try would be even better than [us].
Wind: This is the value of idealized design. As opposed to trying to fix the current educational system by adding another course or trying to create a cross-disciplinary course, [Minerva] reexamines the whole purpose of education.
They didn’t go far enough, in that they are still within an academic context, [with] semesters and the like; if they relax those constraints, they will probably get even better results. But even within this academic context and its constraints, what they have done is amazing – the curriculum, the concept, and the way it’s developed for the benefit of the learner, and not the benefit of the faculty.
The [first] implication is, if you had a choice and you wanted to go to a university now, where would you go? If you want really great education, go to Minerva; [but if] you want to network, go to one of the top five schools — Penn, Harvard, Princeton, Yale and MIT. Minerva offers probably a different network than the traditional ones because it is a network of people who are willing to do it.
Nelson: Last year, for our third class ever, we received 20,400 applications. That is more applicants than MIT or Dartmouth got. The network you get in a Wharton or Harvard or Yale or what-have-you is [of] a certain kind. It is overwhelmingly American, [with] 80% or 90% from the U.S., and usually from particular socioeconomic backgrounds. Even though there is some diversity, it’s heavily weighted [in favor of that profile].
The Minerva network is radically different because 80% of our students are not from the U.S. — they come from 61 countries. We received these 20,000 applications from 179 countries. The experience and the network you build as you travel and live as a resident in these seven countries is unparalleled. If you want a global footprint, that’s what we provide.
Wind: The current educational system does not work. Implication two is that [universities] have to realize that they are being disrupted. At this stage [it is on a] small scale, but if other universities start adopting it, it can [become] large scale. [Minerva is] the disruptor here, and the signal to the legacy universities is: your model does not work. Stop trying to fix it by adding another Band-Aid; rethink the educational system. And here you have a wonderful blueprint that works.
Nelson: We just wrote a book called Building the Intentional University, which is a blueprint for how other universities can create their own Minervas or reform in that sense. We are a residential university that grants undergraduate degrees with 120 credit hours, with majors and minors and electives and a general education curriculum. We are plug-and-play for universities. We offer potential salvation from disruption.
“The future is now. It has been here for a while, and with Minerva, Ben has recreated the university of the future.”
–Jerry (Yoram) Wind
What I have worried about is the other kind of disruptive force that can attack universities [and be] destructive, in the sense that in six months you get a high school degree, go to a boot camp and then get a six-figure job being a software programmer. We have put together an educational experience that enables university graduates to be better prepared than [with that] six-month boot camp. Because they are able to do higher-level problem solving, they are going to be [software] architects as opposed to the programmers. They're going to be the ones that, in a world of Watson and artificial intelligence and outsourcing, are going to be much more future-proof.
Wind: An increasing number of people view employability as being critical, and a traditional university degree does not guarantee employability, [but] the new non-degree programs guarantee you a [job] position.
Knowledge@Wharton: Three or four years ago, a big potential disruptor was the so-called MOOC, or the Massive Open Online Course. A number of platforms came up [such as] Coursera, Udacity and EdX. It seemed like they were going to be disruptive, but that doesn’t seem to have happened. What happened with that so-called disruption and why did it fail?
Nelson: The jury is still somewhat out on that, and let me give you an example of what I think is happening on the surface. MIT had a master’s program in supply chain logistics, and it cost $60,000 for a two-semester program. As an experiment, [they put the] first semester on MOOCs, and rather than charging $30,000 for it, [gave] it away for free. If you want to get credit for it pay $250, [write] an exam, and then if you score well you [go] to campus, do a one-semester supplement, pay $30,000 and get a master’s degree.
This [halves] the cost of higher education for a master’s degree. Imagine if the Ivy League – or any university – [extended that to] all the courses they give academic credit for. Of the $250,000 that they are used to collecting and are reliant on [for each degree course, they] can only collect $100,000 because $150,000 is effectively given away for free. So far no university has an incentive to rock the boat too much on this. [However,] just because the disruption does not happen immediately doesn’t mean it won’t happen.
Wind: The concern is that especially for the leading universities, it's an excuse not to innovate. They are saying, 'Look how innovative we are; we have MOOCs, or we offer classes on Coursera,' and basically the rest of the education stays exactly the same as before. Some findings suggest that less than 5% of the people who start ever finish the courses on Coursera or EdX. But there are some encouraging signs: if you add interaction to the traditional Coursera or EdX course, and if you apply gamification principles to get people involved, you can increase completion rates significantly.
The advantage of this — with MIT, Stanford, Penn and other universities putting all of these courses online — is that the role of the faculty shifts to that of a curator. This is the fundamental change that we have to see in education.
Knowledge@Wharton: [In addition to] a network, one other factor that the Ivy League universities offer is the brand. When you have this innovative model like Minerva, how do you establish a brand that is acceptable to students as well as employers?
Nelson: Minerva was built as a positive brand. When you meet somebody at Minerva you know that they have … been given systematic frameworks of analysis that they can apply effectively to the rest of the world. Our challenge is to propagate that brand, to get people aware of it. The good news is that the internet is a very good way of disseminating information. Brand building in today’s world doesn’t take centuries; it doesn’t even take decades.
Wind: The final word on branding is always [from] the consumer. One, the best carrier of the brand, especially on the positive side, would be the alumni. So the value of the degree, the value of the Minerva experience, is a function of how good the alumni are. Two, a lot [depends on] the employability of and demand for Minerva students.
Nelson: It’s too early to tell.
Building the bacterial wall: The blue balls are wall-making proteins. The yellow represents a newly synthesized bacterial cell wall. The green color represents "scaffolding" proteins. Video: Janet Iwasa for Harvard Medical School.
Tech companies are doing so because of a high-profile series of failures of automation, which have prompted a wave of intense pressure from investors, the public, and governments.
Tesla’s highly automated production line failed to produce cars at the rate CEO Elon Musk promised, prompting questions about the electric-car maker’s solvency. Systems at Google’s YouTube failed to flag extremist and exploitative videos. Russian operatives have worked to influence elections using Facebook, whose systems separately created categories of users with labels such as “Jew hater” that it then allowed advertisers to target.
While companies such as Google and Facebook still insist that they’re just distribution platforms rather than content creators and bear limited, if any, responsibility for most of the content they host, they’re increasingly acknowledging they need to do something to curb abuses. In the short-term at least, that approach usually involves more humans.
“Humans are underrated,” tweeted Musk, as the company struggles to ramp up production of its Model 3 sedan. Musk has blamed an overly automated production process. “We had this crazy, complex network of conveyor belts… And it was not working, so we got rid of that whole thing,” he told CBS.
Meanwhile, Google and Facebook have been hiring thousands of people to monitor content and advertising on their platforms, amid backlash against their hosting of extremist videos and messages, videos depicting the exploitation of children, propaganda, and content created to manipulate electorates in the US and elsewhere.
Facebook CEO Mark Zuckerberg reiterated to US legislators last week that the company planned to double its security and content moderation workers to 20,000 people by the end of the year—an investment that he acknowledged would hurt its profitability.
YouTube CEO Susan Wojcicki in December said the Google-owned video site aimed to have 10,000 people working to find and combat content that violates its policies, a 25% increase according to BuzzFeed.
Artificial-intelligence experts say Zuckerberg and other tech executives are over-optimistic about the timeline for computers identifying things such as toxic speech, and point to existing systems that fail at that task. A new Barclays research report says that humans are better than robots at “sensorimotor skills” and “cognitive functionality,” meaning humans are less clumsy than robots and are better at making decisions factoring in context and in cases where there’s incomplete information. There are reasons to be confident that humans will retain some of those advantages for decades into the future.
But any surge in hiring by tech companies is unlikely to significantly offset the toll on employment from the current wave of automation. And the jobs that such companies are hiring for at scale—such as people to watch videos for offensive content—tend to require lower skills, and pay lower wages.
Fluorescent-labeled cells used to train neural networks. Image: Allen Institute.
New 3D models of living human cells generated by machine-learning algorithms are allowing scientists to understand the structure and organization of a cell's components from simple microscope images.
Why it matters: The tool developed by the Allen Institute for Cell Science could be used to better understand how cancer and other diseases affect cells or how a cell develops and its structure changes — important information for regenerative medicine.
"Each cell has billions of molecules that, fortunately for us, are organized into dozens of structures and compartments that serve specialized functions that help cells operate," says the Allen Institute's Graham Johnson, who helped develop the new model.
What they did: The researchers used gene editing to label the nucleus, mitochondria and other structures inside live human induced pluripotent stem cells (iPSC) with fluorescent tags and took tens of thousands of images of the cells.
They then used those images to train a type of neural network known as a generative adversarial network (GAN). That yielded a model that can predict the most likely shape of the structures and where they are in cells based on just the cell's plasma membrane and nucleus.
Using a different algorithm, they created a model that can take an image of a cell that hasn't been fluorescent-labeled — in which it's difficult to distinguish the cell's components ("it looks like static on an old TV set," Graham Johnson says) — and find the structures.
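The setup in both cases is supervised image-to-image learning: paired labeled/unlabeled images go in, and out comes a model that predicts the fluorescent signal for images that were never labeled. As a toy stand-in for the Allen Institute's deep networks (the real tool uses deep convolutional models; this is only a per-patch linear fit on synthetic data, purely illustrative), a sketch:

```python
import numpy as np

# Toy sketch of label-free structure prediction: learn a mapping from an
# unlabeled (brightfield-like) image to a fluorescence-like target image.
# Here a least-squares fit over 3x3 pixel patches stands in for the
# neural network; the data below is synthetic, not microscope imagery.

rng = np.random.default_rng(0)

def extract_patches(img, k=3):
    """Flatten every k x k neighborhood of img into one row per pixel."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    cols = []
    for dy in range(k):
        for dx in range(k):
            cols.append(padded[dy:dy + img.shape[0],
                               dx:dx + img.shape[1]].ravel())
    return np.stack(cols, axis=1)  # shape: (n_pixels, k*k)

# Synthetic training pair: the "fluorescence" target is a smoothed,
# thresholded version of the input, mimicking a labeled structure.
x_train = rng.random((32, 32))
y_train = (extract_patches(x_train).mean(axis=1) > 0.5).astype(float)

# "Training": fit patch -> pixel intensity by least squares.
A = extract_patches(x_train)
w, *_ = np.linalg.lstsq(A, y_train, rcond=None)

# "Inference": predict fluorescence for a new, unlabeled image.
x_test = rng.random((32, 32))
y_pred = extract_patches(x_test) @ w
print(y_pred.shape)  # one predicted intensity per pixel
```

The point of the sketch is the workflow, not the model: once trained on labeled pairs, prediction needs only the cheap, non-toxic unlabeled image.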
What they found: When they compared the predicted images to actual labeled ones, the Allen Institute researchers said they were nearly indistinguishable.
The advance: Gene editing and fluorescent dyes often used to study cells only allow a few components to be visualized at once and can be toxic, limiting how long researchers can observe a cell.
Plus, "knowledge gained from more expensive techniques or ones that take a while to do and do well can be inexpensively applied to everyone’s data," says the Allen Institute's Greg Johnson, who also worked on the tool. "This provides an opportunity to democratize science."
The Science of Nootropics
Nootropics, broadly speaking, are substances that can safely enhance cognitive performance. They’re a group of (as yet unclassified) research chemicals, over-the-counter supplements, and a few prescription drugs, taken in various combinations, that are neither addictive nor harmful, don’t come laden with side effects, and are basically meant to improve your brain’s ability to think.
Right now, it’s not entirely clear how nootropics as a group work, for several reasons. How effective any one component of a nootropic supplement (or a stack) is depends on many factors, including the neurochemistry of the user, which is connected to genes, mood, sleep patterns, weight, and other characteristics.
However, there are some startups creating and selling nootropics that have research scientists on their teams, with the aim of offering reliable, proven cognitive enhancers. Qualia is one such nootropic. This 42-ingredient supplement stack is created by the Neurohacker Collective, a group that boasts an interdisciplinary research team including Sara Adães, who has a PhD in neuroscience, and Jon Wilkins, a Harvard PhD in biophysics.
Some of Qualia’s ingredients are found in other stacks: Noopept, for example, and vitamin B complex are some of the usual suspects in nootropics. Green tea extract, L-theanine, taurine, and Ginkgo biloba are also familiar to many users, although many of the other components might stray into the exotic for most of us. Mucuna pruriens, for example, is a source of L-DOPA, which crosses the blood–brain barrier to increase concentrations of dopamine in the brain; L-DOPA is commonly used to treat dopamine-responsive dystonia and Parkinson’s disease.
Most transformative medicines originate in curiosity-driven science, evidence says.
Would we be wise to prioritize “shovel-ready” science over curiosity-driven, fundamental research programs? In the long term, would that set the stage for the discovery of more medicines?
To find solid answers to these questions, scientists at Harvard and the Novartis Institute for Biomedical Research (NIBR), publishing in Science Translational Medicine, looked deep into the discovery of drugs and showed that, in fact, fundamental research is “the best route to the generation of powerful new medicines.”
“The discoveries that lead to the creation of a new medicine do not usually originate in an experiment that sets out to make a drug. Rather, they have their origins in a study — or many studies — that seek to understand a biological or chemical process,” said Mark Fishman, one of three authors of the study. “And often many years pass, and much scientific evidence accumulates, before someone realizes that maybe this work holds relevance to a medical therapy. Only in hindsight does it seem obvious.”
Fishman is a professor in the Harvard Department of Stem Cell and Regenerative Biology, a faculty member of the Harvard Stem Cell Institute, and former president of NIBR. He is a consultant for Novartis and MPM Capital, and is on the board of directors of Semma Therapeutics and the scientific advisory board of Tenaya Therapeutics.
CRISPR-Cas9 is a good example of discovery biology that opened new opportunities in therapeutics. It started as a study of how bacteria resist infection by viruses. Scientists figured out how the tools that bacteria use to cut the DNA of an invading virus could be used to edit the human genome, and possibly to target genetic diseases directly.
The origins of CRISPR-Cas9 were not utilitarian, but those discoveries have the potential to open a new field of genomic medicine.
Blood pressure medication is another example of how fundamental discoveries can lead to transformative medicines.
People who suffer from high blood pressure often take drugs that act by blocking the angiotensin-converting enzyme. Those medicines would never have been created without the discovery of the role of renin (a renal extract) in regulating blood pressure in 1898, or without the discovery of angiotensin in 1939, or without the solid understanding of how the enzyme works, shown in 1956.
This work was not tied earlier to making pills for hypertension, mainly because hypertension was generally believed to be harmless until the 1950s, when studies showed its relationship to heart disease. Before then, the control of blood pressure was itself a fundamental science, beginning with Stephen Hales’ measurement of blood pressure in a horse in 1733.
The discovery of ACE inhibitors really reflects the convergence of two fields of fundamental, curiosity-driven discovery.
Yet some observers believe that projects that can demonstrate up front that they could produce something useful should take priority over projects that explore fundamental questions. Would there be many more medicines if academics focused more on programs with practical outcomes? How would that shift affect people in the future?
To find answers, Fishman and his colleagues investigated the many scientific and historical paths that have led to new drugs. The study they produced is a contemporary look at the evidence linking basic research to new medicines.
The authors used a list of the 28 drugs defined by other scientists as the “most transformative” medicines in the United States between 1985 and 2009. The group examined:
Whether the drug’s discovery began with an observation about the roots of disease;
Whether the biologist believed that it would be relevant to making a new medicine; and
How long it took to realize that.
To mitigate bias, the researchers repeatedly corroborated the assignment with outside experts.
They found that eight out of 10 of the medicines on their list led back to a fundamental discovery — or series of discoveries — without a clear path to a new drug.
The average time from discovery to new drug approval was 30 years, the majority of which was usually spent in academia, before pharmaceutical or biotechnology companies started the relevant drug development programs.
Fishman concluded, “We cannot predict which fundamental discovery will lead to a new drug. But I would say, from this work and my experiences both as a drug discoverer and a fundamental scientist, that the foundation for the next wave of great drugs is being set today by scientists driven by curiosity about the workings of nature.”
What industry and academic leaders say
Leaders in biomedicine from industry, business, and academia warmly welcome this new body of evidence, as it supports the case for funding curiosity-driven, non-directed, fundamental research into the workings of life.
“This perspective on drug discovery reminds all of us that while many in both industry and academia have been advocating for a more rational approach to R&D, the scientific substrate we depend on results from a less than orderly process. The impact of basic research and sound science is often unpredictable and underestimated. With several telling examples, the authors illustrate how they can have a ripple effect through our field.”
– Jean-François Formela, M.D., Partner, Atlas Venture
“The paper presents a compelling argument for investing in fundamental, curiosity-driven science. If it often takes decades to recognize when a new discovery should prompt a search for targeted therapeutics, we should continue to incentivize academic scientists to follow their nose and not their wallets.”
– George Daley, M.D., Ph.D., Dean of the Faculty of Medicine, Caroline Shields Walker Professor of Medicine, and Professor of Biological Chemistry and Molecular Pharmacology at Harvard Medical School
“There is a famous story of a drunk looking for his lost keys under a streetlight because the light is better there. As Mark reminds us, if we only look for cures where the light has already shone, we will make few if any new discoveries. Basic research shines a light into the dark corners of our understanding, and by that light we can find wonderful new things.”
— Laurie Glimcher, M.D., President and CEO of the Dana-Farber Cancer Institute and Richard and Susan Smith Professor of Medicine at Harvard Medical School
“The importance of fundamental discovery to advances in medicine has long been a central tenet of academic medicine, and it is wonderful to see that tenet supported by this historical analysis. For those of us committed to supporting this pipeline, it is a critical reminder that young scientists must be supported to pursue out-of-the-box questions and even new fields. In the end, that is one of the key social goods that a research university provides to future generations.”
— Katrina Armstrong, M.D., M.S.C.E., Physician-in-Chief, Department of Medicine, Massachusetts General Hospital
“Human genetics is powering important advances in translational medicine, opening new doors to treatments for both common and rare diseases at an increasingly rapid pace. Yet, these discoveries still require fundamental, basic scientific understanding into the drug targets’ mechanism of action. In this way, the potential of the science can be unlocked through a combination of curiosity, agility, and cross-functional collaboration to pursue novel therapeutic modalities like gene and cellular therapies, living biologics, and devices. This paper illustrates the value of following the science with an emphasis on practical outcomes and is highly relevant in today’s competitive biopharmaceutical environment, where much of the low-hanging fruit has already been harvested.”
– Andy Plump, M.D., Ph.D., Chief Medical and Scientific Officer, Takeda Pharmaceutical Co.
“Medicine depends on scientists asking questions, collectively and over generations, about how nature works. The evidence provided by Fishman and colleagues supports an already strong argument for continued and expanded funding of our nation’s primary source of fundamental science: the NIH and the NSF.”
– Douglas Melton, Ph.D., Xander University Professor at Harvard, Investigator of the Howard Hughes Medical Institute, and co-director of the Harvard Stem Cell Institute
“Just as we cannot translate a language we do not understand, translational medicine cannot exist without fundamental insights to be converted into effective therapies. In their excellent review, Fishman and his colleagues bring the factual evidence needed to enrich the current debate about the optimal use of public funding of biomedical research. The product of public research funding should be primarily fundamental knowledge. The product of industrial R&D should be primarily transformative products based on this knowledge.”
— Elias Zerhouni, M.D., President, Global R&D, Sanofi, and former Director of the National Institutes of Health (2002-2008)
“Fundamental research is the driver of scientific knowledge. This paper demonstrates that fundamental research led to most of the transformative medicines approved by the FDA between 1985 and 2009. Because many genes and genetic pathways are evolutionarily conserved, discoveries made from studies of organisms that are highly tractable experimentally, such as yeasts, worms, and flies, have often led to and been integrated with findings from studies of more complex organisms to reveal the bases of human disease and identify novel therapeutic targets.”
– H. Robert Horvitz, Nobel Laureate; David H. Koch Professor, Member of the McGovern Institute for Brain Research and of the David H. Koch Institute for Integrative Cancer Research, and Howard Hughes Medical Institute Investigator at Massachusetts Institute of Technology
“This meticulous and important study of the origin of today’s most successful drugs finds convincingly that the path to discovery lies through untargeted fundamental research. The authors’ clear analysis is an effective counter to today’s restless investors, academic leaders, and philanthropists, whose impatience with academic discovery has itself become an impediment to the conquest of disease.”
— Marc Kirschner, John Franklin Enders University Professor, Department of Systems Biology, Harvard Medical School
“Some ask if there is a Return on Investment (ROI) in basic biomedical research. With transformative therapies as the ‘R,’ this work traces the path back to the starting ‘I,’ and repeatedly turns up untargeted academic discoveries — not infrequently, two or more that are unrelated to each other. Conclusion? A nation that wants the ‘R’ to keep coming must maintain, or better, step up the ‘I’: that is, funding for curiosity-driven, basic research.”
Despite its promise, a lack of spatial-temporal context is one of the challenges to making the most of single-cell analysis techniques. For example, information on the location of cells is particularly important when looking at how a common form of early-stage breast cancer, called ductal carcinoma in situ (DCIS), progresses to a more invasive form, called invasive ductal carcinoma (IDC). “Exactly how DCIS invasion occurs genomically remains poorly understood,” said Nicholas Navin, Ph.D., associate professor of genetics at the University of Texas MD Anderson Cancer Center. Navin is a pioneer in the field, having developed one of the first methods for scDNA-seq.
Cellular spatial data is critical for knowing whether tumor cells are DCIS or IDC. So, Navin developed topographical single-cell sequencing (TSCS). Navin and a team of researchers published their findings in February 2018 in Cell. “What we found was that, within the ducts, mutations had already occurred and had generated multiple clones and those clones migrated into the invasive areas,” Navin said.
Navin and his colleagues are also using single-cell techniques to study how triple-negative breast cancer becomes resistant to the standard form of treatment for the disease, neoadjuvant chemotherapy. In that work, published in an April 2018 online issue of Cell using scDNA-seq and scRNA-seq, Navin and his colleagues found that resistant genotypes were pre-existing and thus adaptively selected. However, the expression of resistance genes was acquired through subsequent reprogramming as a result of chemotherapy. “Our data raise the possibility of therapeutic strategies to overcome chemoresistance by targeting pathways identified in this study,” Navin said.
The authors of research published in 2017 in Genome Biology also identified lineage tracing as one of the technologies that will “likely have wide-ranging applications in mapping developmental and disease-progression trajectories.” In March researchers published an online study in Nature in which they combined single-cell analysis with a lineage tracing technique, called GESTALT (genome editing of synthetic target arrays for lineage tracing), to define cell type and location in the juvenile zebrafish brain.
The combined technique, called scGESTALT, uses CRISPR-Cas9 to perform the lineage tracing and single-cell RNA sequencing to extract the lineage records. Cas9-induced mutations accumulate in a CRISPR barcode incorporated into an animal’s genome. These mutations are passed on to daughter cells and their progeny over several generations and can be read via sequencing. This information has allowed researchers to build lineage trees. Using single-cell analysis, the team could then determine the diversity of cell types and their lineage relationships. Collectively, this work provided a snapshot of how cells and cell types diverge in lineages as the brain develops. “Single-cell analysis is providing us with a lot of information about small differences at cell type-specific levels, information that is missed when looking at the tissue-wide level,” said Bushra Raj, Ph.D., a postdoctoral fellow in Alex Schier’s lab at Harvard University and first author on the paper.
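The inference behind barcode-based lineage tracing is simple in principle: edits are inherited, so a cell whose edit set strictly contains another cell's edit set descends from it. A toy sketch of that subset logic (the cell names and edit IDs below are hypothetical, not data from the scGESTALT study):

```python
# Toy sketch of lineage reconstruction from inherited barcode edits.
# Each cell's barcode is recorded as the set of Cas9 edits it carries;
# shared edits imply a shared ancestor.

barcodes = {
    "cell_A": {"e1"},               # founder: one early edit
    "cell_B": {"e1", "e2"},         # inherits e1, gains e2
    "cell_C": {"e1", "e2", "e4"},   # inherits e1+e2, gains e4
    "cell_D": {"e1", "e3"},         # inherits e1, gains e3
}

def infer_parent(cell, barcodes):
    """Return the cell whose edit set is the largest strict subset of
    this cell's edits, i.e. its closest recorded ancestor."""
    edits = barcodes[cell]
    candidates = [
        other for other, e in barcodes.items()
        if other != cell and e < edits  # strict subset => ancestor
    ]
    if not candidates:
        return None  # no recorded ancestor: a founder cell
    return max(candidates, key=lambda c: len(barcodes[c]))

tree = {cell: infer_parent(cell, barcodes) for cell in barcodes}
print(tree)
# cell_A is the founder; B and D descend from A; C descends from B.
```

Real barcodes are noisier (edits can drop out or recur), so published methods use more robust tree-building, but the inherited-edit logic is the same.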
Raj’s collaborators included University of Washington’s Jay Shendure, Ph.D., and Harvard Medical School’s Allon Klein, Ph.D., pioneers in the field of single-cell analysis. The team sequenced 60,000 cells from the entire zebrafish brain across multiple animals. The researchers identified more than 100 cell types in the juvenile brain, including several neuronal types and subtypes in distinct regions, and dozens of marker genes. “What was unknown was the genetic markers for many of these cell types,” Raj explained. “This work is a stepping stone,” she added. “It’s easy to see how we might one day compare normal gene–expression maps of the brain and other organs to help characterize changes that occur in congenital disease or cancer.”
Raj credits single-cell analysis with accelerating the field of developmental biology.
“People have always wanted to work at the level of the cell, but the technology was lacking,” she said. “Now that we have all of these sequenced genomes, and now that we have these tools that allow us to compartmentalize individual cells, this seems like the best time to challenge ourselves as researchers to understand the nitty-gritty details we weren’t able to assay before.”
A gold leaf paint and ink depiction of the Plasmodium falciparum lifecycle by Alex Cagan.
Human disease-relevant scRNA-seq is not just for vertebrates. For example, a team of researchers at the Wellcome Sanger Institute are working on developing a Malaria Cell Atlas. Their goal is to use single-cell technology to produce gene activity profiles of individual malaria parasites throughout their complex lifecycle. “The sequencing data we get allows us to understand how the parasites are using their genomes,” said Adam Reid, Ph.D., a senior staff scientist at the Sanger. In March 2018, the team published the first part of the atlas, detailing its results for the blood stage of the Plasmodium lifecycle in mammals. Reid contends these results will change the fight against malaria. “Malaria research is a well-funded and very active area of research. We’ve managed to get quite a bit of understanding of how the parasite works. What single-cell analysis is doing is allowing us to better understand the parasite in populations. We thought they were all doing the same thing. But, now we can see they are behaving differently.”
The ability to amplify very small amounts of RNA was the key innovation for malaria researchers. “When I started doing transcriptome analysis 10 years ago, we needed to use about 5 micrograms of RNA. Now, we can use 5 picograms, 1 million times less,” Reid said. That innovation allows scientists like Reid to achieve unprecedented levels of resolution in their work. For Reid, increased resolution means there is hope that science will be able to reveal how malaria evades the immune system in humans and how the parasites develop resistance to drugs. Reid predicted the Atlas will serve as the underpinning for work by those developing malaria drugs and vaccines. “They will know where in the life cycle genes are used and where they are being expressed,” he said. Drug developers can then target those genes. The Atlas should be complete in the next two years, Reid added.
In the meantime, Reid and his colleagues are focused on moving their research from the lab to the field, particularly to Africa. “We want to look at these parasites in real people, in real settings, in real diseases states,” he explained. Having access to fresher samples is one reason to take the research into the field. “The closer we can get to the disease, the better chance we have of making an impact.” Reid anticipates that RNA-seq technology is on the verge of being portable enough to go into the field (see Preparing scRNA-seq for the Clinic & the Field). Everything from instrumentation to software is developing rapidly, he said. Reid also said that the methods used to understand the malaria parasite will likely be used to understand and create atlases for other disease vectors.
It is clear to those using single-cell analysis in basic research that the path ahead includes using the techniques in the clinic. “As the technologies become more stable, there will be a lot of opportunities for clinical applications,” Navin said. These include early detection by sampling for cancer markers in urine, prostate fluid, and the like. It also includes non-invasive monitoring of rare circulating tumor cells, as well as personalizing treatment decisions using specific markers. These methods will be particularly useful in the case of samples that today would be labeled QNS, or ‘quantity not sufficient.’ “Even with QNS samples, these methods allow you to get high-quality datasets to guide treatment decisions.”