Channel Description:

Best content from the best sources, handpicked by Shyam. Sources include Harvard University, MIT, McKinsey & Co., Wharton, Stanford, and other top educational institutions. Domains include cybersecurity, machine learning, deep learning, big data, education, information technology, management, and others.



    How machine learning will fuel huge innovation over the next 5 years  

    Machine learning is coming into a golden age, and with it we’re seeing an awakening of possibilities formerly reserved for science fiction.
    Machine learning (ML) is a computer’s way of learning from examples, and it’s one of the most useful tools we have for the construction of artificial intelligence (AI). It begins with the design of an algorithm that learns from collected data, creating machines that in most cases become smarter as data volumes intensify.
    We’ve seen a breakthrough in the field of ML in the last five years in part due to the recent wealth of big data streams provided from high-speed internet, cloud computing, and widespread smartphone usage, leading to the birth of the now popular “deep learning” algorithms. Heavily used applications that have emerged with ML at their core include recommendation systems like those from Netflix and Amazon, face recognition technology as seen in Facebook, email spam filters like those from Google and Microsoft, and speech recognition systems such as Siri.
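    The "learning from examples" idea behind applications like spam filters can be illustrated with a toy sketch. The following minimal Python example is illustrative only (the training data and word-counting heuristic are invented for this sketch, not any vendor's actual algorithm), but it shows the core loop: collect labelled examples, extract statistics, and use them to classify new inputs.

```python
from collections import Counter

# Toy labelled examples: (message, is_spam)
examples = [
    ("win money now", True),
    ("free prize claim now", True),
    ("meeting at noon", False),
    ("lunch with the team", False),
]

# "Training": count how often each word appears in spam vs. non-spam messages.
spam_counts, ham_counts = Counter(), Counter()
for text, is_spam in examples:
    (spam_counts if is_spam else ham_counts).update(text.split())

def classify(message):
    """Label a message as spam if its words occur more often in spam examples."""
    words = message.split()
    spam_score = sum(spam_counts[w] for w in words)
    ham_score = sum(ham_counts[w] for w in words)
    return spam_score > ham_score

print(classify("claim your free prize"))   # spam-like message
print(classify("team meeting at noon"))    # ordinary message
```

    Note that the classifier "becomes smarter as data volumes intensify" in exactly the sense the article describes: adding more labelled examples refines the word counts and therefore the decisions.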
    While the depth of advancement is unknown, what we can say with high certainty is that development in this field in the past five years will be nothing compared to what we’re going to see in the five years to come. Based on machine learning’s current state, here are four predictions of what we could see in the near future:
    Image-Based Recognition: The technology for image and video-based recognition is on the horizon, and with it a whole new experience for users. Thanks to deep learning, we are now at the dawn of computers recognizing images, and the people and actions within them, with high accuracy based on the image alone and with minimum reliance on external data. It’s not just new pictures that will become recognizable either, but the entire history of digitized images and video footage. This will massively change how these assets are located and shared online. For example, YouTube might soon intelligently find content related to parts of a clip you watched and liked based only on the visual content of the video itself. The resulting efficiencies in both our work and personal time will be profound.
    Healthcare: Machine learning’s ability to analyze and store massive amounts of data should provide physicians with much-needed second opinions and lead to the detection and treatment of medical ailments on a mass scale. Packaged as smart, wearable computing devices, personal health monitors that detect various conditions as they arise should become widespread in the next five years, in a similar fashion to activity trackers like Fitbit. The advancements here could significantly accelerate our human desire to protect our own longevity and create major breakthroughs for the operations of the medical industry.
    Travel & Communication: By 2020, real-time translation technology may be fully accessible. We’ll see everything from an app on your phone that instantly translates foreign signs and texts to phone conversations that are immediately converted to a listener’s native language, without speakers even knowing the difference. As globalization booms, the language lines will soon be crossed. Business, in particular, stands to benefit enormously from the advancement here, with tech giants such as Google and Microsoft already taking the necessary steps to build such tools, making the need for a premium multilingual workforce obsolete.
    Advertising: Based on recent ML advancements, in just a few short years augmented reality technology should become the commonplace method for integrated branding. This will allow advertisers to seamlessly place products into existing content by properly identifying the depth, relative size, lighting, and shading of the product in comparison to the setting. This essentially makes any historical video property available for integration. The computer vision technology firm Mirriad has already been heralded (and won an Oscar) for its advancements in the field. Looking at online video, as companies continue to try and tap into hugely popular amateur content, this technology will revolutionize their capabilities.
    So while we have already seen enormous advancements in the fields above of late, a full-scale commercialization of machine learning technologies could be seen as soon as 2020. While I’ve only listed a few predictions above, almost all sectors of the economy stand to benefit enormously from the efficiencies of this new era of machine learning. We are already seeing a swell in consumer demand in experiences that require ML at their core, and the examples above only touch the surface of what is possible. If things continue on the trajectory we expect, the golden age of machine learning might very well make the next five years in technology the most exciting yet.


    Two quantum properties teleported together for first time

    The values of two inherent properties of one photon – its spin and its orbital angular momentum – have been transferred via quantum teleportation onto another photon for the first time by physicists in China. Previous experiments have managed to teleport a single property, but scaling that up to two properties proved to be a difficult task, which has only now been achieved. The team's work is a crucial step forward in improving our understanding of the fundamentals of quantum mechanics and the result could also play an important role in the development of quantum communications and quantum computers.

    Alice and Bob

    Quantum teleportation first appeared in the early 1990s, after six researchers, including Charles Bennett of IBM in New York, developed a basic quantum teleportation protocol. To successfully teleport a quantum state, you must make a precise initial measurement of a system, transmit the measurement information to a receiving destination and then reconstruct a perfect copy of the original state. The "no-cloning" theorem of quantum mechanics dictates that it is impossible to make a perfect copy of a quantum particle. But researchers found a way around this via teleportation, which allows a flawless copy of a property of a particle to be made. This occurs thanks to what is ultimately a complete transfer (rather than an actual copy) of the property onto another particle such that the first particle loses all of the properties that are teleported.
    The protocol has an observer, Alice, send information about an unknown quantum state (or property) to another observer, Bob, via the exchange of classical information. Both Alice and Bob are first given one half of an additional pair of entangled particles that act as the "quantum channel" via which the teleportation will ultimately take place. Alice would then interact the unknown quantum state with her half of the entangled particle, measure the combined quantum state and send the result through a classical channel to Bob. The act of the measurement itself alters the state of Bob's half of the entangled pair and this, combined with the result of Alice's measurement, allows Bob to reconstruct the unknown quantum state. The first experimental teleportation of the spin (or polarization) of a photon took place in 1997. Since then, the states of atomic spins, coherent light fields, nuclear spins and trapped ions have all been teleported.
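    The single-property protocol described above can be sketched numerically. The following is a minimal NumPy simulation of the standard textbook spin-only scheme, not the Hefei team's two-property experiment; the `teleport` helper, qubit labelling and Bell-basis ordering are choices made for this sketch. Alice projects her two qubits onto the Bell basis, sends the two-bit outcome to Bob, and Bob applies the matching Pauli correction.

```python
import numpy as np

# Pauli corrections Bob applies, indexed by Alice's 2-bit measurement result.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
corrections = [I, X, Z, X @ Z]  # ordering matches the Bell basis below

# Bell basis for Alice's joint measurement, rows over |00>,|01>,|10>,|11>.
bell = np.array([
    [1, 0, 0, 1],    # |Phi+>
    [0, 1, 1, 0],    # |Psi+>
    [1, 0, 0, -1],   # |Phi->
    [0, 1, -1, 0],   # |Psi->
], dtype=complex) / np.sqrt(2)

def teleport(psi, rng=np.random.default_rng(0)):
    """Teleport the single-qubit state psi from Alice to Bob."""
    # Shared entangled pair |Phi+> between Alice's and Bob's qubits.
    pair = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
    state = np.kron(psi, pair)         # 3-qubit state; Alice holds qubits 0 and 1
    state = state.reshape(4, 2)        # rows: Alice's two qubits, cols: Bob's qubit
    amps = bell.conj() @ state         # project Alice's qubits onto the Bell basis
    probs = np.sum(np.abs(amps) ** 2, axis=1)
    k = rng.choice(4, p=probs)         # Alice's random measurement outcome (2 bits)
    bob = amps[k] / np.sqrt(probs[k])  # Bob's post-measurement state
    return corrections[k] @ bob        # the classical bits tell Bob which fix to apply

psi = np.array([0.6, 0.8j])            # an arbitrary normalized qubit state
out = teleport(psi)
# Bob's state equals psi up to a global phase:
print(abs(abs(np.vdot(psi, out)) - 1) < 1e-9)
```

    Note the two features the article emphasizes: Alice's measurement destroys her copy (consistent with no-cloning), and Bob cannot finish the reconstruction without the classical measurement result.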
    But any quantum particle has more than one given state or property – they possess various "degrees of freedom", many of which are related. Even the simple photon has various properties such as frequency, momentum, spin and orbital angular momentum (OAM), which are inherently linked.

    More than one

    Teleporting more than one state simultaneously is essential to fully describe a quantum particle and achieving this would be a tentative step towards teleporting something larger than a quantum particle, which could be very useful in the exchange of quantum information. Now, Chaoyang Lu and Jian-Wei Pan, along with colleagues at the University of Science and Technology of China in Hefei, have taken the first step in simultaneously teleporting multiple properties of a single photon.
    In the experiment, the team teleports the composite quantum states of a single photon encoded in both its spin and OAM. To transfer the two properties requires not only an extra entangled set of particles (the quantum channel), but a "hyper-entangled" set – where the two particles are simultaneously entangled in both their spin and their OAM. The researchers shine a strong ultraviolet pulsed laser on three nonlinear crystals to generate three entangled pairs of photons – one pair is hyper-entangled and is used as the "quantum channel", a second entangled pair is used to carry out an intermediate "non-destructive" measurement, while the third pair is used to prepare the two-property state of a single photon that will eventually be teleported.

    This schematic shows exactly how the polarization and the OAM were teleported via the comparative measurements and an intermediate non-destructive step. (Courtesy: Nature 518, 516/Wang et al.)
    The image above represents Pan's double-teleportation protocol – A is the single photon whose spin and OAM will eventually be teleported to C (one half of the hyper-entangled quantum channel). This occurs via the other particle in the channel B. As B and C are hyper-entangled, we know that their spin and OAM are strongly correlated, but we do not actually know what their values are – i.e. whether they are horizontally, vertically or orthogonally polarized. 
    So to actually transfer A's polarization and OAM onto C, the researchers make "comparative measurements" (referred to as CM-P and CM-OAM in the image) with B. In other words, instead of revealing B's properties, they detect how A's polarization and OAM differ from B's. If the difference is zero, we can tell that A and B have the same polarization or OAM, and since B and C are correlated, that C now has the same properties that A had before the comparison measurement.
    On the other hand, if the comparative measurement showed that A's polarization as compared with B differed by 90° (i.e. A and B are orthogonally polarized), then we would rotate C's field by 90° with respect to that of A to make a perfect transfer once more. Simply put, making two comparative measurements, followed by a well-defined rotation of the still-unknown polarization or OAM, would allow us to teleport A's properties to C.

    Perfect protocol

    One of the most challenging steps for the researchers was to link together the two comparative measurements. Referring to the "joint measurements" box in the image above, we begin with the comparative measurement of A and B's polarization (CM-P). From here, one of three scenarios can take place – one photon travels along path 1 to the middle box (labelled "non-destructive photon-number measurement"); no photons enter the middle box along path 1; or two single photons enter the middle box along path 1.
    The middle box itself contains the second set of entangled photons mentioned previously (not shown in figure) and one of these two entangled photons is jointly measured with the incoming photons from path 1. But the researchers' condition is that if either no photons or two photons enter the middle box via path 1, then the measurement would fail. Indeed, what the middle box ultimately shows is that exactly one photon existed in path 1, and so exactly one photon existed in path 2, given that two photons (A and B) entered CM-P. To show that indeed one photon existed in path 2 required the third and final set of entangled photons in the CM-OAM box (not shown), where the OAMs of A and B undergo a comparative measurement.
    The measurements ultimately result in the transfer or teleportation of A's properties onto C – although this may require rotating C's (as yet unknown) polarization and OAM, depending on the outcomes of the comparative measurements; the researchers did not actually implement the rotations in their current experiment. The team's work has been published in the journal Nature this week. Pan says that the team verified that "the teleportation works for both spin-orbit product state and hybrid entangled state, achieving an overall fidelity that well exceeds the classical limit". He says that these "methods can, in principle, be generalized to more [properties], for instance, involving the photon's momentum, time and frequency".

    Verification verdicts

    Physicist Wolfgang Tittel from the University of Calgary, who was not involved in the current work (but wrote an accompanying "News and Views" article in Nature) explains that the team verified that the teleportation had indeed occurred by measuring the properties of C after the teleportation. "Of course, the no-cloning theorem does not allow them to do this perfectly. But it is possible to repeat the teleportation of the properties of photon A, prepared every time in the same way, many times. Making measurements on photon C (one per repetition) allows reconstructing its properties." He points out that although the rotations were not ultimately implemented by the researchers, they found that "the properties of C differed from those of A almost exactly by the amount predicted by the outcomes of the comparative measurements. They repeated this large number of measurements for different preparations of A, always finding the properties of C close to those expected. This suffices to claim quantum teleportation".
    While it is technically possible to extend Pan's method to teleport more than two properties simultaneously, this is increasingly difficult because the probability of a successful comparative measurement decreases with each added property. "I think with the scheme demonstrated by [the researchers], the limit is three properties. But this does not mean that other approaches, either other schemes based on photons, or approaches using other particles (e.g. trapped ions), can't do better," says Tittel.
    Pan says that to teleport three properties, their scheme "needs the experimental ability to control 10 photons. So far, our record is eight photon entanglement. We are currently working on two parallel lines to get more photon entanglement." Indeed, he says that the team's next goal is to experimentally create "the largest hyper-entangled state so far: a six-photon 18-qubit Schrödinger cat state, entangled in three degrees-of-freedom, polarization, orbital angular momentum, and spatial mode. To do this would provide us with an advanced platform for quantum communication and computation protocols".


    New Ikea Furniture Will Charge Your Phone Without Wires

    Ikea are launching their first furniture range which can wirelessly charge phones and mobile devices, meaning charging cables could soon be a thing of the past.
    The range will integrate Qi wireless technology into special charging pads in the furniture, and users can then leave their devices on or around these pads to boost their batteries.
    Qi is the most widely used wireless power standard, with more than 80 smartphones carrying built-in technology. The technology works by embedding magnetic coils into furniture which then generate an electromagnetic field, which Qi-friendly devices can convert into energy.
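    The coil-based transfer described above is ordinary electromagnetic induction: an alternating current in the transmitter coil creates a changing magnetic flux, which induces a voltage in the receiver coil by Faraday's law. The quick sketch below uses illustrative numbers only (the coil size, field strength and turn count are assumed for this example, not taken from the Qi specification):

```python
import math

# Illustrative receiver-coil parameters (assumed, not from the Qi spec).
turns = 10        # number of turns in the receiver coil
area = 1e-3       # coil area in m^2 (roughly a 3.5 cm diameter coil)
b_peak = 2e-3     # peak magnetic flux density in tesla
freq = 100e3      # drive frequency in Hz (Qi operates in roughly this range)

# For sinusoidal flux phi(t) = B*A*sin(2*pi*f*t), Faraday's law gives a
# peak induced EMF of N * B * A * 2*pi*f.
emf_peak = turns * b_peak * area * 2 * math.pi * freq
print(f"peak induced EMF ~ {emf_peak:.1f} V")
```

    The point of the arithmetic is that even a weak field induces a usable voltage once the drive frequency is high, which is why charging pads switch the coil current at radio frequencies rather than at mains frequency.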
    The Swedish firm will launch their new Home Smart range across Europe and North America in April this year. It will include desks, bedside tables and lamps fitted with integrated charging pads.

    Jeanette Skjelmose, Ikea’s business manager of lighting and wireless charging, said: “Our new innovative solutions, which integrate wireless charging into home furnishings, will make life at home simpler.”
    However, experts have pointed out that the furniture will only work with devices compatible with Qi, which is powered by the Wireless Power Consortium. Other providers of wireless power include Power Matters Alliance, whose technology is used by global firms including Starbucks and McDonald’s. Ikea plan to sell charging covers for incompatible models, including Apple’s iPhone range and some Samsung models.
    Environmental group Friends of the Earth have spoken out about the ecological costs of such technology, which is difficult to separate out from the furniture at the end of its life. Campaigner Julian Kirby told the BBC: “A key principle that manufacturers of furniture with built-in wireless charging technology should consider is that the furniture is designed to be easy to disassemble for upgrade, reuse, repair or recycling.”
    Wireless charging also generates excess heat which can damage smartphone batteries, according to Gizmodo.


    Why Strategy Execution Unravels—and What to Do About It

    Since Michael Porter’s seminal work in the 1980s we have had a clear and widely accepted definition of what strategy is—but we know a lot less about translating a strategy into results. Books and articles on strategy outnumber those on execution by an order of magnitude. And what little has been written on execution tends to focus on tactics or generalize from a single case. So what do we know about strategy execution?

    We know that it matters. A recent survey of more than 400 global CEOs found that executional excellence was the number one challenge facing corporate leaders in Asia, Europe, and the United States, heading a list of some 80 issues, including innovation, geopolitical instability, and top-line growth. We also know that execution is difficult. Studies have found that two-thirds to three-quarters of large organizations struggle to implement their strategies. To find out why, we launched a research project to understand how complex organizations can execute their strategies more effectively.

     The research includes more than 40 experiments in which we made changes in companies and measured the impact on execution, along with a survey administered to nearly 8,000 managers in more than 250 companies. The study is ongoing but has already produced valuable insights. The most important one is this: Several widely held beliefs about how to implement strategy are just plain wrong. In this article we debunk five of the most pernicious myths and replace them with a more accurate perspective that will help managers effectively execute strategy.

    Myth 1: Execution Equals Alignment

    Over the past few years we have asked managers from hundreds of companies, before they take our survey, to describe how strategy is executed in their firms. Their accounts paint a remarkably consistent picture. The steps typically consist of translating strategy into objectives, cascading those objectives down the hierarchy, measuring progress, and rewarding performance. 

    When asked how they would improve execution, the executives cite tools, such as management by objectives and the balanced scorecard, that are designed to increase alignment between activities and strategy up and down the chain of command. In the managers’ minds, execution equals alignment, so a failure to execute implies a breakdown in the processes to link strategy to action at every level in the organization.

    Despite such perceptions, it turns out that in the vast majority of companies we have studied, those processes are sound. Research on strategic alignment began in the 1950s with Peter Drucker’s work on management by objectives, and by now we know a lot about achieving alignment. Our research shows that best practices are well established in today’s companies. More than 80% of managers say that their goals are limited in number, specific, and measurable and that they have the funds needed to achieve them. If most companies are doing everything right in terms of alignment, why are they struggling to execute their strategies?

    To find out, we ask survey respondents how frequently they can count on others to deliver on promises—a reliable measure of whether things in an organization get done (see “Promise-Based Management: The Essence of Execution,” by Donald N. Sull and Charles Spinosa, HBR, April 2007). Fully 84% of managers say they can rely on their boss and their direct reports all or most of the time—a finding that would make Drucker proud but sheds little light on why execution fails. When we ask about commitments across functions and business units, the answer becomes clear. Only 9% of managers say they can rely on colleagues in other functions and units all the time, and just half say they can rely on them most of the time. Commitments from these colleagues are typically not much more reliable than promises made by external partners, such as distributors and suppliers.

    When managers cannot rely on colleagues in other functions and units, they compensate with a host of dysfunctional behaviors that undermine execution: They duplicate effort, let promises to customers slip, delay their deliverables, or pass up attractive opportunities. The failure to coordinate also leads to conflicts between functions and units, and these are handled badly two times out of three—resolved after a significant delay (38% of the time), resolved quickly but poorly (14%), or simply left to fester (12%).

    Even though, as we’ve seen, managers typically equate execution with alignment, they do recognize the importance of coordination when questioned about it directly. When asked to identify the single greatest challenge to executing their company’s strategy, 30% cite failure to coordinate across units, making that a close second to failure to align (40%). Managers also say they are three times more likely to miss performance commitments because of insufficient support from other units than because of their own teams’ failure to deliver.

    Whereas companies have effective processes for cascading goals downward in the organization, their systems for managing horizontal performance commitments lack teeth. More than 80% of the companies we have studied have at least one formal system for managing commitments across silos, including cross-functional committees, service-level agreements, and centralized project-management offices—but only 20% of managers believe that these systems work well all or most of the time. More than half want more structure in the processes to coordinate activities across units—twice the number who want more structure in the management-by-objectives system.

    Myth 2: Execution Means Sticking to the Plan

    When crafting strategy, many executives create detailed road maps that specify who should do what, by when, and with what resources. The strategic-planning process has received more than its share of criticism, but, along with the budgeting process, it remains the backbone of execution in many organizations. Bain & Company, which regularly surveys large corporations around the world about their use of management tools, finds that strategic planning consistently heads the list. After investing enormous amounts of time and energy formulating a plan and its associated budget, executives view deviations as a lack of discipline that undercuts execution.

    Unfortunately, no Gantt chart survives contact with reality. No plan can anticipate every event that might help or hinder a company trying to achieve its strategic objectives. Managers and employees at every level need to adapt to facts on the ground, surmount unexpected obstacles, and take advantage of fleeting opportunities. Strategy execution, as we define the term, consists of seizing opportunities that support the strategy while coordinating with other parts of the organization on an ongoing basis. When managers come up with creative solutions to unforeseen problems or run with unexpected opportunities, they are not undermining systematic implementation; they are demonstrating execution at its best.

    Such real-time adjustments require firms to be agile. Yet a lack of agility is a major obstacle to effective execution among the companies we have studied. When asked to name the greatest challenge their companies will face in executing strategy over the next few years, nearly one-third of managers cite difficulties adapting to changing market circumstances. It’s not that companies fail to adapt at all: Only one manager in 10 saw that as the problem. But most organizations either react so slowly that they can’t seize fleeting opportunities or mitigate emerging threats (29%), or react quickly but lose sight of company strategy (24%). Just as managers want more structure in the processes to support coordination, they crave more structure in the processes used to adapt to changing circumstances.

    A seemingly easy solution would be to do a better job of resource allocation. Although resource allocation is unquestionably critical to execution, the term itself is misleading. In volatile markets, the allotment of funds, people, and managerial attention is not a onetime decision; it requires ongoing adjustment. According to a study by McKinsey, firms that actively reallocated capital expenditures across business units achieved an average shareholder return 30% higher than the average return of companies that were slow to shift funds.

    Instead of focusing on resource allocation, with its connotation of one-off choices, managers should concentrate on the fluid reallocation of funds, people, and attention. We have noticed a pattern among the companies in our sample: Resources are often trapped in unproductive uses. Fewer than one-third of managers believe that their organizations reallocate funds to the right places quickly enough to be effective. The reallocation of people is even worse. Only 20% of managers say their organizations do a good job of shifting people across units to support strategic priorities. The rest report that their companies rarely shift people across units (47%) or else make shifts in ways that disrupt other units (33%).

    Companies also struggle to disinvest. Eight in 10 managers say their companies fail to exit declining businesses or to kill unsuccessful initiatives quickly enough. Failure to exit undermines execution in an obvious way, by wasting resources that could be redeployed. Slow exits impede execution in more-insidious ways as well: Top executives devote a disproportionate amount of time and attention to businesses with limited upside and send in talented managers who often burn themselves out trying to save businesses that should have been shut down or sold years earlier. The longer top executives drag their feet, the more likely they are to lose the confidence of their middle managers, whose ongoing support is critical for execution.

    A word of warning: Managers should not invoke agility as an excuse to chase every opportunity that crosses their path. Many companies in our sample lack strategic discipline when deciding which new opportunities to pursue. Half the middle managers we have surveyed believe that they could secure significant resources to pursue attractive opportunities that fall outside their strategic objectives. This may sound like good news for any individual manager, but it spells trouble for a company as a whole, leading to the pursuit of more initiatives than resources can support. Only 11% of the managers we have surveyed believe that all their company’s strategic priorities have the financial and human resources needed for success. That’s a shocking statistic: It means that nine managers in 10 expect some of their organizations’ major initiatives to fail for lack of resources. Unless managers screen opportunities against company strategy, they will waste time and effort on peripheral initiatives and deprive the most promising ones of the resources they need to win big. Agility is critical to execution, but it must fit within strategic boundaries. In other words, agility must be balanced with alignment.

    Myth 3: Communication Equals Understanding

    Many executives believe that relentlessly communicating strategy is a key to success. The CEO of one London-based professional services firm met with her management team the first week of every month and began each meeting by reciting the firm’s strategy and its key priorities for the year. She was delighted when an employee engagement survey (not ours) revealed that 84% of all staff members agreed with the statement “I am clear on our organization’s top priorities.” Her efforts seemed to be paying off.

    Then her management team took our survey, which asks members to describe the firm’s strategy in their own words and to list the top five strategic priorities. Fewer than one-third could name even two. The CEO was dismayed—after all, she discussed those objectives in every management meeting. Unfortunately, she is not alone. Only 55% of the middle managers we have surveyed can name even one of their company’s top five priorities. In other words, when the leaders charged with explaining strategy to the troops are given five chances to list their company’s strategic objectives, nearly half fail to get even one right.

    Not only are strategic objectives poorly understood, but they often seem unrelated to one another and disconnected from the overall strategy. Just over half of all top team members say they have a clear sense of how major priorities and initiatives fit together. It’s pretty dire when half the C-suite cannot connect the dots between strategic priorities, but matters are even worse elsewhere. Fewer than one-third of senior executives’ direct reports clearly understand the connections between corporate priorities, and the share plummets to 16% for frontline supervisors and team leaders.

    Senior executives are often shocked to see how poorly their company’s strategy is understood throughout the organization. In their view, they invest huge amounts of time communicating strategy, in an unending stream of e-mails, management meetings, and town hall discussions. But the amount of communication is not the issue: Nearly 90% of middle managers believe that top leaders communicate the strategy frequently enough. How can so much communication yield so little understanding?

    Part of the problem is that executives measure communication in terms of inputs (the number of e-mails sent or town halls hosted) rather than by the only metric that actually counts—how well key leaders understand what’s communicated. A related problem occurs when executives dilute their core messages with peripheral considerations. 

    The executives at one tech company, for example, went to great pains to present their company’s strategy and objectives at the annual executive off-site. But they also introduced 11 corporate priorities (which were different from the strategic objectives), a list of core competencies (including one with nine templates), a set of corporate values, and a dictionary of 21 new strategic terms to be mastered. Not surprisingly, the assembled managers were baffled about what mattered most. 

    When asked about obstacles to understanding the strategy, middle managers are four times more likely to cite a large number of corporate priorities and strategic initiatives than to mention a lack of clarity in communication. Top executives add to the confusion when they change their messages frequently—a problem flagged by nearly one-quarter of middle managers.

    Myth 4: A Performance Culture Drives Execution

    When their companies fail to translate strategy into results, many executives point to a weak performance culture as the root cause. The data tells a different story. It’s true that in most companies, the official culture—the core values posted on the company website, say—does not support execution. However, a company’s true values reveal themselves when managers make hard choices—and here we have found that a focus on performance does shape behavior on a day-to-day basis.

    Few choices are tougher than personnel decisions. When we ask about factors that influence who gets hired, praised, promoted, and fired, we see that most companies do a good job of recognizing and rewarding performance. Past performance is by far the most frequently named factor in promotion decisions, cited by two-thirds of all managers. Although harder to assess when bringing in new employees, it ranks among the top three influences on who gets hired. 

    One-third of managers believe that performance is also recognized all or most of the time with nonfinancial rewards, such as private praise, public acknowledgment, and access to training opportunities. To be sure, there is room for improvement, particularly when it comes to dealing with underperformers: A majority of the companies we have studied delay action (33%), address underperformance inconsistently (34%), or tolerate poor performance (11%). Overall, though, the companies in our sample have robust performance cultures—and yet they struggle to execute strategy. Why?

    The answer is that a culture that supports execution must recognize and reward other things as well, such as agility, teamwork, and ambition. Many companies fall short in this respect. When making hiring or promotion decisions, for example, they place much less value on a manager’s ability to adapt to changing circumstances—an indication of the agility needed to execute strategy—than on whether she has hit her numbers in the past. Agility requires a willingness to experiment, and many managers avoid experimentation because they fear the consequences of failure.

     Half the managers we have surveyed believe that their careers would suffer if they pursued but failed at novel opportunities or innovations. Trying new things inevitably entails setbacks, and honestly discussing the challenges involved increases the odds of long-term success. But corporate cultures rarely support the candid discussions necessary for agility. Fewer than one-third of managers say they can have open and honest discussions about the most difficult issues, while one-third say that many important issues are considered taboo.

    An excessive emphasis on performance can impair execution in another subtle but important way. If managers believe that hitting their numbers trumps all else, they tend to make conservative performance commitments. When asked what advice they would give to a new colleague, two-thirds say they would recommend making commitments that the colleague could be sure to meet; fewer than one-third would recommend stretching for ambitious goals. This tendency to play it safe may lead managers to favor surefire cost reductions over risky growth, for instance, or to milk an existing business rather than experiment with a new business model.

    The most pressing problem with many corporate cultures, however, is that they fail to foster the coordination that, as we’ve discussed, is essential to execution. Companies consistently get this wrong. When it comes to hires, promotions, and nonfinancial recognition, past performance is two or three times more likely than a track record of collaboration to be rewarded. Performance is critical, of course, but if it comes at the expense of coordination, it can undermine execution. We ask respondents what would happen to a manager in their organization who achieved his objectives but failed to collaborate with colleagues in other units. Only 20% believe the behavior would be addressed promptly; 60% believe it would be addressed inconsistently or after a delay, and 20% believe it would be tolerated.

    Myth 5: Execution Should Be Driven from the Top

    In his best-selling book Execution, Larry Bossidy describes how, as the CEO of AlliedSignal, he personally negotiated performance objectives with managers several levels below him and monitored their progress. Accounts like this reinforce the common image of a heroic CEO perched atop the org chart, driving execution. That approach can work—for a while. AlliedSignal’s stock outperformed the market under Bossidy’s leadership. However, as Bossidy writes, shortly after he retired “the discipline of execution…unraveled,” and the company gave up its gains relative to the S&P 500.

    Top-down execution has drawbacks in addition to the risk of unraveling after the departure of a strong CEO. To understand why, it helps to remember that effective execution in large, complex organizations emerges from countless decisions and actions at all levels. Many of those involve hard trade-offs: For example, synching up with colleagues in another unit can slow down a team that’s trying to seize a fleeting opportunity, and screening customer requests against strategy often means turning away lucrative business. The leaders who are closest to the situation and can respond most quickly are best positioned to make the tough calls.

    Concentrating power at the top may boost performance in the short term, but it degrades an organization’s capacity to execute over the long run. Frequent and direct intervention from on high encourages middle managers to escalate conflicts rather than resolve them, and over time they lose the ability to work things out with colleagues in other units. Moreover, if top executives insist on making the important calls themselves, they diminish middle managers’ decision-making skills, initiative, and ownership of results.

    In large, complex organizations, execution lives and dies with a group we call “distributed leaders,” which includes not only middle managers who run critical businesses and functions but also technical and domain experts who occupy key spots in the informal networks that get things done. The vast majority of these leaders try to do the right thing. Eight out of 10 in our sample say they are committed to doing their best to execute the strategy, even when they would like more clarity on what the strategy is.

    Distributed leaders, not senior executives, represent “management” to most employees, partners, and customers. Their day-to-day actions, particularly how they handle difficult decisions and what behaviors they tolerate, go a long way toward supporting or undermining the corporate culture. In this regard, most distributed leaders shine. As assessed by their direct reports, more than 90% of middle managers live up to the organization’s values all or most of the time. They do an especially good job of reinforcing performance, with nearly nine in 10 consistently holding team members accountable for results.

    But although execution should be driven from the middle, it needs to be guided from the top. And our data suggests that many top executive teams could provide much more support. Distributed leaders are hamstrung in their efforts to translate overall company strategy into terms meaningful for their teams or units when top executives fail to ensure that they clearly understand that strategy. And as we’ve seen, such failure is not the exception but the rule.

    Conflicts inevitably arise in any organization where different units pursue their own objectives. Distributed leaders are asked to shoulder much of the burden of working across silos, and many appear to be buckling under the load. A minority of middle managers consistently anticipate and avoid problems (15%) or resolve conflicts quickly and well (26%). Most resolve issues only after a significant delay (37%), try but fail to resolve them (10%), or don’t address them at all (12%). Top executives could help by adding structured processes to facilitate coordination. In many cases they could also do a better job of modeling teamwork. One-third of distributed leaders believe that factions exist within the C-suite and that executives there focus on their own agendas rather than on what is best for the company.

    Many executives try to solve the problem of execution by reducing it to a single dimension. They focus on tightening alignment up and down the chain of command—by improving existing processes, such as strategic planning and performance management, or adopting new tools, such as the balanced scorecard. These are useful measures, to be sure, but relying on them as the sole means of driving execution ignores the need for coordination and agility in volatile markets. If managers focus too narrowly on improving alignment, they risk developing ever more refined answers to the wrong question.

    In the worst cases, companies slip into a dynamic we call the alignment trap. When execution stalls, managers respond by tightening the screws on alignment—tracking more performance metrics, for example, or demanding more-frequent meetings to monitor progress and recommend what to do. This kind of top-down scrutiny often deteriorates into micromanagement, which stifles the experimentation required for agility and the peer-to-peer interactions that drive coordination. Seeing execution suffer but not knowing why, managers turn once more to the tool they know best and further tighten alignment. The end result: Companies are trapped in a downward spiral in which more alignment leads to worse results.

    If common beliefs about execution are incomplete at best and dangerous at worst, what should take their place? The starting point is a fundamental redefinition of execution as the ability to seize opportunities aligned with strategy while coordinating with other parts of the organization on an ongoing basis. Reframing execution in those terms can help managers pinpoint why it is stalling. Armed with a more comprehensive understanding, they can avoid pitfalls such as the alignment trap and focus on the factors that matter most for translating strategy into results.


    How Leaders Value Quality of Life in the Organization

    This study was quite successful in terms of response rate of participants. What does this say about how leaders view Quality of Life?
    Isabelle Panhard: It was challenging to reach out exclusively to top-level leaders – in the corporate segment this meant reaching out to the C-suites of companies with more than 1,000 employees. But the interest leaders took in the topic of Quality of Life helped us succeed. A key mark of interest is that 82 percent of leaders asked to receive the survey results – usually this rate is around 50 percent in surveys we conduct.
    Thomas Jelley: Quality of Life tends to be one of those subjects on which everyone has a point of view. We asked leaders to consider Quality of Life – not at large, but in the specific context of their organization. By guiding the conversation in this way, we didn’t get stuck in subjective or abstract notions. On the contrary, we went through quite concrete measures and I think that this approach really helped leaders to make the link between Quality of Life and performance as they were interviewed.
    What else was unique about the approach of this study?
    I.P.: For me, our approach by country and sector produced surprising results. For example, leaders in Brazil and India were more concerned about Quality of Life in their organizations than their counterparts in the UK or France. So we see that it is more a question of cultural differences than of a country’s level of development. But what I found even more interesting is the difference between the three environments that we surveyed. We can see that university leaders – and, above all, healthcare leaders – are particularly engaged in Quality of Life and more convinced of its importance. Specifically, 77 percent of healthcare and 75 percent of university leaders viewed Quality of Life as a strategic investment, compared to 56 percent of corporate leaders.
    Many leaders identified social interaction as a key component of Quality of Life. Can you explain why?
    T.J.: It’s very consistent with what we’ve been seeing at the Institute. In this dynamic environment, where the work is less about place and time – as technology allows us to work any place and any time – employees are facing important questions about organizational culture. How can we create a sense of belonging? How can we make sure that we still have the glue that brings people together around a common purpose? Technology affords us many ways of interaction but there is still nothing quite like face-to-face interaction.
    I.P.: 74 percent of leaders declared they implement social interaction initiatives in their organizations. For them, this appears to be a key dimension. Office coffee breaks or overnight accommodations for families in hospitals – these factors help to strengthen bonds between individuals and contribute to their Quality of Life.
    Do any of these findings call for further investigation?
    T.J.: This gives us new areas to look into further in the future.
    I.P.: We found that there is often a gap between the strong conviction of the importance of Quality of Life and the current internal structure in organizations. Many leaders told us that there is no dedicated budget, function or program. In other words, they are convinced but they haven’t structured anything yet. But things are progressing; I think that in 10 years, there will be a Quality of Life director in every organization.


    Stop Thinking Long Term. Execute Strategy 90 Days at a Time.

    Most savvy executives fully understand the value and necessity of strategic thinking and planning. After all, as the saying goes, “Without a map, every direction looks good.”
    There are many well-known planning tools for businesses to use as guides, such as Jim Collins and Jerry Porras’ Big Hairy Audacious Goal, a blueprint for helping enterprises home in on an objective, or Michael Porter’s Five Forces analysis, which offers a model for companies to determine long-term viability by analyzing competitive advantage.
    But while planning tools are very appropriate for defining the current state of a company as well as its desired future state, they rarely include a process for getting there.
    Envisioning where your company will be in the future is important but the companies that are really great at executing their long-term vision do it 90 days at a time, focusing on bite-size pieces of progress that everyone in the company can understand and work toward collectively.
    I have met with leaders at companies whose planning involves one-year goal cycles, often with 40 to 60 initiatives. Usually, they're not very satisfied with the progress they have made. When I ask them how many of those one-year initiatives they accomplished, the answer ends up being six to 10. It’s almost always a problem of too many oversized goals with too little ownership.
    The huge volume of goals that most executives attempt to undertake amazes me. I met a senior executive recently who was complaining about how many things he had to get done. When I asked him the number of items he had on his “to-do” list, he opened his smartphone, checked something and replied “54.”
    Who in the world could ever mentally, or in practice, deal with that many priorities? The same thing applies to companies setting goals.
    Here’s what to do. Routinely review the line items in your company's strategy plan or to-do list and prioritize the three to five that will have the greatest impact for the next 90 days. Identify and then work on the tactical bite-size pieces that have specific deliverables that can be measured.
    If you set one- to three-year goals and then every quarter identify and prioritize the most significant three to five tactical actions, two things typically happen: Members of the leadership team start aligning themselves around the highest priorities, and the company actually begins to achieve the goals that are most critical for the strategic plan.
    An added benefit of setting 90-day bite-size goals is that it's easier to communicate to employees what the company’s most important objectives are.
    Now, don’t expect every employee to be able to recite the strategic plan because that isn’t realistic.
    But if you tell your people what's important for the company to accomplish in the next 90 days, you can ask every employee to answer one question: “What can you do in your job in the next 90 days to support the company’s goals?”
     Getting your people aligned around clear, understandable, short-term goals is what great execution of a strategy is all about.
    So, while having a strategic plan is critical, leaders of great companies know that coming up with a bunch of one-year initiatives is a waste of time, especially if there isn’t a process articulated for achieving those goals. 
    Does your company have 90-day goals that employees are aware of and that the leadership team will focus on? Do employees have individual tasks to work on in support of these goals?


    Why Our Brains Like Short-Term Goals

    In her book, The Entrepreneurial Instinct: How Everyone Has the Innate Ability to Start a Successful Business, author Monica Mehta explores the role of brain chemistry in entrepreneurship. In this excerpt, she details goal setting.
    Achieving your goals isn’t just about hard work and discipline. It’s about physiology. By understanding how the brain processes success and failure, you can jump-start your productivity to create a winning streak and put an end to failed New Year's resolutions.
    The more times you succeed at something, the longer your brain stores the information that allowed you to do so well in the first place. That’s because with each success, our brain releases a chemical called dopamine. When dopamine flows into the brain's reward pathway (the part responsible for pleasure, learning and motivation), we not only feel greater concentration but are inspired to re-experience the activity that caused the chemical release in the first place.
    This is why the cultivation of small wins can propel you to bigger success, and why you should focus on setting just a few small, achievable goals. While your ambitions can remain grand, setting the bar too high with goals can actually be counterproductive. Each time we fail, the brain is drained of dopamine, making it not only hard to concentrate but also difficult to learn from what went wrong.

    Why We Learn More From Success Than Failure
    Ever find yourself destined to repeat the same mistakes over and over again? According to a study completed by researchers at MIT’s Picower Institute for Learning and Memory, that is exactly how our brains are wired to work. Their findings determined that our brain cells only learn from experience when we do things right and failure doesn’t register the same way.
    In the experiment, monkeys viewed two images on a computer screen, one that presented a reward if the subject reacted by looking right, another when it looked left. The study showed that the brain response when a monkey received a reward for looking the right way improved its chances of performing well on the next trial.
    The study makes important discoveries not only about the way we learn but also about the brain’s neural plasticity, or its ability to change in response to experiences. When behavior is successful, the brain cells become finely tuned to what the animal was learning at the time, while a failure shows little change in the brain or improvement in the monkey’s behavior.
    Set Goals Your Brain Likes
    Collecting wins, no matter how small, can chemically wire you to move mountains by causing a repeated release of dopamine. But to get going you have to land those first few successes. The key to creating your own cycle of productivity is to set a grand vision and work your way there with a few, achievable goals that increase your likelihood of experiencing a positive outcome.
    “Your vision is your destination, and small, manageable goals are the motor that will get you there,” says Dr. Frank Murtha, a New York-based counseling psychologist with a focus on investor psychology, behavioral finance and financial risk taking. “Without the vision you’re on a road to nowhere. Without the goals, you have a destination but no motor. They work in tandem, and you need both.”
    Create a Road Map for Your Subconscious Mind
    Kick off goal setting by preparing a short vision statement of where you want to go. “Vision creates a picture for the subconscious mind. Our subconscious is what makes us such good problem solvers compared to a computer,” says Dr. Richard Peterson, a psychiatrist and neuroeconomics researcher who has written two books on financial risk taking. “We can see 1,000 dimensions of a problem and sort it down to the most important very quickly.”
    The subconscious is not only responsible for 90 percent of the decisions we make in day-to-day life, but is also the part of the brain that is largely in charge when we are performing creative tasks or charting unknown territory. The very act of giving your emotional brain a detailed portrait of your end goal also ensures that, even inadvertently, you will take the steps needed to steer yourself toward it.
    Articulate your vision with words and a picture or two; the more detailed the better. Post this where you can see it regularly.
    Work Your Way There With Short-Term Goals
    To rack up those first few wins, you’ve got to set only a few short-term goals at a time. Each should ideally take no more than three months to achieve. The goals should be realistic and specific, and incorporate your strengths. Writing them down, ideally in a place where you will see them every day, will help you stay focused.
    If success triggers the release of dopamine, failure can do the opposite. Setting overreaching goals, or too many goals at once, can be counterproductive for those seeking to harness the power of the brain’s reward center. If you set four goals and achieve only two of them, it’s human nature to focus on what went wrong; even the successes you were able to accomplish fail to drown out what you weren’t able to achieve.
    Remember, success begets success.


    Twelve Common Strategy Execution Mistakes - and What You Can Do to Avoid Them

    The Balanced Scorecard is among the most widely used management systems today. As with any framework or tool, its popularity is a double-edged sword: as more and more organizations implement the Balanced Scorecard, more and more will screw it up.

    When done well, the Balanced Scorecard can be a game-changing management tool; when done poorly, it is quickly sidelined and becomes just another flavor of the week. Perhaps every unhappy family is unhappy in its own way (our apologies to Tolstoy), but unhappy organizations tend to share a common profile – at least where strategy execution is concerned. These twelve mistakes are the most common culprits. 

    1. Delegating too low 

    The Balanced Scorecard is a deceptively simple concept, and as a result, responsibility for its implementation often falls to a relatively junior resource. This faulty assumption – that a straightforward framework makes for a straightforward process – leads to a host of problems. As a rule, the success of this process relies on the explicit, not merely implicit, support of senior leadership. 

    Though junior resources can and usually should perform much of the legwork, the spokesperson for implementation ought to be someone whose position in the organization commands respect, which in turn sets the expectation that the Balanced Scorecard effort should be taken seriously. Junior resources typically lack the experience and institutional credibility necessary to guide the discussions that will ultimately shape the organization’s objectives. A more experienced employee with the requisite insight and social capital will have far greater success in eliciting insightful opinions and establishing buy-in from the larger organization.

    Organizations can most effectively utilize junior resources in a supportive role, but with ultimate responsibility for the success of the project assigned to a senior leader. The junior resource can do most of the work involved in creating deliverables, but always with the understanding that they are acting on the behalf of the senior leader, who in turn will be the public face of the implementation and will step in to aid the junior resource in areas where they lack expertise. The junior resource gets a tremendous opportunity to increase their visibility within the organization, and the senior leader is not tasked with time-intensive aspects of the implementation process. 

    2. Ignoring political realities 

    Even the most easy-going organizations are not free of office politics, and to pretend otherwise is to be willfully naïve. Especially at the outset, the implementation of the Balanced Scorecard can bring these politics to the forefront. Politically savvy organizations will recognize the Balanced Scorecard as a neutral ground that encourages transparency and gives voice to the entire organization, but without careful consideration it can easily devolve into a new arena on which to fight the same old battles. 

    Most organizations begin their implementation process with a series of interviews and workshops to build their Strategy Map and Balanced Scorecard. The most successful organizations will take particular care in selecting their facilitator. This person needs to be impartial – and just as importantly, they need to be perceived as impartial by the participants in the interviews and workshops. When selecting the facilitator, leaders should ask themselves what agenda he or she may bring to the table (or even if they do not, what agenda others may think they bring) and what tensions could arise because of it. 

    To avoid potential complications, many organizations use an external consultant at this stage of the process. A skilled facilitator will take care to elicit all viewpoints, not just the ones that come from the loudest voices. They will also use appropriate techniques to reach decisions in a collaborative way and mitigate sources of tension. When using an external consultant instead of an internal resource, make sure that they have done their homework and understand the politics that underlie the conversation.

     3. Going overboard with measures 

    What gets measured gets done – so the more measures, the better, right? 

    Not really. Strategic measures – those presented as part of a Balanced Scorecard – are intended to paint a high-level picture of the progress of a particular objective. Measures that contribute to a more nuanced, granular picture, while important, do not belong on the scorecard, where they only serve to obfuscate the vital information that will be used to lead discussions during strategy reviews. 

    A glut of poorly curated information is nearly as useless as not enough information. Smart organizations choose one or possibly two measures per objective that are indicative of the health of that objective. The point of these measures is to capture a trend over time in a way that is immediately apparent. Conclusions drawn from additional measures should be reflected in the objective’s performance analysis, and the additional measures should be publicly available, but the scorecard itself should remain clean and uncluttered. 

    Organizations rarely choose the optimal measures at the outset of the scorecard implementation. The most successful organizations revisit their strategic measures periodically to ensure that the intent – taking the temperature of an objective, so to speak – is being upheld. If the measure points to a different conclusion than the performance analysis, it must be reconsidered.

    4. Failing to house data centrally

    Done well, the Balanced Scorecard promotes transparency across even large, complex organizations by ensuring that there is one version of the truth and that it is accessible throughout the organization. Done poorly, the large amount of data that contributes to a mature scorecard (or, more likely, series of cascaded scorecards) is a major headache. 

    Successful organizations make information management a priority. Balanced Scorecard software offers a simple solution for housing all data in one place, but organizations that cannot make the investment will often manage their scorecards using Excel or even PowerPoint. Whatever the management system, it is vital that the information live in a single document. By allowing information to reside in numerous pockets scattered here and there throughout the organization – a particular concern when using document types that can be saved to local hard drives – organizations run the risk of version control issues. 

    In the best case scenario, organizations will integrate their information management systems directly with their scorecard management. Savvy leaders recognize that employee attitudes towards the strategy management system can make or break its long-term success. By housing data in a central, accessible location, leaders demonstrate transparency and honesty. “De-mystifying” the scorecard by making the information viewable to anyone who is interested helps to break down potential sources of resistance.

    5. Allowing the Strategy Map to become just a piece of artwork 

    The process of creating the Strategy Map is in itself incredibly beneficial to an organization. By not only articulating what the organization plans to achieve but also breaking that plan into its underlying components, organizations by default will refine their priorities and improve their focus. That said, this process is the tip of the iceberg – necessary, but certainly not sufficient. Because creating the Strategy Map is reasonably time-intensive (not to mention debate-intensive), the effort involved can leave organizations with a false sense of accomplishment. 

    After it has been finished, the Strategy Map is hung on the wall and all too often allowed to become a piece of artwork: nice to look at, but of aesthetic value only. Leadership teams can be unwittingly blind to this phenomenon, since, having undergone the effort of creating the Strategy Map in the first place, they are predisposed to see it as a more important piece of work than their employees do. A simple test is to ask a mid-level employee to describe how their work fits into the Strategy Map without any prior preparation. If they cannot, the leadership team has more work to do. 

    An organization cannot overestimate the importance of communicating and socializing the Balanced Scorecard framework in general and the Strategy Map in particular. The communication truism “seven times in seven ways” is particularly apropos here. Hanging the Strategy Map on the wall is simply not enough – instead, organizations should seize the opportunity to exercise their creativity and find memorable ways to make the Strategy Map highly visible. 

    The ultimate goal is not simply to familiarize employees with the contents of the map but to encourage its use as a tool to guide informed, intelligent business decisions. The most successful organizations will see everyone from senior leaders to front-line managers referring regularly to the Strategy Map in their day-to-day business.

    6. Confusing operations with strategy

    Operations protect value – these are the things that need to be done in order to keep business running as usual. Strategy creates value – these are the things that need to be done differently to reach the desired future state. The line between the two is not always clear-cut (for example, if an organization is on a downward trajectory such that continuing with business as usual actually destroys value, then making operational improvements to halt that trajectory would actually be a strategic initiative), but nevertheless it is an important distinction to make.

    Because operational concerns have near-term consequences, they have a tendency to creep into discussions of strategy. Organizations find themselves mired in operational details at the expense of the long-term strategy. While leaders do typically distinguish between operational and strategic review meetings, they often have a harder time adhering to a purely strategic agenda during strategy reviews when operational concerns are simply too pressing. It is incumbent upon the leader of the meeting to set clear guidelines for strategy review meetings and to nip operational conversations in the bud. 

    This separation between the operational and the strategic is not to say that operations are unimportant – to the contrary, they are vital to the success of the organization – but rather that, left unchecked, they will eat up the time needed to review and refine strategy. By carving out time to deal exclusively with longer-term concerns, organizations guarantee that their strategy management system sustains its momentum. 

    7. Failing to optimize strategy review meetings 

    Much to the chagrin of executives everywhere, time is a finite resource. Wasting it in an interminable strategy review meeting is a sure-fire way to make the Balanced Scorecard the object of resentment. As with any meeting, time will be best spent if the agenda is set in advance and participants come to the meeting prepared. The Balanced Scorecard core team should prepare reading material well in advance of the meeting that includes the latest measure data and performance analyses for each objective. 

    In addition, the meeting facilitator should circulate the agenda ahead of time and, both before and during the meeting, ensure that the conversation focuses on the issues raised by the data, not on reviewing the data itself. The most obvious agenda – addressing each objective beginning at the top of the Strategy Map and working down to the bottom – is not necessarily the best use of time. Consider beginning with the bottom of the map.

    The cause-and-effect structure of the Strategy Map means that the lowest objectives tend to be the most complex, have the greatest impact, and elicit the most debate. Relegating the thornier topics of conversation to the end of the meeting means that there is rarely time to discuss them in full. Furthermore, organizations should resist the urge to go through every objective one by one, even though every objective should be tracked and updated. As the strategy review process becomes more mature, organizations can optimize the meeting by budgeting time for only those topics that require a deep dive and merely skimming the surface of the rest. 

    8. Assessing performance with rose-tinted glasses 

    It is human nature to react to measurement – in particular, measurement of one’s own performance – by trying to put a positive spin on it. It is not uncommon to find, at the outset of the implementation process, that the scorecard is covered with green and yellow indicators. Leaders need to maintain a robust relationship with reality and look critically at a positive assessment right out of the gate: if this assessment is accurate, why do we not already see the outcomes we are trying to attain? 

    As tempting as it is to assign blame to the mid-level employees who provided the falsely positive assessments in the first place, the blame lies with leaders who fail to manage change. Especially when the Balanced Scorecard is first being introduced, leaders need to do everything in their power to explain not just what is being measured but to what end. They need to actively solicit honest assessments, even if they are not positive. If initially the scorecard is mostly red, leaders ought not to feel dismayed – how else can they pinpoint what needs to change?

     Ultimately, leaders need to create an environment in which their employees see measurement as a vehicle for improvement, not for blame. The manner in which this change takes place is largely situational and will be dictated by organizational culture, but leaders across the board will benefit by practicing what they preach. By looking at “red” measures as the first step in uncovering a problem and publicly celebrating upward trends even when the goal has yet to be reached, leaders incentivize honesty instead of false positivity.

    9. Underestimating the importance of communication 

    Think you haven’t communicated enough? You haven’t. Do it some more. At the outset of the Balanced Scorecard implementation, organizations tend to make communication a higher priority. In the excitement of doing something new, sharing information with the organization as a whole and educating them on the new system simply makes sense. As that excitement wanes and the Balanced Scorecard settles into the normal routine, it can slip quietly from the organization’s radar.

    Alternatively, organizations hold off on communication until the message is just right – but it never is. The most successful organizations will partner their Balanced Scorecard team with their marketing and communications department to find creative ways to keep the organization as a whole informed and engaged, both initially and once the Balanced Scorecard effort is well underway. Making an investment in communication is anything but a frivolous expense, even when the delivery method – think mascots, comic strips, and good-natured competition – might veer into silliness. 

    Making sure the entire organization stays informed displays trust and openness, giving employees a reason to buy in to a program and feel that their input matters. Keeping the strategy top-of-mind across the organization, not just the leadership team, yields insights from surprising places. Further, celebrating the progress made by a given department or team motivates the rest of the organization and spurs friendly competition. 

    10. Neglecting education and training 

    The Balanced Scorecard is a deceptively straightforward concept. “It’s simple, just do it!” is a recipe for disaster. For all that it is an easy concept to grasp, putting the Balanced Scorecard into play is a complex process. Organizations need to be realistic in assessing the readiness of their staff to implement the program and commit to filling gaps in expertise. 

    An upfront investment in preparing the key players in the implementation process helps organizations avoid wasted time and energy. Further, organizations ought to assess not just the readiness of the team as a whole but the knowledge gaps of each individual. For example, the Balanced Scorecard sponsor needs to grasp the process as a whole and have a working knowledge of best practices, but the administrative details can be left to a more junior team member. 

    That team member, on the other hand, needs to have a deep understanding of the mechanics of the scorecard and be prepared to manage the minutiae of an intricate system. Addressing these potential knowledge gaps at the outset rather than scrambling to play catchup down the road will save organizations time and money. 

    11. Surrounding yourself with the same old folks

    Organizations that keep discussions of strategy to the rarefied few – usually the executive leadership team alone – are missing out. One of the tremendous benefits of the Balanced Scorecard is the opportunity it provides for cross-functional discussions that break down traditional silos and bring forth voices that might otherwise not be heard. 

    The smartest organizations will embrace the dissonant voice and the unusual opinion and welcome their input, not disregard it for contradicting inherited wisdom. The Strategy Map development process usually begins with a series of interviews that inform the first draft of the Strategy Map. While these interviews nearly always include the entire leadership team, organizations should ask who else could add a useful voice to the conversation, particularly one that is infrequently heard. 

    This voice could represent customer-facing employees, stakeholders, or support functions that do not have a seat at the executive table. Once the Strategy Map is finalized, organizations will often establish teams to manage each perspective or theme. Consider mixing teammates from across departments and functions. The disparate perspectives only lead to a richer, more informed dialogue.

    12. Failing to evolve 

    After the work involved in creating the Balanced Scorecard, it is tempting to hold it as gospel and resist making changes, but successful organizations know that it needs to be a living, changing document for it to be valuable. The Balanced Scorecard works best for managing a mid-range strategy, which for most organizations is between three and five years, depending on the rate of environmental change. A scorecard should always be created with the assumption that it will be dismantled and reengineered a few short years down the road. 

    Even within that three- to five-year range, the Balanced Scorecard ought not to be a static document. Throughout the strategy review process, organizations must ask themselves not only whether they are making progress towards their goals, but whether their underlying assumptions continue to hold true. Adjusting measures, targets, objectives, or even the structure of the Strategy Map in response to incorrect assumptions or a changing external environment is an expected component of the strategy review process. 

    Organizations can ensure that they take a critical look at the underlying components of their strategy by purposefully planning it into their governance cycle. A typical timeframe for a strategy refresh (as opposed to the rewriting of the strategy that takes place after three to five years) is once annually, though in a particularly fast-moving industry, it may be necessary to do so more often. Though the leadership team will make final determinations, suggestions for changes and improvement often come from the theme or perspective teams, and it is worth actively soliciting their input. 

    For smaller tweaks (adjusting measures, etc.), organizations should consider giving theme or perspective teams the autonomy to make the determination on their own rather than use time during strategy review meetings. Just as successful strategy management systems tend to share a set of best practices, failed or even just subpar implementations often come from the same set of poor behaviors and incorrect assumptions. 

    At the heart of most failed Balanced Scorecard implementations is the fallacy that a simple framework will lead to a simple process. Organizations just beginning the process ought not to be discouraged. Knowing what mistakes to look for allows you to cut them off at the pass. By preparing adequately – and above all, being willing to test and adapt when things are not going as expected – a successful scorecard implementation is well within reach. 



    How to build the perfect workplace

    The secret to attracting and holding onto the world’s best talent isn’t about the perks—it’s about relationships.

    The one thing absolutely everyone knows about working at Google is that you get free, gourmet-quality food all day long. Stuffed quail, lavender pecan cornbread, aloo gobi, fresh fruits and vegetables, Gruyère mac and cheese—just go get it. Many know also that Google provides free gyms, free massages, and generous parental leave, plus cash bonuses when a baby is born; dogs are welcome. 
    Beautiful offices are about to be upgraded in a spectacular planned new headquarters in Mountain View, Calif., the New York Times reported in late February (though details on the campus were sketchy at press time). 
    So when people see that Google is No. 1 on Fortune’s new ranking of America’s Best Companies to Work For—for the sixth time—they understandably figure the reason must be those incredible employee perks. But that isn’t why. Knockout perks aren’t the reason any company makes this list. The essence of a great workplace is just that: an essence, an indispensable quality that determines its character.
    Understanding that quality—understanding it well enough to build a corporate organization around it—has long been a goal of great companies. And it’s getting rapidly more valuable too. That’s because as the economy changes, employers who don’t know the secret will be at a deepening disadvantage to those who do.
    Which brings us back to those famous Google perks. The truth is, while the most sought-after talent doesn’t generally flock to a company because of certain benefits and giveaways (nice as they may be), the perks themselves can teach us about the company’s essence—why, that is, some employers are such super-powerful magnets for the world’s best employees year after year. Listen to what an ex-Googler said about Google’s nonstop free buffet: It “helps me build relationships with my colleagues.”
    Hold on—food helps build relationships? It does when it’s used right. Data-obsessed Google measures the length of the cafeteria lines to make sure people have to wait a while (optimally three to four minutes) and have time to talk. It makes people sit at long tables, where they’re likelier to be next to or across from someone they don’t know, and it puts those tables a little too close together so you might hit someone when you push your chair back and thus meet someone new—the Google bump, employees call it. And now we begin to see the real reason Google offers all that fantastic free fare: to make sure workers will come to the cafeterias, where they’ll start and strengthen personal relationships.
    That is, the food is just a tool for reaching a goal, and the goal is strong, numerous, rewarding personal relationships. Success obviously requires more than free food, but we’re glimpsing the explanation of workplace greatness. That same Googler said, “The best perk of working at Google is working at Google,” and the No. 1 reason he gave was the people: “We are surrounded by smart, driven people who provide the best environment for learning I’ve ever experienced.” (For more on the company’s people strategy, see “Google’s 10 Things to Transform Your Team and Your Workplace” in this issue.)
    Here’s the simple secret of every great place to work: It’s personal—not perkonal. It’s relationship-based, not transaction-based. Astoundingly, many employers still don’t get that, though it was the central insight of Robert Levering and Milton Moskowitz when they assembled the first 100 Best list in the early 1980s. (For their insights into this year’s ranking, see their introduction to the list.) 
    “The key to creating a great workplace,” they said, “was not a prescriptive set of employee benefits, programs, and practices, but the building of high-quality relationships in the workplace.” Reaching far deeper into people than corporate benefits and cool offices ever can, those relationships are why some workers love their employers and hate to leave and why job applicants will crawl over broken glass to work at those places.
    In the past, of course, plenty of non-great places to work have managed to succeed without mastering this understanding. And many, no doubt, will continue to thrive. But all evidence suggests that this track is about to get much harder. Big, deep structural changes in the economy are likely to boost the advantages that great employers already enjoy in the marketplace and penalize even more the companies that fall behind.
    It isn’t just because human capital is growing more valuable in every business. That trend has been going on for decades as ever fewer workers function as low-maintenance machines—turning a wrench in a factory, for example—and more become thinkers and creators. 
    The remarkable thing is that while most trends eventually peter out, this one just keeps going. Intangible assets, mostly derived from human capital, have rocketed from 17% of the S&P 500’s market value in 1975 to 84% in 2015, says the advisory firm Ocean Tomo. Even a manufacturer like Stryker gets 70% of its value from intangibles; it makes replacement knees, hips, and other joints loaded with intellectual capital.
    Companies will continue to gain a competitive advantage by attracting and keeping the most valuable workers, which is reason enough to become a great workplace. But interestingly, there’s a shift here as well—namely, in who is considered valuable. For decades—since Peter Drucker coined the term in the late 1950s—the MVPs were the so-called knowledge workers. But that term is no longer an apt description of the most prized personnel. The straightforward reason is that knowledge is becoming commoditized.
     Information, simple or complex, is instantly available online. Knowledge skills that must be learned—corporate finance, trigonometry, electrical engineering, coding—can be learned by anyone worldwide through online courses, many of them free. They can even be performed by a clever algorithm. Knowledge remains hugely important, but it’s gradually becoming less of a competitive advantage.
    As technology takes over more of the fact-based, rules-based, left-brain skills—knowledge-worker skills—employees who excel at human relationships are emerging as the new “it” men and women. More and more major employers are recognizing that they need workers who are good at team building, collaboration, and cultural sensitivity, according to global forecasting firm Oxford Economics. 
    Other research shows that the most effective teams are not those whose members boast the highest IQs, but rather those whose members are most sensitive to the thoughts and feelings of others. MIT professor Alex “Sandy” Pentland, a renowned data scientist who directs that institution’s Human Dynamics Laboratory, has aptly summed up the new reality: “It is not simply the brightest who have the best ideas; it is those who are best at harvesting them from others. It is not only the most determined who drive change; it is those who most fully engage with like-minded people. And it is not wealth or prestige that best motivates people; it is respect and help from peers.”
    Yup, these are the new corporate MVPs.
    Many companies will struggle with finding and luring these top workers—as well as employing them in ways that get the most out of their interpersonal skills. But the best companies to work for are, mostly, already there. Creating and building relationships is the essence of what they do. Consider SAS, the giant software firm that’s one of the few companies to appear on our 100 Best ranking every year since we started publishing it in 1998. The firm surveys employees annually on the state of their relationships: 
    Are they getting open communication and respect from fellow employees? Are they being treated like human beings? Or look at another regular on the 100 Best, the Wegmans supermarket chain. Here’s a typical employee comment: “Co-workers really care about each other on both a professional and personal level.” The perks at these companies are pretty darn good. But it’s the employees themselves that make them great places to work.
    You’ve realized by now that we’re talking about culture, the way people behave from moment to moment without being told. More employers are seeing the connection from culture and relationships to workplace greatness to business success. Deloitte’s latest annual survey of 3,300 executives in 106 countries found that, for the first time, top managers say culture is the most important issue they face, more important than leadership, workforce capability, performance management, or anything else. 
    “Culture” was Merriam-Webster’s 2014 word of the year. It’s everywhere. Yet as employers increasingly grasp its importance, they also realize they have no clue where to begin in creating the culture they need.
    Illustration by Tado for Fortune
    Let the 100 Best offer a few hints. They focus on four elements of culture that make the most difference:
    Mission. These companies are pursuing a larger purpose, and company leaders make sure no one forgets it. Whole Foods is improving customers’ health and well-being; USAA is supporting members of the U.S. military and their families; REI is helping people enjoy the outdoors sustainably. When employees are all pursuing a mission they believe in, relationships get stronger.
    Colleagues. Several of the 100 Best also appear on lists of companies where it’s hardest to get hired; 14 of them, including Twitter, St. Jude Children’s Research Hospital, and the Container Store, attract more than 100 applicants for every job opening. Those companies can hire the cream of the crop, creating a self-reinforcing cycle; the best people want to go where the best people are.
    Trust. We all know this: Show people that you consider them trustworthy, and they’ll generally prove you right. Many of the 100 Best let employees work whenever they want, and they work far more than if they were punching a clock. Riot Games, maker of League of Legends, even offers unlimited paid vacation; strong relationships prevent employees from abusing the policy.
    Caring. Every company says it values employees. The 100 Best don’t say it; they show it. This is where some of those celebrated perks do count. Google, for example, offers an employee benefit it has never publicized: If an employee dies, his or her spouse receives half the employee’s salary for a decade. No words could send as clear a message. A true culture of caring goes beyond perks and includes daily behavior—see Leigh Gallagher’s story on Marriott in this issue.
    And yes, in case you harbored doubts, the 100 Best really do outperform other companies as investments. Analysis of the publicly traded firms in the rankings from 1984 through 2009 by Wharton’s Alex Edmans found that a portfolio of 100 Best Companies exceeded its expected risk-adjusted return by 3.5% a year. 
    That’s what Wall Street calls alpha, and 3.5% annually over 25 years is a stupendous performance. The puzzle is how it’s possible. Why don’t investors realize that great places to work are also great investments, and bid up the stock price as soon as Fortune’s annual list is published, eliminating the subsequent outperformance? Edmans exhaustively investigated several hypotheses and concluded that investors just don’t get it—they simply don’t understand that great workplaces work better.
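    To see why 3.5% annually over 25 years is "stupendous," it helps to compound it. A quick back-of-the-envelope check (the 3.5% figure is from the Edmans analysis above; the arithmetic itself is just illustrative):

```python
# Cumulative effect of a 3.5% annual risk-adjusted excess return (alpha)
# sustained over 25 years, per the Edmans study cited above.
alpha = 0.035
years = 25

cumulative = (1 + alpha) ** years
print(f"{cumulative:.2f}x")  # roughly 2.36x the expected benchmark return
```

    In other words, a portfolio earning that alpha would have grown to well over twice what its risk profile alone would predict.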
    A corollary is that most employers don’t get it either. Why do they let the 100 Best clean their clocks year after year, when the secret is no secret at all? The answer is a mystery. We know 100 companies that hope the others never figure it out.


    ‘Bionic eye’ helps man see wife for the first time in 10 years

    Allen Zderad lost a career in science because of a degenerative eye disease. Now, science is allowing him to see his wife for the first time in 10 years.

    The 68-year-old former chemist from Minnesota recently became the recipient of a “bionic eye” implant, a chip with electrodes implanted in his retina that interacts with a camera in Zderad’s glasses. The camera and wearable computer pack send information to the electrodes, which then send the information on to the optic nerve.

    The result doesn’t allow Zderad to see any detail, but he can make out shapes and forms, allowing him to navigate places without a cane, according to the Mayo Clinic News Network. After an operation at the Rochester, Minn., clinic in January, Zderad had the camera placed over his eyes a couple of weeks later and reacted to those first sensations of sight with a mixture of excited laughter and joyful tears.

    He hugged his wife and the two held hands for a moment.

    “It’s crude but it’s significant,” Zderad tells his ophthalmologist, Dr. Raymond Iezzi, as he grabs his hand in a Mayo Clinic video of the event. “You know, it will work.”

    Iezzi was treating one of Zderad’s grandsons, who is in the early stages of the same eye disease, retinitis pigmentosa, for which there currently is no treatment or cure. Iezzi wanted to see his patient’s grandfather, looking for someone to be part of the first clinical trial in Minnesota involving the eye implant.
    The device, developed by Second Sight Inc. and implanted in only 14 other patients so far, creates “artificial vision” in the retina’s photoreceptors, which become damaged by the disease, Iezzi said.

    “These photoreceptors are like pixels in the eye,” Iezzi said. “The retina in this patient is healthy except for the photoreceptors and so what we’re trying to do is replace the function of the photoreceptors with the retinal prosthesis.”

    Zderad has weeks of therapy and adjustments ahead to adapt to the device, but he already can make out his wife in a group of people.

    “It’s easy,” Zderad said. “She’s the most beautiful one in the room.”



    How Big Data from Space helps  Life on Earth

    As an oceanographer and former NASA astronaut, I am particularly well placed to appreciate the perspectives space can give us on life on earth. My first glimpse of our blue planet stole my breath and has never let it go.
    I have been working to deepen our understanding of and appreciation for this planet since. Key to that understanding are the observational data – much of it from satellites – that feed our knowledge of this planet. Among other things, observations from satellites help us to understand our changing climate, predict hazardous weather and provide early warning of potential crop failures or freshwater shortages.
    The big data revolution could lead to currently unimagined uses for the data we receive from satellites. Entrepreneurs could come up with new applications and ideas for mashing up data. But the data itself should, I believe, be regarded as a public good. How to guarantee this, in a world where public budgets are squeezed and space exploration is becoming increasingly affordable for private players, is a question that deserves serious thought and active engagement.
    From fish in Peru to drought in Australia
    It is worth reflecting on the sobering fact that we are the first generation of humans that could even have this conversation. Just over four decades ago, nobody would even have thought to connect variations in the catch of Peruvian fisheries, say, with unseasonably dry spells in central Australia. It was only with the availability of snapshots from satellites in the 1970s that we could identify and begin to understand the phenomenon that linked them: El Niño.
    Since then our uses of data from space have become increasingly sophisticated. It is bordering on miraculous, for example, that we can have a reasonable degree of confidence in long-range weather forecasts. Weather patterns are so complex, chaos ought to overwhelm predictability once we look just a day or two ahead. But by analyzing patterns from thousands of different kinds of daily observations over the years, we have become better able to tease out the likeliest patterns.
    No single satellite can make all the observations necessary to compile a reliable weather forecast. Indeed, no single country’s satellites can do so. There has developed, therefore, a convention of data sharing among government-run space programmes to enable each country’s meteorological offices to access all the information they need to predict the weather.
    Data as a public good
    This is what I mean by regarding data as a public good. The ability to forecast hurricanes, typhoons, droughts and heatwaves is clearly of benefit to humanity as a whole, and the data on which it relies is deservedly regarded as part of the global commons.
    I believe we should take the same approach to all kinds of “environmental intelligence” represented by satellite data, in combination with sensors on the ground, whenever it has implications that transcend national borders – where populations’ lives and livelihoods are at stake. By analyzing the reflections of microwaves beamed at forests, for example, we can tell when their ecosystems are under stress; measurements of ocean temperatures help us to predict where fish will be; observations from space can warn about problems with soil conditions that could help the world to prepare for poor harvests.
    As technology advances, so does the capacity to generate actionable intelligence. In recent years, for instance, satellites have allowed us to map differences in gravity on the Earth’s surface so precisely that we can calculate how much groundwater is stored in aquifers – something never before possible. Given the potential of freshwater shortages to impact everything from food security to energy supplies and geopolitical tensions, it is clearly beneficial for this knowledge to be in the public domain.
    Kathy Sullivan
    “The price could be paid in human lives”
    The question of how to ensure space-based knowledge is used for the common good has become pressing with the dawning of a new space age, in which satellites have become affordable for private interests. At the same time, public finances in countries which have traditionally funded major space programmes have come under stress. Increasingly, there is pressure on governments to buy in data from private providers rather than fund satellite programmes themselves.
    At first glance, this makes sense. But some changes in the private sector’s role in space raise troubling hypotheticals. Imagine that a commodity trader, for example, monopolized data that enabled harvests to be predicted. A killing could be made on the futures markets – but the price could be paid in human lives, if exclusion from that data hindered public agencies from preparing for famine.
    As private satellites proliferate and the big data revolution advances, we need to debate public and private roles in space. One model to consider is the Monsanto-owned Climate Corporation. It takes publicly available data and adds value by analyzing it in ways that generate guidance individuals will pay for: when a farmer should irrigate a field, for example. The underlying public data remain freely available – even viewable on the free level of the company’s web service – and so continue to serve the general public via advance warning of severe drought or accurate forecasts of seasonal flooding.
    In the coming decades, new technologies and business models will radically expand the data available from satellites and the uses to which it can be put. Our challenge is to ensure that observations about our planet benefit everyone who lives on it.


    Where the Digital Economy Is Moving the Fastest

    The transition to a global digital economy in 2014 was sporadic – brisk in some countries, choppy in others. By year’s end, the seven biggest emerging markets were larger than the G7, in purchasing power parity terms. Plus, consumers in the Asia-Pacific region were expected to spend more online last year than consumers in North America. The opportunities to serve the e-consumer were growing – if you knew where to look.

    These changing rhythms in digital commerce are more than a China, or even an Asia, story. Far from Silicon Valley, Shanghai, or Singapore, a German company, Rocket Internet, has been busy launching e-commerce start-ups across a wide range of emerging and frontier markets. Their stated mission: To become the world’s largest internet platform outside the U.S. and China. Many such “Rocket” companies are poised to become the Alibabas and Amazons for the rest of the world: Jumia, which operates in nine countries across Africa; Namshi in the Middle East; Lazada and Zalora in ASEAN; Jabong in India; and Kaymu in 33 markets across Africa, Asia, Europe, and the Middle East.

    Private equity and venture capital money have been concentrating in certain markets in ways that mimic the electronic gold rush in Silicon Valley. During the summer of 2014 alone, $3 billion poured into India’s e-commerce sector, where, in addition to local innovators like Flipkart and Snapdeal, there are nearly 200 digital commerce startups flush with private investment and venture capital funds. This is happening in a country where online vendors largely operate on a cash-on-delivery (COD) basis. Credit cards or PayPal are rarely used; according to the Reserve Bank of India, 90% of all monetary transactions in India are in cash. Even Amazon localized its approach in India to offer COD as a service. India and other middle-income countries such as Indonesia and Colombia all have high cash dependence. But even where cash is still king, digital marketplaces are innovating at a remarkable pace. Nimble e-commerce players are simply working with and around the persistence of cash.

    To understand more about these types of changes around the world, we developed an “index” to identify how a group of countries stack up against each other in terms of readiness for a digital economy. Our Digital Evolution Index (DEI), created by the Fletcher School at Tufts University (with support from Mastercard and DataCash), is derived from four broad drivers: 

    • supply-side factors (including access, fulfillment, and transactions infrastructure);
    • demand-side factors (including consumer behaviors and trends, financial and Internet and social media savviness);
    • innovations (including the entrepreneurial, technological and funding ecosystems, presence and extent of disruptive forces and the presence of a start-up culture and mindset);
    • and institutions (including government effectiveness and its role in business, laws and regulations and promoting the digital ecosystem).

    The resulting index includes a ranking of 50 countries, which were chosen because they are either home to most of the current 3 billion internet users or they are where the next billion users are likely to come from.
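To make the four-driver structure concrete, here is a minimal sketch of how driver scores might be combined into a single country score. The equal weights and the sample numbers below are hypothetical; the DEI’s actual weighting and normalization are not described in this article.

```python
# Hypothetical composite-index sketch: four driver scores (0-100) are
# combined into one value. Equal weights are assumed for illustration.

DRIVERS = ["supply", "demand", "innovation", "institutions"]

def dei_score(driver_scores, weights=None):
    """Combine the four driver scores into a single index value."""
    if weights is None:
        weights = {d: 0.25 for d in DRIVERS}  # assumed equal weighting
    return sum(driver_scores[d] * weights[d] for d in DRIVERS)

# Two made-up country profiles (not the DEI's actual figures):
singapore = {"supply": 90, "demand": 85, "innovation": 80, "institutions": 88}
netherlands = {"supply": 88, "demand": 70, "innovation": 75, "institutions": 82}

print(dei_score(singapore))    # 85.75
print(dei_score(netherlands))  # 78.75
```

Changing the weights dictionary would let one driver (say, institutions) dominate the ranking, which is one way an index like this can encode policy priorities.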

    As part of our research, we wanted to understand who was changing quickly to prepare for the digital marketplace and who wasn’t. Perhaps not surprisingly, developing countries in Asia and Latin America are leading in momentum, reflecting their overall economic gains. But our analysis revealed other interesting patterns. Take, for example, Singapore and The Netherlands. Both are among the top 10 countries in present levels of digital evolution. But when we consider the momentum – i.e., the five-year rate of change from 2008 to 2013 – the two countries are far apart. Singapore has been steadily advancing in developing a world-class digital infrastructure, through public-private partnerships, to further entrench its status as a regional communications hub. 

    Through ongoing investment, it remains an attractive destination for start-ups and for private equity and venture capital. The Netherlands, meanwhile, has been rapidly losing steam. The Dutch government’s austerity measures beginning in late 2010 reduced investment into elements of the digital ecosystem. Its stagnant, and at times slipping, consumer demand led investors to seek greener pastures.

    Based on the performance of countries on the index during the years 2008 to 2013, we assigned them to one of four trajectory zones: Stand Out, Stall Out, Break Out, and Watch Out.

    • Stand Out countries have shown high levels of digital development in the past and continue to remain on an upward trajectory.
    • Stall Out countries have achieved a high level of evolution in the past but are losing momentum and risk falling behind.
    • Break Out countries have the potential to develop strong digital economies. Though their overall score is still low, they are moving upward and are poised to become Stand Out countries in the future.
    • Watch Out countries face significant opportunities and challenges, with low scores on both current level and upward motion of their DEI. Some may be able to overcome limitations with clever innovations and stopgap measures, while others seem to be stuck.
    Break Out countries such as India, China, Brazil, Vietnam, and the Philippines are improving their digital readiness quite rapidly. But the next phase of growth is harder to achieve. Staying on this trajectory means confronting challenges like improving supply infrastructure and nurturing sophisticated domestic consumers.

    Watch Out countries like Indonesia, Russia, Nigeria, Egypt, and Kenya have important things in common like institutional uncertainty and a low commitment to reform. They possess one or two outstanding qualities — predominantly demographics — that make them attractive to businesses and investors, but they expend a lot of energy innovating around institutional and infrastructural constraints. Unclogging these bottlenecks would let these countries direct their innovation resources to more productive uses.

    Most Western and Northern European countries, Australia, and Japan have been Stalling Out. The only way they can jump-start their recovery is to follow what Stand Out countries do best: redouble their efforts on innovation and continue to seek markets beyond domestic borders. Stall Out countries are also aging. Attracting talented, young immigrants can help revive innovation quickly.

    What does the future hold? The next billion consumers to come online will be making their digital decisions on a mobile device – very different from the practices of the first billion that helped build many of the foundations of the current e-commerce industry. There will continue to be strong cross-border influences as the competitive field evolves: even if Europe slows, a European company, such as Rocket Internet, can grow by targeting the fast-growing markets in the emerging world; giants out of the emerging world, such as Alibaba, with their newfound resources and brand, will look for markets elsewhere; old stalwarts, such as Amazon and Google, will seek growth in new markets and new product areas.

     Emerging economies will continue to evolve differently, as will their newly online consumers. Businesses will have to innovate by customizing their approaches to this multi-speed planet, and in working around institutional and infrastructural constraints, particularly in markets that are home to the next billion online consumers.

    We may be on a journey toward a digital planet — but we’re all traveling at different speeds.


    Data Scientist: The Sexiest Job of the 21st Century

    When Jonathan Goldman arrived for work in June 2006 at LinkedIn, the business networking site, the place still felt like a start-up. The company had just under 8 million accounts, and the number was growing quickly as existing members invited their friends and colleagues to join. But users weren’t seeking out connections with the people who were already on the site at the rate executives had expected. Something was apparently missing in the social experience. As one LinkedIn manager put it, “It was like arriving at a conference reception and realizing you don’t know anyone. So you just stand in the corner sipping your drink—and you probably leave early.”
    Goldman, a PhD in physics from Stanford, was intrigued by the linking he did see going on and by the richness of the user profiles. It all made for messy data and unwieldy analysis, but as he began exploring people’s connections, he started to see possibilities. He began forming theories, testing hunches, and finding patterns that allowed him to predict whose networks a given profile would land in. He could imagine that new features capitalizing on the heuristics he was developing might provide value to users. But LinkedIn’s engineering team, caught up in the challenges of scaling up the site, seemed uninterested. Some colleagues were openly dismissive of Goldman’s ideas. Why would users need LinkedIn to figure out their networks for them? The site already had an address book importer that could pull in all a member’s connections.
    Luckily, Reid Hoffman, LinkedIn’s cofounder and CEO at the time (now its executive chairman), had faith in the power of analytics because of his experiences at PayPal, and he had granted Goldman a high degree of autonomy. For one thing, he had given Goldman a way to circumvent the traditional product release cycle by publishing small modules in the form of ads on the site’s most popular pages.
    Through one such module, Goldman started to test what would happen if you presented users with names of people they hadn’t yet connected with but seemed likely to know—for example, people who had shared their tenures at schools and workplaces. He did this by ginning up a custom ad that displayed the three best new matches for each user based on the background entered in his or her LinkedIn profile. Within days it was obvious that something remarkable was taking place. The click-through rate on those ads was the highest ever seen. Goldman continued to refine how the suggestions were generated, incorporating networking ideas such as “triangle closing”—the notion that if you know Larry and Sue, there’s a good chance that Larry and Sue know each other. Goldman and his team also got the action required to respond to a suggestion down to one click.
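The triangle-closing heuristic lends itself to a short sketch: rank people you are not yet connected to by how many mutual connections you share. This is an illustrative reconstruction, not LinkedIn’s actual algorithm, and the toy network below is invented.

```python
from collections import Counter

def people_you_may_know(graph, user, top_n=3):
    """Triangle closing: rank non-connections by the number of mutual
    connections they share with `user`. `graph` maps each person to a
    set of direct connections."""
    candidates = Counter()
    for friend in graph[user]:
        for fof in graph[friend]:
            if fof != user and fof not in graph[user]:
                candidates[fof] += 1  # one more mutual connection
    return [name for name, _ in candidates.most_common(top_n)]

# Toy network: Larry and Sue both know Ann, so Ann is suggested first.
graph = {
    "you":   {"larry", "sue"},
    "larry": {"you", "sue", "ann"},
    "sue":   {"you", "larry", "ann", "bob"},
    "ann":   {"larry", "sue"},
    "bob":   {"sue"},
}
print(people_you_may_know(graph, "you"))  # ['ann', 'bob']
```

Goldman’s module only had to surface the top three such candidates per user, which is exactly what a ranking like this produces.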
    It didn’t take long for LinkedIn’s top managers to recognize a good idea and make it a standard feature. That’s when things really took off. “People You May Know” ads achieved a click-through rate 30% higher than the rate obtained by other prompts to visit more pages on the site. They generated millions of new page views. Thanks to this one feature, LinkedIn’s growth trajectory shifted significantly upward.

    A New Breed

    Goldman is a good example of a new key player in organizations: the “data scientist.” It’s a high-ranking professional with the training and curiosity to make discoveries in the world of big data. The title has been around for only a few years. (It was coined in 2008 by one of us, D.J. Patil, and Jeff Hammerbacher, then the respective leads of data and analytics efforts at LinkedIn and Facebook.) But thousands of data scientists are already working at both start-ups and well-established companies. Their sudden appearance on the business scene reflects the fact that companies are now wrestling with information that comes in varieties and volumes never encountered before. If your organization stores multiple petabytes of data, if the information most critical to your business resides in forms other than rows and columns of numbers, or if answering your biggest question would involve a “mashup” of several analytical efforts, you’ve got a big data opportunity.
    Much of the current enthusiasm for big data focuses on technologies that make taming it possible, including Hadoop (the most widely used framework for distributed file system processing) and related open-source tools, cloud computing, and data visualization. While those are important breakthroughs, at least as important are the people with the skill set (and the mind-set) to put them to good use. On this front, demand has raced ahead of supply. Indeed, the shortage of data scientists is becoming a serious constraint in some sectors. Greylock Partners, an early-stage venture firm that has backed companies such as Facebook, LinkedIn, Palo Alto Networks, and Workday, is worried enough about the tight labor pool that it has built its own specialized recruiting team to channel talent to businesses in its portfolio. “Once they have data,” says Dan Portillo, who leads that team, “they really need people who can manage it and find insights in it.”

    Who Are These People?

    If capitalizing on big data depends on hiring scarce data scientists, then the challenge for managers is to learn how to identify that talent, attract it to an enterprise, and make it productive. None of those tasks is as straightforward as it is with other, established organizational roles. Start with the fact that there are no university programs offering degrees in data science. There is also little consensus on where the role fits in an organization, how data scientists can add the most value, and how their performance should be measured.
    The first step in filling the need for data scientists, therefore, is to understand what they do in businesses. Then ask, What skills do they need? And what fields are those skills most readily found in?
    More than anything, what data scientists do is make discoveries while swimming in data. It’s their preferred method of navigating the world around them. At ease in the digital realm, they are able to bring structure to large quantities of formless data and make analysis possible. They identify rich data sources, join them with other, potentially incomplete data sources, and clean the resulting set. In a competitive landscape where challenges keep changing and data never stop flowing, data scientists help decision makers shift from ad hoc analysis to an ongoing conversation with data.
    Data scientists realize that they face technical limitations, but they don’t allow that to bog down their search for novel solutions. As they make discoveries, they communicate what they’ve learned and suggest its implications for new business directions. Often they are creative in displaying information visually and making the patterns they find clear and compelling. They advise executives and product managers on the implications of the data for products, processes, and decisions.
    Given the nascent state of their trade, it often falls to data scientists to fashion their own tools and even conduct academic-style research. Yahoo, one of the firms that employed a group of data scientists early on, was instrumental in developing Hadoop. Facebook’s data team created the language Hive for programming Hadoop projects. Many other data scientists, especially at data-driven companies such as Google, Amazon, Microsoft, Walmart, eBay, LinkedIn, and Twitter, have added to and refined the tool kit.
    What kind of person does all this? What abilities make a data scientist successful? Think of him or her as a hybrid of data hacker, analyst, communicator, and trusted adviser. The combination is extremely powerful—and rare.


    Something billions of times brighter than the Sun was detected

    The Sun’s mass is measured to be approximately 1.989E30 kg (about 2,000,000,000,000,000,000,000,000,000,000 kg), while Earth’s is 5.972E24 kg, so finding objects in space even a few times more massive than the Sun is remarkable. But an object detected by an international team of researchers was recently reported to be 12 billion times the mass of the Sun, according to CBS.
    The study was led by Xue-Bing Wu with the Kavli Institute of Astronomy and Astrophysics, and Peking University. The paper was submitted to the journal Nature, and published 25 February.
    First reported by Astronomy Now, it is considered the brightest quasar found in the early universe. Quasars are enormous black holes located at the centers of distant galaxies. They accrete matter from their surroundings and are known for releasing large amounts of gravitational energy.
    The phenomena are measured by the light they emit that reaches Earth, stretched out by the expanding universe. Known as SDSS J0100+2802, this quasar is believed to have formed 900 million years after the Big Bang and lies roughly 12.8 billion light-years from Earth.
    Only about 40 known quasars have a redshift above 6 (redshift is the displacement of spectral lines toward longer wavelengths, the red end of the spectrum, in radiation from distant galaxies), which marks the start of the early universe. This particular quasar is 420 trillion times brighter than the Sun.
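The wavelength stretch that defines redshift can be computed directly. The hydrogen Lyman-alpha wavelength used below is a standard reference value and is not taken from the article.

```python
def wavelength_stretch(z):
    """Light emitted at wavelength w arrives at Earth with wavelength
    (1 + z) * w, where z is the redshift."""
    return 1 + z

# At redshift 6 (the "early universe" threshold mentioned above),
# ultraviolet hydrogen Lyman-alpha light emitted at 121.6 nm arrives
# stretched far past the visible red end of the spectrum (~700 nm).
z = 6
emitted_nm = 121.6
observed_nm = emitted_nm * wavelength_stretch(z)
print(round(observed_nm, 1))  # 851.2
```

This is why such distant quasars are hunted in infrared surveys rather than in visible light.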


    How Big Data can drive patient behaviour change 

    When a patient is admitted to Khoo Teck Puat Hospital (KTPH) in northern Singapore, chances are the medical team already knows a thing or two about them.
    In a bid to manage growing demand for subsidised beds, the public hospital launched Singapore’s first community healthcare programme in 2011 with the Ageing in Place (AIP) pilot. AIP targeted patients with a history of three or more admissions over a six-month period.
    “What we discovered was that 20 percent of people admitted to the wards contributed to 80 percent of repeats and only 10 percent of the cases were actually health related – the majority were social issues,” said Dr Wong Sweet Fun, Director of the AIP Programme.
    Based on that insight, KTPH then tailored in-home healthcare plans that aimed to cut hospital admissions. Four hundred patients were placed under the programme and the average admission rate fell from 3.5 times in six months to 1.3.
    Hospitals such as KTPH are using data analytics to change the way people access healthcare services in the community and cut the number of occupied ward beds.
    “The preventative healthcare is a bid to move away from ‘sick care’,” said Bastari Irwan, Director of Alexandra Health System’s Population Health Programme. “The KTPH A&E has a capacity for 300 people and we had 400 plus daily. We’re trying to filter out people who should not be coming to A&E.”
    Healthcare providers look to big data to address Asia’s ageing problem
    Total healthcare spending in Asia was estimated at US$1.34 trillion in 2013 and is expected to grow at an annual rate of 10.5 percent to reach US$2.21 trillion in 2018. This is a result of rapidly ageing populations and the growing middle class, who seek access to already strained healthcare systems.
    Like the rest of the region, Singapore’s demographic is changing, pressuring the country’s public healthcare system. Figures from Moody’s show that Singapore is one of the countries to be hit hardest as growth in the working-age population slows from 48.1 percent in 2000-15 to 3.8 percent in 2015-30.
    Big data has benefited sectors from retail to banking, but healthcare has been slow to catch on. In part, this is because of regulatory constraints surrounding privacy and data protection. Reluctance among healthcare professionals to adopt a data-driven approach has also hampered progress.
    However, things are changing. Healthcare providers are looking for innovative ways to boost productivity, according to a PWC report, and analytics technology is supporting this. In 2014, 95 percent of healthcare CEOs said they were exploring better ways of using and managing big data.
    Diagnosing unhealthy communities with big data
    In Singapore, KTPH’s work on data analytics is based on the principles of the e-health system: that healthcare is better delivered when you understand who your patients are.
    Following the AIP pilot, the mining of patient information in the community extended to a larger programme. In 2013, they set out to answer a simple question: how healthy is north Singapore?
    “The team screened 4,000 people in north Singapore, aged 40 years and above, for conditions such as high cholesterol and diabetes,” said Dr Mike Wong, who worked with KTPH’s analytics team to develop a unique data dashboard designed for doctors and nurses.
    The team employed geospatial data technology to plot the patients on an area map. This highlighted problematic hotspots, where more unhealthy members of the community lived. For the majority of cases, they were farthest from the hospital.
    This allowed the team to implement proactive solutions such as community pop-up clinics and health and wellbeing talks in strategically positioned locations to reach those deemed at risk.
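The hotspot mapping described above can be sketched as a simple grid aggregation: bin each screened resident’s coordinates into a cell and flag cells where a large share of results are at-risk. The field names, cell size, and thresholds below are hypothetical, not KTPH’s actual method.

```python
from collections import defaultdict

def screening_hotspots(records, cell=0.01, min_rate=0.5, min_n=20):
    """Flag grid cells where at least min_n residents were screened
    and at least min_rate of them were found at risk.

    records: iterable of (lat, lon, at_risk) tuples."""
    cells = defaultdict(lambda: [0, 0])  # cell key -> [at_risk, screened]
    for lat, lon, at_risk in records:
        key = (round(lat / cell), round(lon / cell))
        cells[key][1] += 1
        if at_risk:
            cells[key][0] += 1
    return {key: risk / n for key, (risk, n) in cells.items()
            if n >= min_n and risk / n >= min_rate}

# Two invented neighbourhoods: one with 60% at-risk screens, one with 20%.
records = ([(1.38, 103.83, i < 15) for i in range(25)] +
           [(1.42, 103.83, i < 5) for i in range(25)])
print(screening_hotspots(records))  # one cell flagged, at rate 0.6
```

The flagged cells are exactly the places where pop-up clinics and wellbeing talks would be targeted.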
    “A lot of public screening is outsourced and people are then told to visit a doctor: it’s impersonal,” said Dr Wong.  “If you don’t go upstream and prevent these conditions you end up spending more and more on more hospitals, more staff for example.
    “Patient behaviour makes or breaks a solution. With this model we can teach people how to manage their own health.”
    Dr Wong added: “Up until 2008 all our data was on spread sheets and it took a long time to get results. With Excel it was very one dimensional, we could only sort one thing at a time. Now we can look at two or three variables, such as different conditions and age combinations or where people live.”
    Building robust data-driven healthcare services
    The hospital’s success in using advanced patient analytics can partially be attributed to Singapore’s existing e-health infrastructure. The national e-health record system, which allows each patient to have a single record accessible by any medical centre, was launched in 2009.
    Sharing information across healthcare providers can reduce costs for the patient. The basics such as drug allergy and treatment history aside, data can cut the need for repeat clinical tests and ensure ineffective drugs aren’t prescribed multiple times.
    But building a central database is just the starting point for healthcare analytics to prosper.
    According to Lee Chew Chiat, Executive Director, Consulting and Public Sector Industry Leader, Deloitte Southeast Asia, the potential for data mining technology to benefit the healthcare industry today extends much further. In addition to predicting profiles of frail or elderly patients who are likely to be re-admitted into hospitals, data also allow medical professionals to balance drug efficacy and cost; and determine the locality of disease - dengue fever, for example - for better control.
    “Having consistent basic information of a patient or a consumer is critical in healthcare. It is the foundation and since we have the foundation, Singapore can be a good test-bed,” said Lee.


    The Rise and Fall of Cognitive Skills

    Neuroscientists find that different parts of the brain work best at different ages.

    Scientists have long known that our ability to think quickly and recall information, also known as fluid intelligence, peaks around age 20 and then begins a slow decline. However, more recent findings, including a new study from neuroscientists at MIT and Massachusetts General Hospital (MGH), suggest that the real picture is much more complex.
    The study, which appears in the journal Psychological Science, finds that different components of fluid intelligence peak at different ages, some as late as age 40.
    “At any given age, you’re getting better at some things, you’re getting worse at some other things, and you’re at a plateau at some other things. There’s probably not one age at which you’re peak on most things, much less all of them,” says Joshua Hartshorne, a postdoc in MIT’s Department of Brain and Cognitive Sciences and one of the paper’s authors.
    “It paints a different picture of the way we change over the lifespan than psychology and neuroscience have traditionally painted,” adds Laura Germine, a postdoc in psychiatric and neurodevelopmental genetics at MGH and the paper’s other author.
    Measuring peaks
    Until now, it has been difficult to study how cognitive skills change over time because of the challenge of getting large numbers of people older than college students and younger than 65 to come to a psychology laboratory to participate in experiments. Hartshorne and Germine were able to take a broader look at aging and cognition because they have been running large-scale experiments on the Internet, where people of any age can become research subjects.
    Their websites feature cognitive tests designed to be completed in just a few minutes. Through these sites, the researchers have accumulated data from nearly 3 million people in the past several years.
    In 2011, Germine published a study showing that the ability to recognize faces improves until the early 30s before gradually starting to decline. This finding did not fit into the theory that fluid intelligence peaks in late adolescence. Around the same time, Hartshorne found that subjects’ performance on a visual short-term memory task also peaked in the early 30s.
    Intrigued by these results, the researchers, then graduate students at Harvard University, decided that they needed to explore a different source of data, in case some aspect of collecting data on the Internet was skewing the results. They dug out sets of data, collected decades ago, on adult performance at different ages on the Wechsler Adult Intelligence Scale, which is used to measure IQ, and the Wechsler Memory Scale. Together, these tests measure about 30 different subsets of intelligence, such as digit memorization, visual search, and assembling puzzles.
    Hartshorne and Germine developed a new way to analyze the data that allowed them to compare the age peaks for each task. “We were mapping when these cognitive abilities were peaking, and we saw there was no single peak for all abilities. The peaks were all over the place,” Hartshorne says. “This was the smoking gun.”
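A hedged sketch of that comparison: for each task, smooth the average score at each age with a moving window, then take the age with the highest smoothed value. The smoothing rule and the score curves below are invented for illustration and are not the paper’s actual analysis.

```python
def peak_age(scores_by_age, window=2):
    """Return the age with the highest smoothed score, where smoothing
    averages scores at ages within `window` years."""
    ages = sorted(scores_by_age)

    def smoothed(age):
        nearby = [scores_by_age[a] for a in ages if abs(a - age) <= window]
        return sum(nearby) / len(nearby)

    return max(ages, key=smoothed)

# Made-up task curves: processing speed peaks early, vocabulary late.
speed = {18: 95, 25: 90, 35: 85, 50: 78, 65: 70}
vocab = {18: 60, 25: 70, 35: 80, 50: 88, 65: 92}
print(peak_age(speed), peak_age(vocab))  # 18 65
```

Running this per task over the Wechsler subtests is the kind of analysis that would make the scattered peaks, the study’s “smoking gun,” visible.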
    However, the dataset was not as large as the researchers would have liked, so they decided to test several of the same cognitive skills with their larger pools of Internet study participants. For the Internet study, the researchers chose four tasks that peaked at different ages, based on the data from the Wechsler tests. They also included a test of the ability to perceive others’ emotional state, which is not measured by the Wechsler tests.
    The researchers gathered data from nearly 50,000 subjects and found a very clear picture showing that each cognitive skill they were testing peaked at a different age. For example, raw speed in processing information appears to peak around age 18 or 19, then immediately starts to decline. Meanwhile, short-term memory continues to improve until around age 25, when it levels off and then begins to drop around age 35.
    For the ability to evaluate other people’s emotional states, the peak occurred much later, in the 40s or 50s.
    Christopher Chabris, an associate professor of psychology at Union College, said a key feature of the study’s success was the researchers’ ability to gather and analyze so much data, which is unusual in cognitive psychology.
    “You need to look at a lot of people to discover these patterns,” says Chabris, who was not part of the research team. “They’re taking the next step and showing a more fine-grained picture of how cognitive abilities differ from one another and the way they change over time.”
    More work will be needed to reveal why each of these skills peaks at different times, the researchers say. However, previous studies have hinted that genetic changes or changes in brain structure may play a role.
    Researchers have been running large-scale experiments on the Internet, where people of any age can become research subjects. Their websites feature cognitive tests designed to be completed in just a few minutes. Shown here is a “pattern completion test” from one of their websites. Image credit: Jose-Luis Olivares/MIT (with image courtesy of the researchers).
    “If you go into the data on gene expression or brain structure at different ages, you see these lifespan patterns that we don’t know what to make of. The brain seems to continue to change in dynamic ways through early adulthood and middle age,” Germine says. “The question is: What does it mean? How does it map onto the way you function in the world, or the way you think, or the way you change as you age?”
    Accumulated intelligence
    The researchers also included a vocabulary test, which serves as a measure of what is known as crystallized intelligence — the accumulation of facts and knowledge. These results confirmed that crystallized intelligence peaks later in life, as previously believed, but the researchers also found something unexpected: While data from the Wechsler IQ tests suggested that vocabulary peaks in the late 40s, the new data showed a later peak, in the late 60s or early 70s.
    The researchers believe this may be a result of better education, more people having jobs that require a lot of reading, and more opportunities for intellectual stimulation for older people.
    Hartshorne and Germine are now gathering more data from their websites and have added new cognitive tasks designed to evaluate social and emotional intelligence, language skills, and executive function. They are also working on making their data public so that other researchers can access it and perform other types of studies and analyses.
    “We took the existing theories that were out there and showed that they’re all wrong. The question now is: What is the right one? To get to that answer, we’re going to need to run a lot more studies and collect a lot more data,” Hartshorne says.


    Neuroimaging study shows how being in love changes the architecture of your brain

    A research team has used neuroimaging techniques to investigate how being in a romantic relationship produces alterations in the architecture of the brain. They found that being in love is associated with increased connectivity between regions of the brain associated with reward, motivation, emotion regulation, and social cognition.
    “This study provides the first empirical evidence of love-related alterations in the underlying functional architecture of the brain,” wrote Hongwen Song and his colleagues, who published their findings February 13 in Frontiers in Neuroscience.
    The researchers used resting state functional magnetic resonance imaging (rsfMRI) to examine differences in patterns of brain connectivity in 100 college students.
    The students were divided into three groups: the “in-love” group, the “ended-love” group, and the “single” group.
    The researchers found increased resting brain activity in the left dorsal anterior cingulate cortex in the in-love group, suggesting this area of the brain is closely related to the state of falling in love.
    Brain activity in the bilateral caudate nucleus, on the other hand, was significantly decreased in the ended-love group. This brain structure is associated with detection of reward, expectation, representation of goals, and integration of sensory input.
    The researchers also found increased connectivity between the left dorsal anterior cingulate cortex, caudate nucleus, nucleus accumbens, and insula — a brain network associated with reward, motivation, and emotion regulation — among the in-love group.
    The increase in connectivity in these brain regions “may be the result of frequent efforts [of in-love participants] to monitor their own emotional state, as well as their lovers’ emotional state, monitoring conflicts while adjusting cognitive strategies in order to resolve conflicts so as to maintain their romantic relationship,” the researchers explained.
    In addition, the in-love group showed increased connectivity between the temporoparietal junction, posterior cingulate, medial prefrontal cortex, precuneus, and inferior parietal lobe — a brain network associated with social cognition.
    “These results shed light on the underlying neurophysiological mechanisms of romantic love by investigating intrinsic brain activity, and demonstrate the possibility of applying a resting state approach for investigating romantic love,” the researchers concluded.


    Want a Job in Silicon Valley After Yale? Good Luck With That

    One of the world's top universities in most respects, Yale has fallen way behind in computer science

    Max Nova, a technology entrepreneur, graduated from Yale University, as did his father and grandfather. Yet Nova couldn't convince his sister or twin brothers to accept offers of admission to his alma mater because of the school's weak computer science department.

    Nova's sister chose Harvard University and later found a job at Microsoft. His brothers are majoring in computer science at the Massachusetts Institute of Technology. “We're totally getting left in the dust by our peer institutions,” Nova says. “We're getting swamped.”
    Yale, one of the world's top universities in most respects, has fallen behind in computer science. It doesn't crack the highest tier of schools measured by the number of graduates in software companies or by salaries for majors in the discipline; it's struggling to educate throngs of students with a faculty about the same size as three decades ago; top students in the field are opting to enroll elsewhere; the head of its computer science department is publicly complaining; and undergraduates are circulating a petition in protest.

    “The best universities in the world are now judged by the quality of their computer science departments,” reads the petition, distributed this week and signed by more than 450 students. “We are distraught by the condition of Yale's.”

    Yale has long been known for its strength in the humanities. Literature scholars deconstructed texts in cloistered seminar rooms at the center of its Gothic campus, while the more quantitative-minded had to trek up “Science Hill” for their classes. Famed English literature professor Harold Bloom once told the Paris Review that he favored the ballpoint pen over the typewriter and “as far as I'm concerned, computers have as much to do with literature as space travel, perhaps much less.”

    Yale's computer science department has focused more on theory than practical applications, unlike Stanford University, known as the birthplace of Google, or Harvard, associated with Facebook and Microsoft.

    Though many of Yale's science Ph.D. programs such as biology, math, physics, and chemistry are top-ranked, Yale is struggling to adapt to a U.S. economy and educational system reordered by the ascendance of technology. With fewer students majoring in the humanities and a generation of graduates worried about getting good jobs, universities are scrambling to shift resources from traditional subjects into fields once scoffed at as vocational.

    “These are skills needed by anyone in the modern age,” says Jeannette Wing, who oversees research labs worldwide for Microsoft. All students should learn programming, even those studying such fields as archeology and English, she says.

    Yale President Peter Salovey, Provost Ben Polak and Tamar Gendler, dean of the Faculty of Arts and Sciences, say they understand the concerns and plan to address them. Gendler, who became dean in July, said in an e-mail that strengthening computer science is one of her major goals and that the school has made “significant progress” in the past few months because of faculty efforts and generous alumni.

    The administrators spent much of the last month planning a major announcement for March 26 that will boast substantial growth in faculty and “will make clear that our commitment to computer science is serious and substantial,” Gendler said.
    “We all agree that Yale needs a world-class computer science department in order to fulfill its core mission.”
    Yale has beefed up its curriculum with courses such as HackYale that focus on how computer science can be applied to Web startups. No one disputes the quality of Yale's faculty.
    “It's a fine smaller program,” says Ed Lazowska, a computer science professor at the University of Washington, one of the top-ranked programs in the country.
    Even Harvard, the world's richest school, is playing catch-up in computer science. Last year, alumnus Steve Ballmer, the former chief executive of Microsoft, contributed an estimated $60 million to increase Harvard's computer science faculty to 36, from 24. Princeton University is planning to add to its 30 professors to keep up with demand.
    Given its elite academic status, Yale's position is especially surprising. U.S. News & World Report ranks the university at No. 3, behind Princeton and Harvard, and it is one of the nation's most selective schools, admitting only 6.3 percent of its applicants last year. The Ivy League school has a $23.9 billion endowment, second only to Harvard's.
    Yet, starting next school year, the college will use archrival Harvard's famed introductory computer science course, CS50, because it doesn't have an equivalent course of its own. Though some Yale students view the move as humiliating, the computer science department considers it an innovative partnership.
    Since 2007, Yale's computer science department has consistently tied for 20th in U.S. News & World Report's ranking of computer science Ph.D. programs. For the past 10 years, Yale has never risen above 40th among recipients of U.S. National Science Foundation money for computer science research.
    Yale lags Cornell University, Stanford, Harvard, Princeton, and the University of Pennsylvania in U.S. computer science research funding. Yale pulled in $35 million in 2014, compared with the No. 1 school on the list, University of Illinois at Urbana-Champaign, with $170 million.
    As a senior at Choate Rosemary Hall, a private boarding school in Connecticut, Rachel Mellon snubbed Yale, just 14 miles away, for Stanford, near Palo Alto, Calif.
    “Once I visited Stanford, it was a no-brainer,” said Mellon, a junior. “You don't go to Yale to study computer science. You go to Stanford to study computer science.”
    Top college computer science graduates may get as much as $120,000 from tech leaders such as search giant Google and social networking leader Facebook, according to Kai Fortney, marketing director for Hired, a Web-based employment service for the industry. Some new hires may receive signing bonuses of as much as $25,000, he says.
    Computer science majors from the University of California-Berkeley have the highest average starting pay at $82,000 a year, according to a ranking published last year. Yale wasn't in the top 20.
    Yale is absent from LinkedIn's list of top 25 U.S. universities for software developers, based on the percentage of employees from each school at premier companies. Carnegie Mellon University, California Institute of Technology, and Cornell University rank at the top.
    On Yale's campus in New Haven, Conn., these realities are especially stark. The computer science department inhabits a 120-year-old building named after Arthur K. Watson, a 1942 Yale graduate and IBM executive.
    Most recently renovated in 1986, its tiled interior looks more like a post office than a high-tech incubator. Graduate students share tiny offices in which some desks are pushed together face to face. On a recent afternoon, undergraduates packed every seat in a homework help session.
    Joan Feigenbaum, a computer science professor who became head of the department last year, has become a ringleader for those agitating for change. A Stanford Ph.D., she came to Yale from AT&T Labs and has been telling the administration for years that the computer-science faculty needs to expand.
    “I started complaining,” Feigenbaum says. “We have to hire more people. This is ridiculous.”
    Like any top computer scientist, Feigenbaum can rattle off data points to make her case. The number of students majoring in computer science, alone or combined with another subject: 181, almost four times as many as in 2010. Faculty positions: 20, only three more than 15 years ago.
    “We didn't hire all last year, we didn't try to hire all last year,” she says, “and Princeton made 13 offers.”
    In fact, Princeton's computer science department made 14 offers last year and hired five professors, according to Chairman Andrew Appel. To make matters worse, Yale is losing Bryan Ford, a researcher on system security and privacy who won tenure just last year. Feigenbaum calls Ford “brilliant” and “unbelievably productive.”
    Now he's headed to the École Polytechnique Fédérale de Lausanne in Switzerland. Ford says he's moving because of the financial package, access to research money, and a job offer there for his wife, a mathematician.
    “Yale has chosen not to invest in computer science that much,” says Willy Zwaenepoel, a professor heading up the Swiss school's faculty searches. “And they're paying the price for it.”
    Upset about Ford's departure, computer-science graduate students sent an open letter to the administration on Feb. 17, saying that failure to expand will “unequivocally” damage the department. The number of graduate students in the field has declined to 40 from 46 in 2001, in part because the size of the faculty limits their research options.
    “I’m a little biased, but I think computer science isn’t just a fad that’s going to go away tomorrow,” said Debayan Gupta, a Ph.D. student who helped write the letter.   
    Christine Hong, a senior, changed her major from political science to computer science in her sophomore year. Immediately, she had to begin planning for how to take the required courses, because so many are offered only one semester of the year.
    “I couldn’t take a databases course before I graduate because there’s only one teacher for it, and he wasn’t available,” says Hong, who signed the undergraduates’ petition and is headed for a job at Yahoo! at the end of July. “It’s really frustrating.”
    Hong was one of two Yale students among about 100 interns at social networking company LinkedIn last summer, she says. “I didn't meet any other Yale students in Silicon Valley,” she said.
    Two years ago, James Boyle, managing director of the Yale Entrepreneurial Institute, which helps students develop business plans, set up a summer tech boot camp for students interested in start-ups. There were 15 students the first year and 30 the second, with more students interested than could attend.
    Still, it won't be offered this year because of lack of funding, Boyle says. Boyle says he paid for the boot camp from his own budget; when he appealed to the administration for money—from $100,000 to $200,000—to keep the program going, none was forthcoming. In Silicon Valley, Yale alumni are hungry for graduates and are aware of the school's deficiencies, Boyle says.
    “I hear the complaint every trip I take to the Bay Area,” he says. “‘Help me find people from Yale.’”
    On a lunch break, Feigenbaum, the computer science department head, takes a phone call about a promising faculty candidate, now at Microsoft. She's hoping for the best but knows competition will be intense from rivals. Yale's administrators must take action, and Feigenbaum expects they will, she says.
    “The faculty will have to grow,” she says. “I don't see any way around it.” 


    Stop Distinguishing Between Execution and Strategy

    It’s impossible to have a good strategy poorly executed. That’s because execution actually is strategy – trying to separate the two only leads to confusion.
    Consider the recent article, “Why Strategy Execution Unravels — and What to Do About It” by Donald Sull, Rebecca Homkes, and Charles Sull, in the March 2015 issue of HBR. Articles like this are well-meaning and all set out to overcome the shortfalls of “execution.” But they all fail, including this one, and for the same reason: you can’t prescribe a fix for something that you can’t describe. And no one can describe “strategy execution” in a way that does not conflict with “strategy.”
    Blaming poor execution for the failure of your “brilliant” strategy is a part of what I’ve termed “The Execution Trap” — how “brilliant” can your strategy really be if it wasn’t implementable?
    And yet in virtually all writings about problems with “execution” the argument starts by positing that the problem with strategy is that it doesn’t get executed. That is, the authors create a clean logical distinction between “strategy” and “execution.” Then they go on to define “execution” as “strategy.”
    To illustrate, “Why Execution Unravels” defines execution as follows:
    Strategy execution, as we define the term, consists of seizing opportunities that support the strategy while coordinating with other parts of the organization on an ongoing basis. When managers come up with creative solutions to unforeseen problems or run with unexpected opportunities, they are not undermining systematic implementation; they are demonstrating execution at its best.
    The problem is that seizing unexpected opportunities is essentially strategy — not execution. In other words, “Why Execution Unravels” effectively argues that the problem with strategy execution is strategy, which, of course, contradicts the idea that strategy and execution are two separate things.
    For me this produced a flashback to Larry Bossidy, Ram Charan, and Charles Burck’s book Execution: The Discipline of Getting Things Done, published back in 2002. After spending the first 21 pages explaining that “Most often today the difference between a company and its competitor is the ability to execute” and “Strategies most often fail because they aren’t executed well,” the authors provide this definition of “execution”:
    The heart of execution lies in the three core processes: the people process, the strategy process, and the operations process. [Emphasis added]
    This defines the whole of strategy as one of the three core pieces of execution! To the authors, execution is strategy, mere pages after they insisted that execution is completely different from strategy.
    Execution writers fall into this trap because they want to make a distinction between strategy as deciding what to do and execution as doing the thing that strategists decided. But that begs the thorny question, especially in the modern knowledge economy, what exactly does that “doing” activity entail? If it is to be distinct from the antecedent activity of “deciding what to do,” then we pretty much have to eliminate the “deciding” part from “doing.” So to create the desired distinction, we would have to define execution as choice-less doing. There are no choices to be made: just do it (whatever “it” happens to be).
    But most people, including the authors of the article and book above, would agree that “doing” clearly includes some “choosing.” So in the end, the only logic that can be supported by what really happens in organizations is that every person in the organization engages in the same activity: making choices under uncertainty and competition.
    Calling some choices “strategy” and some “execution” is a fruitless distinction. In fact, it is worse than fruitless; it is the source of the observed problems with “execution.” So if organizations experience “bad execution,” it is not because they are bad at the discipline of execution. It is because they call it execution.


    Where is the best place in the world to be a working woman?

    IN SOME countries International Women's Day on March 8th is a public holiday. But it is too early to relax efforts to increase equality for working women. 

    The Nordics are still out in front, according to our latest glass-ceiling index, which shows where women have the best chances of equal treatment at work. It combines data on higher education, labour-force participation, pay, child-care costs, maternity rights, business-school applications and representation in senior jobs. 

    Each country’s score is a weighted average of its performance on nine indicators.

    This year it is Finland that comes out best, overtaking Sweden and knocking Norway off the top spot. It scores highest of the 28 countries in our index for the share of women in higher education (where their lead over males has grown), female labour-force participation and women taking the GMAT (business-school entrance exam), now over 50%. Finland has also increased its paid maternity leave by more than two weeks. Norway still has more women on company boards than other countries, thanks to a 40% mandatory quota that came into effect in 2008, but women's share of senior management jobs is slightly down on last year. While the share of parliamentary seats occupied by women in Norway and Finland has not changed, it fell slightly in Sweden, where the gender pay gap has also widened, and is now closer to the OECD average.

    A newcomer to the index is Turkey, which is among the worst places in the OECD to be a working woman. It has the lowest share of senior management (just 10%) and the largest gap between male and female labour-force participation. 

    In South Korea and Japan, too, the gaps in labour-force participation and pay remain unusually wide, though South Korea scores top for net child-care costs, thanks to generous subsidies. 

    New Zealand has dropped down the ranks since last year, largely because net child-care costs have increased. Germany, meanwhile, has been doing better (or no worse) on all indicators except the number of women taking the GMAT exam, where women make up around a third of all candidates.

    The OECD average shows improvements in the share of women in higher education, on boards and in parliament, as well as in their labour-force participation. 

    But the pay gap between men and women has widened, there are fewer women in senior management and the average maternity leave has come down. The glass ceiling may be cracking, but has by no means shattered.

