
How-to: Tune Your Apache Spark Jobs (Part 1) 04-28


How-to: Tune Your Apache Spark Jobs (Part 1)


Learn techniques for tuning your Apache Spark jobs for optimal efficiency.

When you write Apache Spark code and page through the public APIs, you come across words like transformation, action, and RDD. Understanding Spark at this level is vital for writing Spark programs. Similarly, when things start to fail, or when you venture into the web UI to try to understand why your application is taking so long, you’re confronted with a new vocabulary of words like job, stage, and task. Understanding Spark at this level is vital for writing good Spark programs, and of course by good, I mean fast. To write a Spark program that will execute efficiently, it is very helpful to understand Spark’s underlying execution model.
In this post, you’ll learn the basics of how Spark programs are actually executed on a cluster. Then, you’ll get some practical recommendations about what Spark’s execution model means for writing efficient programs.

How Spark Executes Your Program


A Spark application consists of a single driver process and a set of executor processes scattered across nodes on the cluster.

The driver is the process that is in charge of the high-level control flow of work that needs to be done. The executor processes are responsible for executing this work, in the form of tasks, as well as for storing any data that the user chooses to cache. Both the driver and the executors typically stick around for the entire time the application is running, although dynamic resource allocation changes that for the latter. A single executor has a number of slots for running tasks, and will run many concurrently throughout its lifetime. Deploying these processes on the cluster is up to the cluster manager in use (YARN, Mesos, or Spark Standalone), but the driver and executor themselves exist in every Spark application.

At the top of the execution hierarchy are jobs. Invoking an action inside a Spark application triggers the launch of a Spark job to fulfill it. To decide what this job looks like, Spark examines the graph of RDDs on which that action depends and formulates an execution plan. This plan starts with the farthest-back RDDs (those that depend on no other RDDs or reference already-cached data) and culminates in the final RDD required to produce the action’s results.
The execution plan consists of assembling the job’s transformations into stages. A stage corresponds to a collection of tasks that all execute the same code, each on a different subset of the data. Each stage contains a sequence of transformations that can be completed without shuffling the full data.
What determines whether data needs to be shuffled? Recall that an RDD comprises a fixed number of partitions, each of which comprises a number of records. For the RDDs returned by so-called narrow transformations like map and filter, the records required to compute the records in a single partition reside in a single partition in the parent RDD. Each object is only dependent on a single object in the parent. Operations like coalesce can result in a task processing multiple input partitions, but the transformation is still considered narrow because the input records used to compute any single output record can still only reside in a limited subset of the partitions.
However, Spark also supports transformations with wide dependencies such as groupByKey and reduceByKey. In these dependencies, the data required to compute the records in a single partition may reside in many partitions of the parent RDD. All of the tuples with the same key must end up in the same partition, processed by the same task. To satisfy these operations, Spark must execute a shuffle, which transfers data around the cluster and results in a new stage with a new set of partitions.
For example, consider the following code:
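A minimal hedged sketch of the kind of pipeline being described (the file name and lambdas are illustrative, not the post's original snippet):

    sc.textFile("someFile.txt")
      .map(_.toLowerCase)        // narrow: each output record depends on a single input record
      .flatMap(_.split(" "))     // narrow: outputs still derive from one input record
      .filter(_.nonEmpty)        // narrow: per-record predicate
      .count()                   // the single action, triggering one job with one stage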

It executes a single action, which depends on a sequence of transformations on an RDD derived from a text file. This code would execute in a single stage, because none of the outputs of these three operations depend on data that can come from different partitions than their inputs.
In contrast, this code finds how many times each character appears in all the words that appear more than 1,000 times in a text file.
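A hedged reconstruction of that computation (identifiers and the input path are illustrative):

    val tokenized = sc.textFile("someFile.txt").flatMap(_.split(" "))
    val wordCounts = tokenized.map((_, 1)).reduceByKey(_ + _)         // first shuffle
    val filtered = wordCounts.filter(_._2 > 1000)                     // keep words seen more than 1,000 times
    val charCounts = filtered
      .flatMap(_._1.toCharArray).map((_, 1))
      .reduceByKey(_ + _)                                             // second shuffle
    charCounts.collect()                                              // action: the job runs as three stages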

This process would break down into three stages. The reduceByKey operations result in stage boundaries, because computing their outputs requires repartitioning the data by keys.
Here is a more complicated transformation graph including a join transformation with multiple dependencies.
The pink boxes show the resulting stage graph used to execute it.
At each stage boundary, data is written to disk by tasks in the parent stages and then fetched over the network by tasks in the child stage. Because they incur heavy disk and network I/O, stage boundaries can be expensive and should be avoided when possible. The number of data partitions in the parent stage may be different than the number of partitions in the child stage. Transformations that may trigger a stage boundary typically accept a numPartitions argument that determines how many partitions to split the data into in the child stage.
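As a hedged example (reusing the illustrative tokenized RDD from the sketch above), the second argument to reduceByKey is such a numPartitions argument:

    val coarseCounts = tokenized.map((_, 1)).reduceByKey(_ + _, 200)   // the child stage will run 200 tasks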
Just as the number of reducers is an important parameter in tuning MapReduce jobs, tuning the number of partitions at stage boundaries can often make or break an application’s performance. We’ll delve deeper into how to tune this number in a later section.

Picking the Right Operators


When trying to accomplish something with Spark, a developer can usually choose from many arrangements of actions and transformations that will produce the same results. However, not all these arrangements will result in the same performance: avoiding common pitfalls and picking the right arrangement can make a world of difference in an application’s performance. A few rules and insights will help you orient yourself when these choices come up.
Recent work in SPARK-5097 began stabilizing SchemaRDD, which will open up Spark’s Catalyst optimizer to programmers using Spark’s core APIs, allowing Spark to make some higher-level choices about which operators to use. When SchemaRDD becomes a stable component, users will be shielded from needing to make some of these decisions.
The primary goal when choosing an arrangement of operators is to reduce the number of shuffles and the amount of data shuffled. This is because shuffles are fairly expensive operations; all shuffle data must be written to disk and then transferred over the network. repartition, join, cogroup, and any of the *By or *ByKey transformations can result in shuffles. Not all these operations are equal, however, and a few of the most common performance pitfalls for novice Spark developers arise from picking the wrong one:

  • Avoid groupByKey when performing an associative reductive operation. For example, rdd.groupByKey().mapValues(_.sum) will produce the same results as rdd.reduceByKey(_ + _). However, the former will transfer the entire dataset across the network, while the latter will compute local sums for each key in each partition and combine those local sums into larger sums after shuffling.


  • Avoid reduceByKey when the input and output value types are different. For example, consider writing a transformation that finds all the unique strings corresponding to each key. One way would be to use map to transform each element into a Set and then combine the Sets with reduceByKey.
    This approach results in tons of unnecessary object creation because a new set must be allocated for each record. It’s better to use aggregateByKey, which performs the map-side aggregation more efficiently. A sketch of both versions appears after this list.
  • Avoid the flatMap-join-groupBy pattern. When two datasets are already grouped by key and you want to join them and keep them grouped, you can just use cogroup. That avoids all the overhead associated with unpacking and repacking the groups.
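A hedged sketch of the two versions from the reduceByKey bullet above, assuming an RDD[(String, String)] named pairs (the name and types are illustrative):

    import scala.collection.mutable

    // Wasteful: a new Set is allocated for every record before the shuffle.
    val uniqueByKeySlow = pairs
      .map { case (k, v) => (k, Set(v)) }
      .reduceByKey(_ ++ _)

    // Better: aggregateByKey builds one mutable set per key per partition,
    // then merges the per-partition sets after the shuffle.
    val uniqueByKeyFast = pairs.aggregateByKey(mutable.HashSet.empty[String])(
      (set, v) => set += v,
      (s1, s2) => s1 ++= s2
    )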

When Shuffles Don’t Happen

It’s also useful to be aware of the cases in which the above transformations will not result in shuffles. Spark knows to avoid a shuffle when a previous transformation has already partitioned the data according to the same partitioner. Consider the following flow:
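A hedged sketch of that flow (someRdd and someOtherRdd are the illustrative inputs referred to below):

    val rdd1 = someRdd.reduceByKey(_ + _)        // first shuffle: rdd1 ends up hash-partitioned by key
    val rdd2 = someOtherRdd.reduceByKey(_ + _)   // second shuffle: rdd2 ends up hash-partitioned by key
    val rdd3 = rdd1.join(rdd2)                   // no further shuffle if both sides have the same partitioning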

Because no partitioner is passed to reduceByKey, the default partitioner will be used, resulting in rdd1 and rdd2 both being hash-partitioned. These two reduceByKeys will result in two shuffles. If the RDDs have the same number of partitions, the join will require no additional shuffling. Because the RDDs are partitioned identically, the set of keys in any single partition of rdd1 can only show up in a single partition of rdd2. Therefore, the contents of any single output partition of rdd3 will depend only on the contents of a single partition in rdd1 and a single partition in rdd2, and a third shuffle is not required.
For example, if someRdd has four partitions, someOtherRdd has two partitions, and both the reduceByKeys use three partitions, the set of tasks that execute would look like:
What if rdd1 and rdd2 use different partitioners or use the default (hash) partitioner with different numbers of partitions? In that case, only one of the RDDs (the one with fewer partitions) will need to be reshuffled for the join.
Same transformations, same inputs, different number of partitions:
One way to avoid shuffles when joining two datasets is to take advantage of broadcast variables. When one of the datasets is small enough to fit in memory in a single executor, it can be loaded into a hash table on the driver and then broadcast to every executor. A map transformation can then reference the hash table to do lookups.
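A hedged sketch of that pattern, assuming a small RDD[(K, W)] named smallRdd and a large RDD[(K, V)] named largeRdd (both names are illustrative):

    // Collect the small side, broadcast it, and join map-side with no shuffle.
    val smallLookup = sc.broadcast(smallRdd.collectAsMap())
    val joined = largeRdd.flatMap { case (k, v) =>
      smallLookup.value.get(k).map(w => (k, (v, w)))   // emit a pair only when the key is present
    }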

When More Shuffles are Better

There is an occasional exception to the rule of minimizing the number of shuffles. An extra shuffle can be advantageous to performance when it increases parallelism. For example, if your data arrives in a few large unsplittable files, the partitioning dictated by the InputFormat might place large numbers of records in each partition, while not generating enough partitions to take advantage of all the available cores. In this case, invoking repartition with a high number of partitions (which will trigger a shuffle) after loading the data will allow the operations that come after it to leverage more of the cluster’s CPU.
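A hedged sketch of this approach (the path and partition count are illustrative):

    val raw = sc.textFile("hdfs:///data/large-gzipped-logs/*.gz")   // few partitions: gzip files are unsplittable
    val spread = raw.repartition(500)                               // one extra shuffle into 500 partitions
    val totalLength = spread.map(_.length).sum()                    // downstream work now uses many more cores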
Another instance of this exception can arise when using the reduce or aggregate action to aggregate data into the driver. When aggregating over a high number of partitions, the computation can quickly become bottlenecked on a single thread in the driver merging all the results together. To loosen the load on the driver, one can first use reduceByKey or aggregateByKey to carry out a round of distributed aggregation that divides the dataset into a smaller number of partitions. The values within each partition are merged with each other in parallel, before sending their results to the driver for a final round of aggregation. Take a look at treeReduce and treeAggregate for examples of how to do that. (Note that in 1.2, the most recent version at the time of this writing, these are marked as developer APIs, but SPARK-5430 seeks to add stable versions of them in core.)
This trick is especially useful when the aggregation is already grouped by a key. For example, consider an app that wants to count the occurrences of each word in a corpus and pull the results into the driver as a map.  One approach, which can be accomplished with the aggregate action, is to compute a local map at each partition and then merge the maps at the driver. The alternative approach, which can be accomplished with aggregateByKey, is to perform the count in a fully distributed way, and then simply collectAsMap the results to the driver.
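A hedged sketch of the two approaches, assuming an RDD[String] named words (the name is illustrative):

    // Driver-heavy: build a local map per partition, then merge every map in the driver.
    val countsA = words.aggregate(Map.empty[String, Long])(
      (m, w) => m + (w -> (m.getOrElse(w, 0L) + 1L)),
      (m1, m2) => m2.foldLeft(m1) { case (acc, (w, c)) => acc + (w -> (acc.getOrElse(w, 0L) + c)) }
    )

    // Distributed: aggregate per key across the cluster, then pull the already-small result to the driver.
    val countsB = words.map((_, 1L)).aggregateByKey(0L)(_ + _, _ + _).collectAsMap()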

Secondary Sort

Another important capability to be aware of is the repartitionAndSortWithinPartitions transformation. It’s a transformation that sounds arcane, but seems to come up in all sorts of strange situations. This transformation pushes sorting down into the shuffle machinery, where large amounts of data can be spilled efficiently and sorting can be combined with other operations.
For example, Apache Hive on Spark uses this transformation inside its join implementation. It also acts as a vital building block in the secondary sort pattern, in which you want to both group records by key and then, when iterating over the values that correspond to a key, have them show up in a particular order. This issue comes up in algorithms that need to group events by user and then analyze the events for each user based on the order they occurred in time. 
Taking advantage of repartitionAndSortWithinPartitions to do secondary sort currently requires a bit of legwork on the part of the user, but SPARK-3655 will simplify things vastly.
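A hedged sketch of the legwork involved, assuming events is an RDD keyed by (userId, timestamp) with a String payload (all names are illustrative):

    import org.apache.spark.Partitioner

    // Partition only on userId so all of a user's events land in one partition;
    // the shuffle then sorts each partition by the full (userId, timestamp) key,
    // so iterating within a partition yields each user's events in time order.
    class UserPartitioner(partitions: Int) extends Partitioner {
      override def numPartitions: Int = partitions
      override def getPartition(key: Any): Int = key match {
        case (userId: Long, _) => (userId % partitions).toInt.abs   // userIds assumed non-negative
      }
    }

    // events: RDD[((Long, Long), String)]
    val sortedEvents = events.repartitionAndSortWithinPartitions(new UserPartitioner(100))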

Conclusion

You should now have a good understanding of the basic factors involved in creating a performance-efficient Spark program! In Part 2, we’ll cover tuning resource requests, parallelism, and data structures.


How to Detox Your Lungs Naturally 05-03


wikiHow is a community of volunteer contributors who write simple do-it-yourself "How to" articles. There are at present more than 190,000 articles. Shyam has been associated with wikiHow since April 2010. He has contributed 23 articles, of which 8 have been featured, with total page views exceeding 791,000. Shyam has also edited about 423 articles contributed by others. You can view Shyam's articles Here










How to Detox Your Lungs Naturally

Two Methods: Using Verified Methods and Using Unverified Methods

Keeping yourself detoxified is one of the best things you can do to stay healthy. Your lungs are some of your most important organs, and most human beings can live for mere minutes without air. Therefore, it is important to keep your lungs healthy to ensure that they can perform at their best throughout your life. Although you have little control over the air you breathe, you can take steps to detox your lungs using verified methods that are backed up by science, and unverified methods rooted in naturopathic healing and folk medicine.

Method 1 of 2: Using Verified Methods




1. Cook with oregano to reduce inflammation and congestion. Oregano's primary benefits are due to its carvacrol and rosmarinic acid content. Both compounds are natural decongestants and histamine reducers that have direct, positive benefits on the respiratory tract and nasal passage airflow.[1]

The volatile oils in oregano, thymol and carvacrol, have been shown to inhibit the growth of bacteria such as Staphylococcus aureus and Pseudomonas aeruginosa.

Oregano can be used in cooking in its dried or fresh forms.

A few drops of oregano oil in milk or juice can be taken once a day for as long as you want to receive health benefits.

2. Inhale lobelia to relax your lungs and break up congestion. Lobelia contains an alkaloid known as lobeline, which thins mucus and breaks up congestion.[2]

Additionally, lobelia stimulates the adrenal glands to release epinephrine, relaxing the airways and allowing for easier breathing.

Also, because lobelia helps to relax smooth muscles, it is included in many cough and cold remedies.

Extracts of lobelia inflata contain lobeline, which showed positive effects in the treatment of multidrug-resistant tumor cells.

You may add 5-10 leaves of lobelia and vaporize them for inhalation. Inhale the vapors for 10 minutes each day, morning and evening.



3. Steam treat yourself with eucalyptus to take advantage of its expectorant properties. Eucalyptus is a common ingredient in cough lozenges and syrups and its effectiveness is due to an expectorant compound called cineole, which can ease a cough, fight congestion, and soothe irritated sinus passages.



As an added bonus, because eucalyptus contains antioxidants, it supports the immune system during a cold or other illness.

You may add a few drops of eucalyptus oil into hot water and do a steam inhalation for 15 minutes each day to cleanse the lungs.[3]

4. Take mullein to clear mucus and cleanse the bronchial tubes. Both the flowers and the leaves of the mullein plant are used to make an herbal extract that helps strengthen the lungs.

Mullein is used by herbal practitioners to clear excess mucus from the lungs, cleanse the bronchial tubes, and reduce inflammation present in the respiratory tract.

You can make a tea from one teaspoon of the dried herb and one cup of boiled water.[4]
Alternatively, you can take this herb in tincture form.


5. Use peppermint to soothe your respiratory muscles. Peppermint and peppermint oil contain menthol, a soothing ingredient known to relax the smooth muscles of the respiratory tract and promote free breathing.

Paired with the antihistamine effect of peppermint, menthol is a fantastic decongestant.[5]
Many people use therapeutic chest balms and other inhalants that contain menthol to help break up congestion.

Additionally, peppermint is an antioxidant and fights harmful organisms.

You may chew on 3-5 peppermint leaves each day to enjoy its anti-histaminic benefits.



6. Drink an infusion of elecampane to reap its soothing and expectorant benefits. The root of the elecampane plant helps kill harmful bacteria, lessens coughs, and expels excess mucus.[6]

Elecampane contains inulin, a phytochemical that coats and soothes the lining of the bronchial passages and acts as an expectorant in the body.

In the respiratory system, it gradually relieves any fever that might be present while battling infection and maximizing the excretion of toxins through perspiration.

If you have a tickling cough or bronchitis, elecampane may be able to help.
Because of its action on excess mucus and toxins in the respiratory tract, it is often helpful with emphysema, asthma, bronchial asthma, and tuberculosis.

You can use one teaspoon of herb per cup of water in an infusion, or one-half to one teaspoon of tincture, three times a day for about 3 months.



7. Take hot showers to clear your lungs. Taking a shower with hot water for twenty minutes can be really helpful in clearing out your lungs.

If you can sit in a sauna, the hot air will be even more effective in clearing your lungs.
It is very important to allow your body to get rid of toxins through sweating.

A sauna or hot water increases the secretion of sweat, and helps the lungs rid themselves of toxic substances.

8. Stop smoking to protect your lungs from toxins. Smoking tobacco is a great way to introduce a variety of toxins directly into your lungs.

Tobacco smoke, nicotine, and the variety of other unhealthy substances found in cigarettes wreak havoc on your respiratory tract.

In addition to lowering your lung capacity, smoking puts you at risk for cancer and other long-term health complications.



9. Stay away from common toxic products. Eliminate household toxins that are part of detergents, cleansers, bleaches, and chemically scented air fresheners that have strong fragrances and might harm the lungs.

Pesticides must go as well, and there are alternatives that aren't toxic for humans.
All toxic commercial pesticides emit caustic gases or vapors that irritate the lungs.
Simply get some nice indoor plants that add life to your dwelling while removing toxins.

Method 2 of 2: Using Unverified Methods


1. Drink sage tea to dispel lung disorders. Sage’s textured leaves give off a heady aroma, which arises from sage’s essential oils. These oils are the source of the many benefits of sage tea for lung problems and common respiratory ailments.

Sage tea is a traditional treatment for sore throats and coughs.

The rich aromatic properties arising from sage’s volatile oils of thujone, camphor, terpene and salvene can be put to use by inhaling sage tea’s vapors to dispel lung disorders and sinusitis.

Alternatively, brew a strong pot of sage tea and place it into a bowl or a vaporizer.
Inhale the vapors for about 5-10 minutes 2-3 times a day, or for as long as you wish, since it is healthy and perfectly safe.




2. Eat boiled plantain leaf to soothe irritated mucous membranes. With fruit that is similar in appearance to a banana, plantain leaf has been used for hundreds of years to ease coughs and soothe irritated mucous membranes.

Many of its active ingredients show antibacterial and antimicrobial properties, as well as being anti-inflammatory and antitoxic.

Plantain leaf has an added bonus in that it may help relieve a dry cough by stimulating mucus production in the lungs.

One may eat a boiled plantain fruit or sip on a decoction of 1-2 brewed plantain leaves.
You may continue this each day for about 2-3 months to take advantage of its healing benefits on the lungs.



3. Drink licorice root tea to clear out mucus in the lungs. Licorice is one of the most widely consumed herbs in the world to eliminate toxins from the lungs. Licorice is very soothing, and softens your mucous membranes in the throat, lungs, and stomach.

It reduces the irritation in the throat and has an expectorant action (loosening phlegm to be expelled).

It loosens the phlegm in the respiratory tract, so that the lungs can expel the mucus.
It also has antibacterial and antiviral effects which help fight off viral and bacterial strains in your body that can cause lung infections.

You can use one teaspoon of licorice root per cup of water in an infusion, or one teaspoon of tincture, 3 times a day.



4. Vaporize cannabis to open up your airways and sinuses. If it's legal in your area, use vaporized cannabis for about 5 minutes each day to open up your airways and sinuses.

Vaporizing cannabis mitigates the irritation to the oral cavity that comes from smoking.
Cannabis is perhaps one of the most effective anti-cancer plants in the world.

It also stimulates your body’s natural immune response and significantly reduces the ability of infections to spread.

Cannabis has even been shown to treat and reverse asthma.



5. Drink watercress (Nasturtium officinale) tea to eliminate toxins. Watercress has the ability to eliminate the toxins from tobacco and decrease the chance of these toxins resulting in lung cancer.

This ability is due to an active ingredient that acts on a series of enzymes, preventing the development of cancer cells.

Watercress is used to make a simple and delicious soup, which efficiently cleanses the lungs of toxins.

It is recommended that you consume this soup twice a month, especially if you are an active or a passive smoker.

Watercress soup:

1 kg of watercress (flowers and stems)
2 cups of dates
4 cups of water

Put all ingredients in a pot over a low flame. When it boils, reduce the heat and allow it to simmer for a minimum of four hours. If the foam starts forming at the surface of the soup, remove it with a spoon. Once the soup is ready, season it according to your taste.

Note: It is very important to use the correct ratio of ingredients and cook the soup for a minimum of four hours. After such a long cooking time the soup becomes tasty, nutritious, and effective in detoxifying the lungs.



6. Try ginger to prevent lung cancer. Ginger is a powerful tool for detoxification of the lungs and prevention of lung cancer.

You can use it in many ways including ginger root tea mixed with lemon, which facilitates breathing and promotes the elimination of toxins from the respiratory tract.

You can also create a warm bath with powdered ginger. The bath should last at least twenty minutes.

The ginger bath opens pores and stimulates sweating, which helps eliminate toxins.
The steam you inhale goes directly into the airways and eases the process of purifying the lungs.

With every meal, you may eat a tiny piece of ginger.

This will improve your digestion and will contribute to the process of cleansing the body.

7. Use castor oil packs to draw toxins out of the body. Castor oil packs are easy to make at home and are great for drawing toxins out of the body. Castor oil has long been appreciated as a general health tonic and is believed to stimulate circulation and waste elimination.

Castor oil packs can be placed on your chest, perhaps similar to a vapor rub, and can break up congestion and toxins.

While the packs are not expensive to make, it is essential to use only organic, cold-pressed castor oil.

By using cold-pressed oil, you can be reasonably certain it will contain the vital compounds such as phytonutrients, undecylenic acid, and especially ricinoleic acid that are beneficial to the body.

Carefully warm about 8 oz. of castor oil in a pot on the stove to a comfortable temperature and then soak 12″x6″ strips of cloth in the oil.

Being careful not to spill the oil, take the pot with you to where you plan to lie down. Use a small piece of plastic like a “glove” to handle the packs.

Lie down on a plastic sheet, and then lay 3-4 strips over your chest and sides covering the lung areas. Do this on the right and left sides.

Then, cover the packs with a larger section of plastic and lay your heating pad over the plastic covered castor oil packs. Keep it there for 1-2 hours.

Alternate the heating pad from right to left sides.

It is believed that this helps break up and draw out stored toxins and congestion from the lungs.

8. Take an osha root extract to increase circulation to the lungs. Osha roots contain camphor and other compounds that make it one of the best lung-supporting herbs. One of the main benefits of osha root is that it helps increase circulation to the lungs, which makes it easier to take deep breaths.

Also, when seasonal allergies inflame your sinuses, osha root can produce a similar effect to antihistamines and may help calm respiratory irritation.

An infusion prepared with the roots of osha can be taken orally to cure a number of medical conditions.

In addition, fresh liquid extract of the herb's roots can also be used internally.
The standard dose of the infusion prepared with osha root is one or two teaspoonfuls of cut and crushed, freshly obtained root infused for approximately 25 minutes.

If you are taking the root in a liquid extract, ensure that its strength is at a ratio of 1:1:8.
Take 20 to 60 drops of this root liquid extract once to four times every day.

9. Drink a lungwort tea to relieve various lung conditions. Lungwort is a tree-growing lichen that actually resembles lung tissue in appearance, and hence is used for various lung conditions.

Lungwort clears tar from the upper respiratory tract, nose, throat, and upper bronchial tubes, while helping the body soothe the mucous membranes in these regions.

It also has an anti-inflammatory action and is good for bronchitis.

As an infusion, mix one to two teaspoons of dried herb per cup and drink one cup three times a day.

10. Maintain a healthy diet to detox your whole body. Like all other types of detoxification, lung cleansing necessitates dietary changes.

A healthy diet is important because it stimulates the natural cleansing mechanisms of the body and strengthens the immune system.

During cleansing, it is recommended that you consume more water, fruits and vegetables.
Drink a cup of lemon juice before breakfast; lemon helps the lungs renew themselves with its high vitamin C content and is easy to digest.

Drink a glass of grapefruit juice because it contains natural antioxidants and enhances the detoxing of your circulatory system.

Have a cup of carrot juice in the period between breakfast and lunch. Carrots contain vitamins A and C, which help to clean the respiratory system and boost immunity.

11. Consume a good amount of potassium. Potassium is one of the most detoxifying nutrients, especially when taken in liquid form.

To prepare a cup of juice rich in potassium, place some carrots, celery, spinach, parsley, and green algae in a blender.

12. Eat spicy foods to break down excess mucus. Chilies help break down excess mucus in the lungs and the body in general.

That is why when you eat spicy foods, you can immediately feel your nose beginning to run.

In the same way, spicy foods affect the excess mucus and tar in the lungs, helping your body eliminate them more easily.

13. Drink water to stay hydrated. Plain water is the best thing to drink while you are detoxing. Good hydration is key to good health and speeds up the process of detoxification.

Try to avoid sodas, coffee, and alcohol.

14. Do breathing exercises to facilitate clear lungs. Breathing exercises are one of the best ways to cleanse your lungs. There are many types of exercises for detoxifying your lungs. 

Try this one:

Take a standing position. Be relaxed. Keep your arms at your sides and your feet slightly apart. Take a few deep breaths and exhale through the nose.

Now, breathe in through your nose and exhale slowly through your mouth deeply until you cannot exhale anymore.

But don't stop here, because there is still air left in your lungs. Some air always remains in the lungs and is not replaced with fresh air as we breathe.

Now, force your diaphragm to exhale all the air from your lungs with a wheezing sound.
Do this several times, exhaling through your mouth with a deep puff until you feel there is no more air in the lungs. At this point you will notice that you have pulled in your belly toward the spine.

Through your nose, slowly inhale fresh, clean air into your empty lungs.

Fill your lungs with fresh air, and then hold your breath for five seconds, counting them slowly.

Repeat the process to expel the remaining air out of the lungs. Repeat as many times as you like but at least 50 times each day.

Besides purifying the lungs, this exercise has another benefit: your stomach muscles will eventually become strong and taut.

Please read the original wikiHow article Here

It’s Not a ‘Stream’ of Consciousness 05-11


It’s Not a ‘Stream’ of Consciousness

IN 1890, the American psychologist William James famously likened our conscious experience to the flow of a stream. “A ‘river’ or a ‘stream’ are the metaphors by which it is most naturally described,” he wrote. “In talking of it hereafter, let’s call it the stream of thought, consciousness, or subjective life.”

While there is no disputing the aptness of this metaphor in capturing our subjective experience of the world, recent research has shown that the “stream” of consciousness is, in fact, an illusion. We actually perceive the world in rhythmic pulses rather than as a continuous flow.

Some of the first hints of this new understanding came as early as the 1920s, when physiologists discovered brain waves: rhythmic electrical currents measurable on the surface of the scalp by means of electroencephalography. Subsequent research cataloged a spectrum of such rhythms (alpha waves, delta waves and so on) that correlated with various mental states, such as calm alertness and deep sleep.

Researchers also found that the properties of these rhythms varied with perceptual or cognitive events. The phase and amplitude of your brain waves, for example, might change if you saw or heard something, or if you increased your concentration on something, or if you shifted your attention.

But those early discoveries themselves did not change scientific thinking about the stream-like nature of conscious perception. Instead, brain waves were largely viewed as a tool for indexing mental experience, much like the waves that a ship generates in the water can be used to index the ship’s size and motion (e.g., the bigger the waves, the bigger the ship).

Recently, however, scientists have flipped this thinking on its head. We are exploring the possibility that brain rhythms are not merely a reflection of mental activity but a cause of it, helping shape perception, movement, memory and even consciousness itself.

What this means is that the brain samples the world in rhythmic pulses, perhaps even discrete time chunks, much like the individual frames of a movie. From the brain’s perspective, experience is not continuous but quantized.

Another clue that led to this discovery was the so-called wagon-wheel illusion, in which the spokes on a wheel are sometimes perceived to reverse the direction of their rotation. This illusion is easy to induce with a strobe light if the rotation of the wheel is such that each strobe flash captures the spoke location slightly behind the location captured on the previous flash, leading to the perception of reverse motion. The illusion results from “sampling” the scene in discrete frames or time chunks.

The telling fact, for perceptual scientists, is that this illusion can also occur during normal observation of a rotating wheel, in full daylight. This suggests that the brain itself, even in the absence of a strobe light, is sampling the world in discrete chunks.

Scientists have uncovered still more clues. It turns out, for example, that our ability to detect a subtle event, like a slight change in a visual scene, oscillates over time, cycling between better and worse perceptual sensitivity several times a second. Research shows that these rhythms correlate with electrical rhythms of the brain.


Consider a study that I conducted with my colleagues, forthcoming in the journal Psychological Science. We presented listeners with a three-beat-per-second rhythm (a pulsing “whoosh” sound) for only a few seconds and then asked the listeners to try to detect a faint tone immediately afterward. The tone was presented at a range of delays between zero and 1.4 seconds after the rhythm ended. Not only did we find that the ability to detect the tone varied over time by up to 25 percent — that’s a lot — but it did so precisely in sync with the previously heard three-beat-per-second rhythm.

Why would the brain do this? One theory is that it’s the brain’s way of focusing attention. Picture a noisy cafe filled with voices, clanging dishes and background music. As you attend to one particular acoustic stream — say, your lunch mate’s voice — your brain synchronizes its rhythm to the rhythm of the voice and enhances the perceptibility of that stream, while suppressing other streams, which have their own, different rhythms. (More broadly, this kind of synchronization has been proposed as a mechanism for communication between neural networks within the brain.)

All of this points to the need for a new metaphor. We should talk of the “rhythm” of thought, of perception, of consciousness. Conceptualizing our mental experience this way is not only more accurate, but it also situates our mind within the broader context of the daily, monthly and yearly rhythms that dominate our lives.

View at the original source

Are You Ready for Personalized Predictive Analytics? 05-11


Are You Ready for Personalized Predictive Analytics?




Predictive analytics have the potential power to "produce remarkable services and longer lives," says James Heskett. But can businesses make bets in this area without first understanding the social consequences? What do YOU think?

In 2002, the film Minority Report introduced many of us to the world of predictive analytics. In it, an innovative technology allows Washington, D.C. to go without a murder for six years by helping Tom Cruise, chief of the Precrime Unit, to identify, arrest, and prosecute killers before they commit their crimes.
This was a case of the movies catching up to the business world. At that time, predictive analytics had been applied to the continuing maintenance of everything from CAT scan machines produced by GE to elevators made by Otis. It enabled these firms to sell "up time" rather than just products, thanks to a number of sensors and the continuing remote surveillance of the performance of these products.
Predictive analysis applied to humans is now one of the hottest concepts to come along. It is being made possible by a system of customer loyalty programs, big data, and cloud computing that enables the continuous collection, storage, combination, and analysis of data about each of us from a number of disparate sources. Pretty exciting, no?
Some years ago, we heard the story about the GE maintenance engineer who, based on information from the firm's advanced monitoring and predictive analytics, visited one of his hospital accounts to repair a CAT scan machine that had not yet failed. As he was confronted by puzzled hospital administrators, the machine indeed stopped functioning. More recently, many of us have heard the story about the Target customer who was sent information about products of interest to pregnant women before she knew she was pregnant. Target's Big Data analysis of her purchases and those of others, combined with related information, had placed her in a cohort with other women known to be pregnant.
Predictive analytics will be essential to the development of concepts such as 30-minute package delivery that companies like Amazon have been contemplating. For years, logistics have been managed by principles such as that of "postponement and speculation." The idea is that to approach the best match between supply and demand at a reasonable cost, a supplier has two basic choices. One is to delay (postpone) committing inventory to a particular supply point for as long as possible through such things as careful forecasting of demand, rapid manufacture, and fast transport. The other is to invest (speculate) in long but economical production batches, slow but economical transportation, and large amounts of inventory that ensure an in-stock position when an order is received.
An argument can be made that any forecast and inventory is based on predictive analytics. But in the past, these analytics were applied to data that described behaviors of large groups of decision-makers. By contrast, tomorrow's version of this technique will be based on the analysis of massive files of individual profiles, from which predictions will be built that establish stock levels needed to support 30-minute deliveries. Personalized logistics will take a lot more than just drones.
Predictive analytics have the potential to produce remarkable services and longer lives. But before we become too enamored with them, it's important to remember what happened to Tom Cruise in the movie. He is eventually accused on a precrime basis of murder, with only 36 hours to determine whether the charge is accurate and, if not, who implicated him wrongly.
How important are these concepts to our future? Is this a big deal or just another buzz term in business for the next several years? Are you ready for predictive analytics applied to you? If not, what are you going to do about it? What do you think?

When One Business Model Isn’t Enough 05-31


When One Business Model Isn’t Enough




Trying to operate more than one business model at a time is devilishly difficult—and frequently cited as a leading cause of strategic failure. Yet situations abound where a company may wish or need to address several customer segments, using a particular business model for each one. To crowd out competitors or forestall potential disruptors in its current markets, to expand into new markets, to make more efficient use of fixed assets and other resources, or to develop new income streams may all ideally require distinct business models that operate in tandem.

IBM and Compaq, for instance, supplemented their reseller distribution model with a direct-sell model to counteract Dell’s growth in the 1990s. Netflix runs two business models for its DVD-by-mail and its streaming-video services. In emerging markets a bank sometimes creates a separate company to offer credit to low- and middle-income customers, as Banco Santander-Chile has done with Banefe. The forestry company Celulosa Arauco turns its trees into paper pulp under one business model and into wood panels for high-end furniture under another.

Nowhere have the perils of running tandem business models been more evident than in the airline industry, where so many full-service carriers have met with so little success in introducing no-frills offerings to compete with low-cost competitors such as EasyJet and Southwest. Witness what happened to British Airways’ Go Fly, Continental Lite, KLM’s Buzz, and Delta’s Song.

That’s what makes the case of LAN Airlines, which successfully operates three business models at once, so remarkable. The Chilean carrier has thrived by integrating a full-service international passenger-airline business model with an air-cargo business model while separately operating a no-frills passenger model for domestic flights. In fact, the word “thrived” is too modest: From 1993 to 2010, LAN posted 17% compound annual revenue growth through good times and bad (from $318 million in 1993 to $4.2 billion in 2010), while steadily raising annual net profits from zero to $420 million. 

LAN’s market capitalization, at $8.9 billion as of March 11, 2011, exceeds that of most of its main global rivals—US Airways ($1.5 billion), American Airlines ($2.2 billion), Korean Air ($3.7 billion), British Airways ($6.9 billion), and United-Continental ($8.1 billion). It even tops that of upstart Ryanair ($6.9 billion) and every other Latin American airline. From 1998 to 2010 LAN’s share price, adjusted by dividends and splits, has grown by more than 1,500%.

LAN Airlines has succeeded where its rivals have not through a more subtle appreciation of the way different business models relate to one another. Certainly, many business models conflict, as in Netflix’s high-profile case. Others, like the models for digital and film photography, are clear substitutes for each other. No doubt such models should be operated separately, and perhaps, only sequentially.

As LAN Airlines’ experience makes clear, however, other business models are complementary. Indeed, they may be so mutually reinforcing that together they turn otherwise unviable possibilities into profitable opportunities. A company that recognizes which models are substitutes that must be kept separate and which are complements that strengthen each other can build a uniquely sustainable competitive advantage. Let’s look at how LAN has used that insight to its benefit.

How LAN’s Three Models Interrelate

LAN operates its full-service international passenger-carrier business in much the same way as other global carriers do. It offers frequent flights to major destinations through its own hubs and via alliances with other airlines. It has two classes (coach and business) of amenity-filled service, featuring complimentary hot meals and beverages, multilingual personal-entertainment units in coach, and fully flat beds in business class. 

Likewise, its no-frills domestic operation has essential elements in common with Southwest’s and Ryanair’s: It is a lower-cost, lower-overhead model characterized by fewer amenities, internet ticketing, shorter turnaround times, and a uniform fleet of single-aisle planes from which the kitchens have been removed to increase seating capacity.

Some complementary business models may be so mutually reinforcing that together they turn otherwise unviable possibilities into profitable opportunities.

What sets LAN apart is its cargo business—a premium service like its international passenger operation. It transports salmon from Chile, asparagus from Peru, fresh flowers from Ecuador, and other such perishables to the U.S. and Europe while flying high-value-to-weight merchandise such as computers, mobile phones, and small car parts from the U.S. and Europe to Latin America.

LAN is unusual among passenger carriers in its reliance on cargo revenue—accounting, by the second quarter of 2011, for 31% of its total revenue (compared with less than 5% for American, Delta, and United-Continental). Although Korean Air and Cathay Pacific both also derive about a third of their revenue from cargo, LAN is distinctive in that it transports fully 35% of its shipments in the belly of wide-body passenger aircraft, which serve most of its cargo destinations. In fact, the bulk of LAN’s cargo business operates on the same route network with its passenger business.


In all three of LAN’s models, the key to profitability is the same: flying more planes, more fully loaded, to more places. However, when LAN set out in 2007 to introduce no-frills flights on domestic routes, it knew it could not do that by combining passengers and cargo on those routes. The goal was to increase profitability and preempt the threat from some Latin American version of Ryanair or Southwest, initially on flights within Chile and Peru and later on routes to Argentina, Ecuador, and Colombia.


But on the one hand, demand for air-cargo transport was far lower in domestic markets than it was internationally, given that goods could instead be carried by truck, train, or boat. What’s more, its local markets generated little demand for the perishables that LAN was transporting farther abroad. And perhaps most critically, the narrow-body aircraft used on the short-haul routes were not big enough to carry sufficient cargo.


On the other hand, passenger demand for LAN’s domestic air travel is highly elastic: By lowering fares on short-haul routes by 20%, LAN could attract up to 40% more passengers, enabling it to invest in newer, more efficient planes, which could fly more hours per day. The implication was that the most direct (perhaps the only) way to increase capacity utilization for domestic flights was with low fares, made possible solely by offering a basic level of service to drive down costs.

This logic has been borne out, as lower fares have led to dramatic increases in demand: From 2006 to 2010, the number of passengers on LAN’s domestic flights increased 83% within Chile, 123% in Peru, and 200% in Argentina, allowing LAN to reach its goal of increasing aircraft utilization on its short-haul routes from eight to 12 hours a day. LAN now holds the largest market share of passenger traffic within Chile and Peru and is increasing its market share in other South American countries.

LAN also has the largest market share of passenger traffic to and from Chile, Peru, and Ecuador, as well as approximately 37% of the Latin American air-cargo market, as its complementary full-service passenger and cargo operations have yielded many mutually reinforcing advantages. These include:

Maximal use of physical assets.


Consider the following example: A LAN flight from Miami arrives in Santiago, Chile, at 5:00 AM. It continues to another Latin American city, say Bogotá, Lima, or Buenos Aires, to deliver cargo from the U.S. Then it returns to Santiago to fly customers back to Miami or New York, because passenger flights to the U.S. from South America are at night. Meanwhile, competitors with no cargo operation are forced to park their aircraft at Santiago’s airport for most of the day. The advantages of increased utilization of as costly an asset as a wide-body aircraft are easy to see.

Reduction of the break-even load factor (BELF).


By combining cargo and passenger operations, LAN can profitably fly where other airlines cannot, because the number of passengers or amount of cargo it needs to break even on each flight is lower than if LAN were transporting only one or the other. In 2010, for instance, the BELF percentage for LAN’s Santiago-Miami route would have been 68% if the aircraft had flown only passengers, but transporting cargo as well lowered it to 50%. What’s more, without cargo, LAN’s Santiago-Madrid-Frankfurt route, to take just one, would have terminated in Madrid, because going on to Frankfurt is not profitable when carrying only passengers.

Diversification of revenues and profits.


By transporting both cargo and passengers, LAN can keep flying routes profitably when demand falls, as the two businesses seldom dip to the same degree in tandem. Even in the depths of the Great Recession in 2009, when cargo demand was down 10.1%, passenger travel dropped by only 3.5%. So LAN did not have to contract operations as much as its cargo-only competitors did, and it consequently was ready the next year to take advantage of renewed demand that those carriers could not accommodate.

Reduced threat of entry by other airlines.


As LAN increases the number of routes it serves, it decreases the probability that other carriers can profitably enter into its markets.

One-stop shop for cargo in Latin America.


The ability to fly more routes profitably creates a virtuous circle. More routes mean more value for customers, enabling LAN to charge premium prices, thereby generating revenue to support even more routes and to eventually become the one-stop shop for cargo distribution in Latin America. (See the exhibit “How Two Business Models Complement Each Other.”) The rock group The Police, for instance, used LAN to transport a stage show that filled two jumbo jets for an eight-concert Latin American tour. Less exotic clients, such as smartphone and computer hardware makers, have proven similarly willing to pay a premium for the convenience of having a single company handle all their shipping needs in Latin America.

The Challenge of Managing Multiple Models


Why doesn’t every airline do what LAN does? Part of the answer is historical: The Cueto family, one of the two groups that purchased LAN when the Chilean government fully privatized it, in 1994, had begun in the cargo business with Fast Air during the 1970s. So the family knew the business well and could readily see, in the context of a combined cargo and passenger service, the profit potential of LAN’s international routes, its wide-body aircraft, and its reputation for reliability.

But to recognize the potential and to capitalize on it are two different things. To say that two models complement each other is not to say that combining them is easy. In fact, the learning curve can be steep, favoring those, like LAN, that climb it first. Among LAN’s chief challenges in combining its cargo and international passenger models, while keeping its low-cost model separate, were these:

Additional complexity.


To plan for both businesses, LAN must dynamically coordinate a sophisticated passenger-yield management system, which raises and lowers ticket prices to manage demand levels, with an active cargo-capacity management system that similarly varies rates on cargo. LAN also needs to assign that cargo optimally to either the passenger or the freight planes, which it does through a complex logistics system that coordinates cargo and passengers. Given that both divisions are profit centers, possible conflicts must be managed carefully.

 Therefore, LAN has imposed an additional criterion for passenger fares that its global long-haul competitors do not need: The lowest passenger fare must be at least as large as the revenue that LAN would obtain if the weight burden of the passengers were allocated to cargo. In this way, LAN gives priority to carrying people in its wide-body passenger aircraft but also ensures that the minimum passenger fare covers the cost of cargo of similar weight.

Broader organizational skills.


LAN’s three businesses require different sales and marketing efforts and a sometimes mind-boggling variety of technical skills to maintain its premium services. For instance, at the same time that LAN was extensively training its flight and maintenance crews for its passenger business (ultimately winning it several awards for service), it needed to train employees in how to care for pigs and horses in its cargo-only planes.

Greater employee flexibility.


Flying more planes to more places means that LAN’s pilots must fly even on two hours’ notice, half the time typical for a U.S. legacy airline. That would not be possible if LAN had not created a culture that fosters flexibility by instituting a performance-related pay and bonus structure, both for management and for administrative and flight personnel. Notably, though, in 2010 LAN’s wages were a lower percentage of its total costs relative to the percentage at many U.S. and European carriers.


No two business models share all resources, of course. In Miami, for example, where LAN’s cargo operations are headquartered, the company has almost 500,000 square feet of dedicated warehouse space and other cargo facilities that its passenger competitors do not need. Furthermore, to serve Latin America comprehensively, regulatory constraints preventing non-national companies from operating within certain countries have impelled LAN to create a series of separate companies for its no-frills short-haul passenger service: LAN Peru, LAN Ecuador, LAN Colombia, and LAN Argentina. It has also set up additional operating structures through alliances in Mexico and several other countries.


Distinguishing Complements From Substitutes

Operating three business models is clearly not without its risks—but meeting the challenge offers uniquely sustainable benefits. LAN was able to minimize the risks and capture the benefits by combining two complementary models and carefully keeping a competing model separate. But how did it tell which was which?

Our analysis suggests that to determine whether two business models are complements or substitutes, executives should consider two questions:

  • To what extent do the business models share major physical assets?
  • To what extent are the resources and capabilities that result from operating each business model compatible?
The greater the number of critical assets the models share, and the greater the number of shared capabilities and resources that result from the operation of the models, the more likely that combining the two models will yield a more valuable result. (See the exhibit “Are Your Business Models Complements or Substitutes?”)

In LAN’s case, the major physical assets are its wide-body planes, which the cargo and international passenger models share but the low-cost domestic operations do not. Equally critical is the cascade of advantage-enhancing resources and capabilities produced by combining the cargo and full-fare passenger models:

  • Decreasing the break-even load factor by combining cargo and passengers, thereby allowing LAN to fly to more places, creates value in both businesses and, thus, expands LAN’s markets and revenues.
  • Using the growing revenues provided by cargo operations to underwrite better service to passengers and vice versa further increases customers’ willingness to pay for both offerings.
  • Flying to more places makes it harder for other airlines to enter and grow in the Latin American market for either cargo or passengers, which sustains LAN’s advantage.
  • The skills that LAN has had to develop to optimize the use of aircraft and the network of routes for both passengers and cargo have further increased barriers to imitation in both markets.
  • LAN has become the leading passenger airline connecting Latin America to the rest of the world and the one-stop shop for cargo in the region. That increases switching costs for cargo customers and convenience for passengers, further boosting demand for both passenger and cargo service and thereby strengthening LAN’s advantage.
LAN teaches its crews to provide award-winning passenger service while training employees to care for pigs and horses on its cargo-only planes.

LAN’s low-cost domestic business does share in some of those capabilities and resources—the skills developed to efficiently schedule flights and maintain aircraft, the flexibility of its workforce, its understanding of the regulatory requirements for its various Latin American operations, and its capacity to fly customers and cargo to, from, and within Latin America. But LAN’s critical physical assets can’t be shared, and most of the capabilities and resources essential to the domestic operation—the brand, the reputation for low fares, the emphasis on efficiencies to lower costs—conflict with those of a premium, higher-cost offering. Those realities dictate that LAN operate the no-frills model separately.

It’s far rarer for two business models to have critical assets, capabilities, and resources in common than not. That fact no doubt contributes both to the high failure rate of companies that use more than one model at a time and to the sense that firms that even contemplate running multiple models do so at their own risk.

But the lesson of LAN Airlines points to another form of risk—for LAN’s competitors. By mastering three models—and by deeply understanding how complementary models generate unique opportunities—LAN has built, in both passenger and cargo service, formidable competitive advantages that are becoming increasingly difficult for competitors to overcome.

LAN’s competitive advantage in international passenger service would vanish if the company did not have a thriving cargo business; likewise, its advantages in cargo would not exist without a blooming passenger business. Competitive strategy is all about building advantage by protecting a unique position and exploiting a distinctive set of resources and capabilities. Viewed in this light, the implementation of multiple business models is not a risk but rather a new tool for strategists. Properly applied, it will help firms boost their ability to create and capture value—and to gain durable advantage.

Multiple Models 05-31


How Two Business Models Complement Each Other

Simultaneous investment in LAN Airlines’ passenger and cargo businesses creates a virtuous circle by increasing volume and aircraft utilization, which decreases the break-even load factor and increases the attractiveness of new routes. Adding more routes leads to greater economies of scale and scope, boosts customers’ willingness to pay, and increases revenues and profits—thereby providing a funding source for further expansion.
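To see the break-even arithmetic behind this virtuous circle, consider the back-of-the-envelope sketch below; all cost and revenue figures are invented for illustration and are not LAN data. Crediting belly-cargo revenue against the cost of a flight lowers the share of passenger seats that must be sold before the flight pays for itself.

# Minimal sketch: all figures below are invented for illustration, not LAN data.

def break_even_load_factor(trip_cost: float, revenue_at_full_load: float,
                           cargo_revenue: float = 0.0) -> float:
    """Fraction of passenger seats that must be sold for the flight to break even,
    after any cargo revenue carried on the same aircraft is credited against trip cost."""
    return max(trip_cost - cargo_revenue, 0.0) / revenue_at_full_load

trip_cost = 200_000.0               # assumed total cost of one wide-body flight (USD)
full_passenger_revenue = 250_000.0  # assumed revenue if every seat were sold
cargo_revenue = 60_000.0            # assumed belly-cargo revenue on the same flight

print(f"Passengers only: {break_even_load_factor(trip_cost, full_passenger_revenue):.0%}")
print(f"With cargo:      {break_even_load_factor(trip_cost, full_passenger_revenue, cargo_revenue):.0%}")
# Passengers only: 80%; with cargo: 56%.

Even with these made-up numbers, the direction of the effect matches the exhibit: cargo revenue lowers the bar a route must clear, so more routes become economically viable.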





Are Your Business Models Complements or Substitutes?

Business models are more likely to be complements rather than substitutes—and to generate greater value together than apart—if, when you consider these two questions, your answers fall closer to the right side of the spectrum than to the left.

https://hbr.org/resources/images/article_assets/hbr/1201/R1201M_B.jpg

QUESTION 1 To what extent do the business models share major physical assets?

QUESTION 2 To what extent are the resources and capabilities that result from operating each business model compatible?



A new grasp on robotic glove 06-08


A new grasp on robotic glove

Soft, lightweight robotic glove assists with grasping objects independently.

Having achieved promising results in proof-of-concept prototyping and experimental testing, a soft robotic glove under development by Conor Walsh and a team of engineers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and Wyss Institute for Biologically Inspired Engineering could someday help people suffering from loss of hand motor control regain some of their independence.





Most patients with partial or total loss of their hand motor abilities due to muscular dystrophy, amyotrophic lateral sclerosis (ALS), or incomplete spinal cord injury report a greatly reduced quality of life because of their inability to perform many activities of daily living. Tasks often taken for granted by the able-bodied — buttoning a shirt, picking up a telephone, using cooking and eating utensils — become frustrating, nearly impossible feats due to reduced gripping strength and motor control.
The stage is now set for that to change, however, thanks to Walsh’s expertise in soft, wearable robotic systems and a development approach that involves the glove’s potential end users in every step of testing and development. The holistic approach ensures that technology development goes beyond simple functionality to incorporate social and psychological elements of design that promote seamless adoption by its end users.
“From the start of this project, we’ve focused on understanding the real-world challenges facing these patients by visiting them in their homes to perform research,” said Walsh, an assistant professor of mechanical and biomedical engineering and founder of the Harvard Biodesign Lab at SEAS, and a core faculty member at the Wyss Institute. A team of undergraduate students contributed to an early glove design as part of his ES227: Medical Device Design course.
Wyss Technology Development Fellow Panagiotis Polygerinos and Kevin Galloway, a mechanical engineer at the institute, incorporated patient feedback at every stage of development in an effort to maximize the glove’s potential for translation.
“Ultimately, patients have to be comfortable with wearing the glove,” said Galloway. “In addition to glove function, we found that people cared about its appearance, which could have a big impact on whether or not the glove would be a welcome part of their daily routine.”
Walsh’s team adapted the mechanics to make the glove feel more comfortable and natural to wearers. Over several iterations of design, the actuators powering movements were made smaller and were modified to distribute force more evenly over the wearer’s fingers and thumb. The resulting soft, multisegment actuators, which are composite tubular constructions of Kevlar fibers and silicone elastomer, support the range of motions performed by human fingers. The glove’s control system is portable and lightweight and can be worn using a waist belt or attached to a wheelchair.
Now, the team is working to improve control strategies that will allow the system to detect the wearer’s intent. One potential solution is to leverage surface electromyography using small electrical sensors in a cuff worn around the forearm. The electromyography sensors detect residual muscle signals fired by motor neurons when the patient attempts a grasping motion and could be used to directly control the glove.
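One way such an intent-detection scheme could work in principle is sketched below; this is a hypothetical illustration rather than the team’s actual controller, and the window length and threshold are assumed values.

# Hypothetical sketch of EMG-triggered grasp assistance; not the SEAS/Wyss
# team's actual controller. The window size and threshold are assumed values.

from collections import deque

class GraspIntentDetector:
    """Flags grasp intent when the moving average of rectified EMG samples
    crosses a calibrated threshold."""

    def __init__(self, window_size: int = 50, threshold: float = 0.3):
        self.window = deque(maxlen=window_size)
        self.threshold = threshold

    def update(self, emg_sample: float) -> bool:
        self.window.append(abs(emg_sample))             # rectify the raw signal
        envelope = sum(self.window) / len(self.window)  # smooth it with a moving average
        return envelope > self.threshold                # True -> command the actuators

detector = GraspIntentDetector()
stream = [0.05, 0.04, 0.06] * 10 + [0.6, 0.7, 0.65] * 10  # rest, then an attempted grasp
for sample in stream:
    if detector.update(sample):
        print("Grasp intent detected -> pressurize glove actuators")
        break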
“We are continuing to test the design of the soft robotic glove on patients, in relation to making it customizable for the specific pathologies of each individual and understanding what control strategies work best — but we’re already seeing a lot of exciting proof-of-concept experimental results,” said Walsh. “The current goal is to refine the overall system sufficiently so we can begin a feasibility trial with multiple patients later this year.”
Walsh and his team have been helped in their work by George Whitesides, Harvard’s Woodford L. and Ann A. Flowers University Professor, and SEAS’s Robert Wood, Charles River Professor of Engineering and Applied Sciences, who are also Wyss core faculty members.
The design of the glove has been published in the journal Robotics and Autonomous Systems and the team also recently presented it at the International Conference on Robotics and Automation. In August, the team’s electromyography control work will be presented at the International Conference on Robotics Research in Singapore.
Down the road, the team is interested in developing the glove into a rehabilitation tool for various hand pathologies, and in extending the glove’s assistive functions beyond the joints in the hand toward the development of soft robotic systems that aid impaired elbow and shoulder movements, as well.

Mindfulness Can Literally Change Your Brain 06-08


Mindfulness Can Literally Change Your Brain



The business world is abuzz with mindfulness. But perhaps you haven’t heard that the hype is backed by hard science. Recent research provides strong evidence that practicing non-judgmental, present-moment awareness (a.k.a. mindfulness) changes the brain, and it does so in ways that anyone working in today’s complex business environment, and certainly every leader, should know about.
We contributed to this research in 2011 with a study on participants who completed an eight-week mindfulness program. We observed significant increases in the density of their gray matter. In the years since, other neuroscience laboratories from around the world have also investigated ways in which meditation, one key way to practice mindfulness, changes the brain. 
This year, a team of scientists from the University of British Columbia and the Chemnitz University of Technology were able to pool data from more than 20 studies to determine which areas of the brain are consistently affected. They identified at least eight different regions. Here we will focus on two that we believe to be of particular interest to business professionals.
The first is the anterior cingulate cortex (ACC), a structure located deep inside the forehead, behind the brain’s frontal lobe. The ACC is associated with self-regulation, meaning the ability to purposefully direct attention and behavior, suppress inappropriate knee-jerk responses, and switch strategies flexibly. People with damage to the ACC show impulsivity and unchecked aggression, and those with impaired connections between this and other brain regions perform poorly on tests of mental flexibility: they hold onto ineffective problem-solving strategies rather than adapting their behavior. 
Meditators, on the other hand, demonstrate superior performance on tests of self-regulation, resisting distractions and making correct answers more often than non-meditators. They also show more activity in the ACC than non-meditators. In addition to self-regulation, the ACC is associated with learning from past experience to support optimal decision-making. Scientists point out that the ACC may be particularly important in the face of uncertain and fast-changing conditions.
(Source: Tang et al.)
(Source: Fox et al.)
The second brain region we want to highlight is the hippocampus, a region that showed increased amounts of gray matter in the brains of our 2011 mindfulness program participants. This seahorse-shaped area is buried inside the temple on each side of the brain and is part of the limbic system, a set of inner structures associated with emotion and memory.
It is covered in receptors for the stress hormone cortisol, and studies have shown that it can be damaged by chronic stress, contributing to a harmful spiral in the body. Indeed, people with stress-related disorders like depression and PTSD tend to have a smaller hippocampus. All of this points to the importance of this brain area in resilience—another key skill in the current high-demand business world.
(Source: Hölzel et al.)
These findings are just the beginning of the story. Neuroscientists have also shown that practicing mindfulness affects brain areas related to perception, body awareness, pain tolerance, emotion regulation, introspection, complex thinking, and sense of self. While more research is needed to document these changes over time and to understand underlying mechanisms, the converging evidence is compelling.
Mindfulness should no longer be considered a “nice-to-have” for executives. It’s a “must-have”:  a way to keep our brains healthy, to support self-regulation and effective decision-making capabilities, and to protect ourselves from toxic stress. It can be integrated into one’s religious or spiritual life, or practiced as a form of secular mental training.  When we take a seat, take a breath, and commit to being mindful, particularly when we gather with others who are doing the same, we have the potential to be changed.

Want to Get Ahead? Work on Your Improv Skills 06-08


Want to Get Ahead? Work on Your Improv Skills


Kelly Leonard and Tom Yorton of The Second City talk about their new book.


In this interview, part of the Authors@Wharton series, Kelly Leonard and Tom Yorton of The Second City comedy theater argue that improvisational comedy and business have more in common than one might first think. In their new book, Yes, And: How Improvisation Reverses ‘No, But’ Thinking and Improves Creativity and Collaboration, they share their insights on innovation and team-building.





Laura Huang: This book had me laughing, cringing and taking notes — all at the same time. There are so many useful tidbits in here and also some really humorous anecdotes. Can you tell us about your motivations for writing Yes, And?

Kelly Leonard: Fifteen years ago, if you had said Second City was going to put its name on a business book, we would have been like, “You’re insane.” But when Tom started with the company, which was about 16 years ago, he brought with him a fresh light to the way Second City was working with clients, and he really expanded upon the business. The collaboration here was interesting because I’ve been at Second City for 26 years. It’s really stage-meets-business because Tom has a business career and I’m a theater guy. Second City is a 56-year-old theater. But what we really are is an innovation laboratory. Over about 56 years, we have had groups of people working together to create something out of nothing. We are a content creator, and we never stop. They keep doing it in these groups, and we’re very, very successful at it.

At a certain point, you go, “That’s got to be translatable.” Look at all the famous people who have leapt from the stages to the screen. I’d like to think it’s my great eye for talent. But it’s not. Because I wasn’t there when Alan Arkin started. I was there when Tina Fey started. But there’s this long tradition of building talent out of these groups to have success. When we started taking it into businesses, and having more and more success, we turned to each other — this was two, three years ago — and said, “God, we’re idiots if we don’t write this book.”

Tom Yorton: Absolutely. For me, business is an act of improvisation. For all the planning, all the controls, all the governance, and all the things we try to do to keep the variables down, business doesn’t cooperate. The world is a gray place. This improv toolkit that we talk about is really important. It has never been more important than it is now. That was all part of the motivation for writing the book.

Huang: What I was really struck by was the way you were able to put this framework around teaching soft skills. Can you talk a little bit about these soft skills?

Yorton: I don’t think, in any part of my education, anyone ever taught me to listen. Listening is kind of important, it turns out. In fact, it’s vital…. When you improvise, you do practice it. You have to. So, there are specific listening exercises we offer in the book that people can take home with them. But you’ve got to put it into practice. Everyone understands the need to go to a gym to work out your muscles. But where do you go to work out your social skills? Improvisation is yoga for your social skills. It puts you in a mindful, present place, where you’re concentrating with eye contact with the person in front of you. You’re not thinking about before, or about after. When you’re operating “Yes, And,” which is the title of our book, you’re not saying no. You are in agreement and affirming, and you’re building something with someone else. The way you feel after you do that, especially after a three-hour improv class, is the best. If we can bring that best self into our workplace, everything gets better.

Leonard: We talk about the soft skills that separate the stars from the also-rans in business. It’s how to listen, how to read a room, how to work collaboratively on teams, how to respond to failure and how to be nimble and agile and adaptive when the unexpected happens. Those are really foreign skills to many people. You could have all the quantitative skills, and you could have all the strategy skills, and all that other stuff. They are important skills to have — make no mistake — but unless you can work well with an ensemble, create something out of nothing, and respond to the unexpected, you’re only gonna go so far in business.

Mind = Blown! Reality Doesn't Exist? Until We Measure It! Quantum...06-09


Mind = Blown! Reality Doesn't Exist? Until We Measure It! Quantum...

Reality doesn’t exist until we measure it, quantum experiment confirms 




Australian scientists have recreated a famous experiment and confirmed quantum physics's bizarre predictions about the nature of reality, by proving that reality doesn't actually exist until we measure it - at least, not on the very small scale.
That all sounds a little mind-meltingly complex, but the experiment poses a pretty simple question: if you have an object that can either act like a particle or a wave, at what point does that object 'decide'?
Our general logic would assume that the object is either wave-like or particle-like by its very nature, and our measurements will have nothing to do with the answer. But quantum theory predicts that the result all depends on how the object is measured at the end of its journey. And that's exactly what a team from the Australian National University has now found.
"It proves that measurement is everything. At the quantum level, reality does not exist if you are not looking at it," lead researcher and physicist Andrew Truscott said in a press release.
Known as John Wheeler's delayed-choice thought experiment, the experiment was first proposed back in 1978 using light beams bounced by mirrors, but back then, the technology needed was pretty much impossible. Now, almost 40 years later, the Australian team has managed to recreate the experiment using helium atoms scattered by laser light.
"Quantum physics predictions about interference seem odd enough when applied to light, which seems more like a wave, but to have done the experiment with atoms, which are complicated things that have mass and interact with electric fields and so on, adds to the weirdness," said Roman Khakimov, a PhD student who worked on the experiment.
To successfully recreate the experiment, the team trapped a bunch of helium atoms in a suspended state known as a Bose-Einstein condensate, and then ejected them all until there was only a single atom left. 
This chosen atom was then dropped through a pair of laser beams, which made a grating pattern that acted as a crossroads that would scatter the path of the atom, much like a solid grating would scatter light.
They then randomly added a second grating that recombined the paths, but only after the atom had already passed the first grating.
When this second grating was added, it led to constructive or destructive interference, which is what you'd expect if the atom had travelled both paths, like a wave would. But when the second grating was not added, no interference was observed, as if the atom chose only one path.
The fact that this second grating was only added after the atom passed through the first crossroads suggests that the atom hadn't yet determined its nature before being measured a second time. 
So if you believe that the atom did take a particular path or paths at the first crossroad, this means that a future measurement was affecting the atom's path, explained Truscott. "The atoms did not travel from A to B. It was only when they were measured at the end of the journey that their wave-like or particle-like behaviour was brought into existence," he said.
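For readers who want to see the textbook prediction behind the interference-versus-no-interference contrast, the following minimal two-path sketch (with assumed phase values; it is not a simulation of the ANU apparatus) shows why recombining the paths produces fringes while leaving them separate yields a flat fifty-fifty outcome.

# Minimal two-path sketch of the textbook prediction; the phase values are
# assumed, and this is not a simulation of the ANU helium-atom apparatus.

import math

def detection_probability(phase: float, second_grating_present: bool) -> float:
    """Probability of finding the atom at one output port.

    With the recombining (second) grating, the two path amplitudes interfere
    and the probability oscillates with the phase difference. Without it,
    which-path information remains and the probability is a flat 50%.
    """
    if second_grating_present:
        return math.cos(phase / 2.0) ** 2   # interference fringes
    return 0.5                              # no interference

for phase in [0.0, math.pi / 2, math.pi]:
    with_grating = detection_probability(phase, True)
    without_grating = detection_probability(phase, False)
    print(f"phase={phase:4.2f}  with grating: {with_grating:.2f}   without: {without_grating:.2f}")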
Although this all sounds incredibly weird, it's actually just a validation for the quantum theory that already governs the world of the very small. Using this theory, we've managed to develop things like LEDs, lasers and computer chips, but up until now, it's been hard to confirm that it actually works with a lovely, pure demonstration such as this one.

Google Wants You to Control Your Gadgets with Finger Gestures, Conductive Clothing 06-09


Google Wants You to Control Your Gadgets with Finger Gestures, Conductive Clothing



New Google technology addresses the tiny screen problem by letting you control wearables with tiny gestures, or by touching your clothes.

Small gadgets such as smart watches can be frustrating to use because their tiny buttons and touch screens are tricky to operate. Google has two possible solutions for the fat finger problem: control your gadgets by subtly rubbing your finger and thumb together, or by swiping a grid of conductive yarn woven into your clothing.


The first of those two ideas works thanks to a tiny radar sensor that could be integrated into, say, a smart watch and can detect fine motions of your hands from a distance and even through clothing. The second relies on a grid of conductive yarn woven into fabric; Levi Strauss announced today that it is working with Google to integrate fabric touch panels into its clothing designs. The new projects were announced at Google’s annual developer conference in San Francisco Friday by Ivan Poupyrev, a technical program lead in Google’s Advanced Technology and Projects research group.

The current prototype of Google’s radar sensor is roughly two centimeters square. It can pick up very fine motions of your hands at distances from five centimeters up to five meters.
Poupyrev showed how he could circle his thumb around the tip of his forefinger near the sensor to turn a virtual dial. Swiping his thumb across his fingertip repeatedly scrolled through a list.
“You could use your virtual touchpad to control the map on the watch, or a virtual dial to control radio stations,” said Poupyrev. “Your hand can become a completely self-contained interface control, always with you, easy to use and very, very ergonomic. It can be the only interface control that you would ever need for wearables.”
Poupyrev also showed how he could perform the same motion in different places to control different things. He used the scrolling gesture to adjust the hour on a digital clock, then moved his hand about a foot higher and used the same motion to adjust the minutes.
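To picture how such micro-gestures might be routed to different controls, here is a hypothetical dispatch sketch; the gesture names, the height zones, and the mapped actions are illustrative assumptions, since Google has not published this interface.

# Hypothetical dispatch sketch; gesture names, zones, and actions are
# illustrative and do not reflect any published Google API.

GESTURE_ACTIONS = {
    ("dial_rotate", "low"):   "tune radio station",
    ("dial_rotate", "high"):  "pan watch map",
    ("swipe_scroll", "low"):  "adjust clock hour",
    ("swipe_scroll", "high"): "adjust clock minutes",
}

def dispatch(gesture: str, hand_height_cm: float) -> str:
    """Map a recognized micro-gesture plus hand height to a UI action."""
    zone = "high" if hand_height_cm >= 30.0 else "low"   # assumed 30 cm boundary (roughly a foot)
    return GESTURE_ACTIONS.get((gesture, zone), "ignore")

print(dispatch("swipe_scroll", 10))  # adjust clock hour
print(dispatch("swipe_scroll", 45))  # adjust clock minutes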
No details were given on what kind of devices the radar sensor might be built into. But Poupyrev did say the sensors can be mass produced, and he showed a silicon wafer, made by the chip company Infineon, covered in many of the devices.

Google’s woven touch sensor technology is based on a new way to make conductive fiber developed by Poupyrev and colleagues as part of an effort that Google is calling “Project Jacquard.” Conductive yarn was already on the market, but only in the color gray, he said. Google has developed a way to braid slim copper fibers with textile fibers of any color to make conductive yarn that can be used in existing fabric and garment factories just like yarns they use today, said Poupyrev.
“We want to make interactive garments at scale so everyone can make them and everyone can buy them,” he said. Poupyrev showed images of stretchable and semi-transparent fabrics with the touch-detecting yarn woven in.
Rather than being an alternative to a conventional touch screen, the textile touch panels are intended to provide a quicker and subtler way to interact with a phone in your pocket or device on your wrist, for example, to dismiss a notification.
Poupyrev waved his hand over what looked like a swatch of ordinary fabric to show how a grid of conductive yarn woven into it could detect the presence of his hand and also when he touched it with a finger. It could also track two finger touches at the same time.
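To give a flavor of how readings from a woven sensor grid could be turned into touch points, here is a hypothetical sketch; the grid size, readings, and threshold are assumed, and this is not Project Jacquard’s actual signal processing.

# Hypothetical sketch of locating touches on a conductive-yarn grid; the grid
# size, readings, and threshold are assumed and not from Project Jacquard.

def find_touches(grid, threshold=0.5):
    """Return (row, col) cells whose reading exceeds the threshold and is a
    local maximum among the 4-connected neighbours (supports multi-touch)."""
    touches = []
    rows, cols = len(grid), len(grid[0])
    for r in range(rows):
        for c in range(cols):
            value = grid[r][c]
            if value < threshold:
                continue
            neighbours = [grid[nr][nc]
                          for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                          if 0 <= nr < rows and 0 <= nc < cols]
            if all(value >= n for n in neighbours):
                touches.append((r, c))
    return touches

# Simulated readings with two fingers on a 4x4 swatch:
readings = [
    [0.1, 0.2, 0.1, 0.1],
    [0.2, 0.9, 0.2, 0.1],
    [0.1, 0.2, 0.1, 0.8],
    [0.1, 0.1, 0.2, 0.3],
]
print(find_touches(readings))  # [(1, 1), (2, 3)]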
Levi Strauss has agreed to work with Google on integrating the technology into clothing, but no details were given about when touch-responsive clothing might become available to buy.
Poupyrev said they are still working on how to best integrate the electronics, wireless communications, and batteries into a textile touch panel. The only demonstration of how the technology might operate in a garment came in a video in which a Savile Row tailor made a jacket with a touch-responsive patch above the cuff on one sleeve. When a finger swiped across the panel, a nearby smartphone made a call.

What Successful Project Managers Do 06-09


What Successful Project Managers Do

Traditional approaches to project management emphasize long-term planning and a focus on stability to manage risk. But today, managers leading complex projects often combine traditional and “agile” methods to give them more flexibility — and better results.

In today’s dynamic and competitive world, a project manager’s key challenge is coping with frequent unexpected events. Despite meticulous planning and risk-management processes, a project manager may encounter, on a near-daily basis, such events as the failure of workers to show up at a site, the bankruptcy of a key vendor, a contradiction in the guidelines provided by two engineering consultants or changes in customers’ requirements.

Such events can be classified according to their level of predictability as follows: events that were anticipated but whose impacts were much stronger than expected; events that could not have been predicted; and events that could have been predicted but were not. All three types of events can become problems that need to be addressed by the project manager. The objective of this article is to describe how successful project managers cope with this challenge.

Coping with frequent unexpected events requires an organizational culture that allows the project manager to exercise a great amount of flexibility. Here are two examples of advanced organizations that took steps to modify their cultures accordingly.

A group of 23 project managers who had come from all over NASA to participate in an advanced project management course declared mutiny. They left the class in the middle of the course, claiming that the course text, based on NASA’s standard procedures, was too restrictive for their projects and that they needed more flexibility. With the blessing of NASA’s top leadership, the class members then spent four months conducting interviews at companies outside of NASA. This led to a rewriting of numerous NASA procedures. 

Among other things, NASA headquarters accepted the group’s recommendation to give NASA project managers the freedom to tailor NASA’s standard procedures to the unique needs of their projects. A similar movement to enhance project managers’ flexibility occurred at Procter & Gamble, where the number of procedures for capital projects was reduced from 18 technical standards and 32 standard operating procedures to four technical standards and four standard operating procedures.

Concurrent with these changes at NASA and P&G, a heated debate emerged within the wider project management profession regarding the need for flexibility, as opposed to the traditional approach, which emphasizes that project success depends on stability. According to the traditional approach, project success can be achieved by focusing on planning and on controlling and managing risks. Although the popularity of this approach has sharply increased across industries, research covering a wide variety of projects consistently reveals poor performance. A large percentage of projects run significantly over budget and behind schedule and deliver only a fraction of their original requirements.



The other side in this debate is best represented by a newer project management approach popular within the software industry. Called the agile method, it asserts that project success requires enormous flexibility throughout the project’s life. However, even proponents of the agile approach acknowledge that this approach is best suited to small projects and teams.

Our studies, employing experiential data collected from more than 150 successful project managers affiliated with more than 20 organizations, indicate that today’s successful project managers cope with unexpected events by a combination of the traditional and agile approaches, assuming four roles. (See “About the Research.”) Two of the roles are intention-driven and two are event-driven, with each role assumed on its own time schedule throughout the life of the project. The first role, developing collaboration, is performed early on during the project. The second role, integrating planning and review with learning, is performed periodically. The third role, preventing major disruptions, is performed occasionally. The fourth role, maintaining forward momentum, is performed continuously. (See “The Four Roles of the Project Manager.”)

About the Research

In recent years, many researchers have concluded that one reason for the widespread poor statistics about project results is the wide gap between research and practice. The overall objective of our research was to develop a practice-based theory of project management. To this end, we used three complementary approaches to collect firsthand data on the practices of successful project managers. Believing that management is best learned by emulating exemplary role models, we focused our studies on a selective sample of the best practitioners in their respective organizations.

Our first approach consisted of field studies and structured research tools, particularly 40 interviews (two to four hours each) and 20 observations (four hours to a week each) of practitioners in the following organizations: AT&T, Bechtel (the San Francisco-based construction and civil engineering company), DuPont, General Motors, IBM, Motorola, PPL Electric Utilities (an electric utility company based in Allentown, Pennsylvania), Procter & Gamble and Turner Construction Company (a construction services company headquartered in New York City).
For our second approach, we convened project teams and facilitated reflective dialogues in which participants shared their stories and practices from recent projects. We collected most of the cases, stories and practices through our role as the facilitators of the project management knowledge-development and -sharing communities in three organizations. In this capacity, Laufer and Hoffman worked for five years with NASA, Laufer and Cameron worked for three years with P&G and Laufer and Russell worked for two years with Boldt (a construction services company based in Appleton, Wisconsin). Project managers from the following organizations participated in these community of practice meetings: AeroVironment (a technology company based in Monrovia, California), Boldt, The Johns Hopkins University Applied Physics Laboratory, Lockheed Martin, NASA, Procter & Gamble, Raytheon and the U.S. Air Force.
To make sure that the principles we developed were a valid interpretation of the stories we had collected, we adopted a third approach — testing our interim results in real-life situations. Through consulting engagements with four project-based organizations — Boldt, Parsons Brinckerhoff (the multinational engineering and design firm headquartered in New York City), Skanska (the Scandinavian construction and property development group) and Turner Construction — we validated and refined our understanding and developed the four-role framework presented in the current article. 
We then tested and refined this framework in our work with the Boldt project management knowledge-development and -sharing community. The model presented in this article is the result of a final refinement process, which included a series of interviews with 10 project managers and 10 senior managers. We held these interviews (two to three hours long) with a carefully selected group of practitioners from companies that represented a variety of industries, including Cedars-Sinai Medical Center, NASA, PricewaterhouseCoopers, P&G and the U.S. Air Force.

1. Develop Collaboration

Since project progress depends on the contribution of individuals who represent different disciplines and are affiliated with different parties, collaboration is crucial for the early detection of problems as well as the quick development and smooth implementation of solutions. The importance of collaboration can be demonstrated by the following two examples in which projects failed.
Tim Flores analyzed the causes for the different outcomes of three Mars exploration missions initiated by NASA’s Jet Propulsion Laboratory: Pathfinder, Climate Orbiter and Polar Lander. Although all three projects were conducted under the same guiding principles, were of comparable scope and shared many elements (even some of the same team members), Pathfinder was a success, whereas the other two missions failed. Flores expected to find that the Pathfinder project differed from the other projects in a variety of factors, such as resources, constraints and personnel. 
Although this was true to some extent, he found that the primary factor distinguishing the successful mission from the failed missions was the level of collaboration. The Pathfinder team developed trusting relationships within a culture of openness. Managers felt free to make the best decisions they could, and they knew that they weren’t going to be harshly punished for mistakes. That trust never developed in the other two projects.
A different NASA project, the Wide-Field Infrared Explorer (WIRE) mission, was designed to study the formation and evolution of galaxies. Its telescope was so delicate it had to be sealed inside a solid hydrogen cryostat. When, shortly after launch, a digital error ejected the cryostat’s cover prematurely, hydrogen was discharged with a force that sent the Explorer craft tumbling wildly through space, and the mission was lost.
Jim Watzin, a project manager at NASA and a member of the WIRE project team, had this to say regarding the official report that NASA issued following the WIRE failure: “WIRE failed because people could not or would not communicate well with each other. … Individuals ... simply were uncomfortable allowing others to see their work.” Watzin added: “The real [lesson] from this loss is that any team member that does not participate as a true team player should be excused [from the project].”
In the next two examples, project success can be attributed to the project manager’s deliberate attempt to develop collaboration. (Note that in the discussions that follow, we use only the project managers’ first names.)
Allan, the payload manager for NASA’s Advanced Composition Explorer project at the Jet Propulsion Laboratory, has described how he developed trust between his team and the 20 groups of scientists developing instruments for the project, who were based at universities throughout the United States and Europe. Allan devised a three-stage plan. First, he selected team members who could operate in a university environment — people who knew when to bend or even break the rules. Second, he relocated his JPL team to a university environment (California Institute of Technology), recognizing that it might be difficult to develop an open, flexible culture at JPL. Third, he came up with an uncommon process for interacting with the scientists.
The challenge, with regard to interaction, was getting the scientists to regard his JPL team as partners. Having dealt with NASA before, they tended to believe that someone coming from JPL would demand a lot of paperwork, lay out sets of rules to be followed and expect things to be done a certain way. In fact, many of the scientists weren’t sure they should share with Allan’s team the problems they were encountering along the way — problems that could slow down the project’s progress.
The primary role of Allan’s team was to review the development of the instruments, and Allan believed that the best way to do this was by focusing on trust and convincing the scientists that his team was there to help them solve their problems. To facilitate this, Allan and his team of five to eight members traveled to each university and stayed on site for an extended period of time. By spending days and nights with the scientists and helping them solve their problems — not as auditors but as colleagues — the JPL team gradually became accepted as partners.
Most projects are characterized by an inherent incompatibility: The various parties to the project are loosely coupled, whereas the tasks themselves are tightly coupled. When unexpected events affect one task, many other interdependent tasks are quickly affected. Yet the direct responsibility for these tasks is distributed among various loosely coupled parties, who are unable to coordinate their actions and provide a timely response. Project success, therefore, requires both interdependence and trust among the various parties.
However, if one of the parties believes that project planning and contractual documents provide sufficient protection from unexpected problems, developing collaboration among all the parties may require creative and bold practices.
This was the case in a large construction project that P&G launched at one of its European plants. After the contractor’s project manager, Karl, brushed off numerous team-building efforts, Pierre, the P&G project manager, finally found an opportunity to change Karl’s attitude. Three months into construction, the contractor accidentally placed a set of foundations 10 inches inside the planned periphery and poured about 600 lineal feet of strip foundation in the wrong place. Instead of forcing the contractor to fix his mistake and start over — a solution that would have damaged the contractor’s reputation and ego — Pierre chose a different approach. 
Through several intensive days of meetings and negotiations with the project’s users and designers, he was able to modify the interior layout of the plant, thereby minimizing damage to the users without having to tear down the misplaced foundations and hurt the project’s schedule. The financial cost of making the changes incurred by the contractor’s mistake was significant, but the loss in reputation was minimal. As a result, Karl gradually embraced Pierre’s working philosophy — namely, “If they fail, we fail.” The realization that the organizations involved in the project are all interdependent led to the development of a collaborative relationship.

2. Integrate Planning and Review With Learning

Project managers faced with unexpected events employ a “rolling wave” approach to planning. Recognizing that firm commitments cannot be made on the basis of volatile information, they develop plans in waves as the project unfolds and information becomes more reliable. With their teams, they develop detailed short-term plans with firm commitments while also preparing tentative long-term plans with fewer details. To ensure that project milestones and objectives are met, these long-term plans include redundancies, such as backup systems or human resources.
One key difference between the traditional planning approach, in which both short- and long-term plans are prepared in great detail, and the rolling wave approach becomes evident when implementation deviates from the plan. In the traditional planning approach, the project team attempts to answer the question: Why didn’t our performance yesterday conform to the original plan? In the rolling wave approach, project managers also attempt to answer the question: What can we learn from the performance data to improve the next cycle of planning? In particular, they attempt to learn from their mistakes — to prevent an unexpected event from recurring.
Successful project managers do not limit the learning process to the planning phase but also use it for project reviews. For example, after a review session in the midst of a project at NASA’s Goddard Space Flight Center, Marty was a frustrated project manager. The existing review process may have fulfilled upper management’s need to control its operations, but Marty felt it did not fulfill his team’s need to learn. Therefore, he modified the process to give his team the best input for identifying problems and the best advice for solving them. This meant doing away with the usual “trial court” atmosphere at NASA review sessions, where team members’ presentations were often interrupted by review board members’ skeptical comments and “probing the truth” questions. In its place, Marty developed a review process that provided feedback from independent, supportive experts and encouraged joint problem solving rather than just reporting.
The first thing Marty did was unilaterally specify the composition of the review panel to fit the unique needs of his project, making sure that the panel members agreed with his concept of an effective review process. The second thing he did was change the structure of the sessions, devoting the first day to his team’s presentations and the second day to one-on-one, in-depth discussions between the panel and the team members to come up with possible solutions to the problems identified on the first day. This modified process enabled Marty to create a working climate based on trust and respect, in which his team members could safely share their doubts and concerns. At the end of the second day, the entire panel held a summary meeting. It was agreed that the review session had been a big success. In fact, other NASA project managers quickly adopted Marty’s process, including it in their managerial tool kits.
Successful managers of more traditional projects, such as designing and building manufacturing facilities, also practice learning-based project reviews. P&G has replaced review panels composed of external experts or senior managers with peer-review panels. These last four to eight hours and follow a simple protocol: First, the project team concisely communicates its technical and execution strategies, and then the floor is opened to all the invited peers for comments, critique and clarifying questions. Out of the numerous notes documented throughout the review process, five to 10 “nuggets” usually emerge that the project team uses to improve the technical, cost and scheduling aspects of the project. Sometimes, the invited peers even take one or two of the “nuggets” back to their own projects.

3. Prevent Major Disruptions

In their book Great by Choice, Jim Collins and Morten T. Hansen describe one of the core behaviors of great leaders as “productive paranoia.” Even in calm periods, these leaders are considering the possibility that events could turn against them at any moment and are preparing to react. Similarly, successful project managers never stop expecting surprises, even though they may effect major remedial changes only a few times during a project. They’re constantly anticipating disruptions and maintaining the flexibility to respond proactively. The following two examples illustrate that, when convinced that a change is unavoidable, a successful project manager acts as early as possible, since it is easier to tackle a threat before it reaches a full-blown state.
NASA’s Advanced Composition Explorer project, discussed earlier, was plagued from the start with severe financial problems arising from internal and external sources. Internally, the development of the nine scientific instruments led very quickly to a $22 million cost overrun. Externally, the project, which was part of a larger NASA program, inherited part of a budget overrun in an earlier project. As a result of these internal and external factors, the ACE project experienced frequent work stoppages, forcing the manager to constantly change his contractors’ and scientists’ work priorities.
Don, the project manager, believed that without immediate changes the project would continue down the same bumpy road, with the likely result that cost and time objectives would not be met. To prevent this, he made an extremely unpopular decision: He stopped the development of the instruments, calling on every science team to revisit its original technical requirements to see how they could be reduced. In every area — instruments, spacecraft, ground operation, integration and testing — scientists had to go back and ask such questions as: How much can I save if I take out a circuit board — and how much performance will I lose if I do take it out?
At the same time, Don negotiated a new agreement with NASA headquarters to secure stable funding. To seal the agreement, he assured them that, by using descoping tactics, the project would not go over budget. With the newly stable budget and the project team’s willingness to rethink its technical requirements, the ACE project gradually overcame its technical and organizational problems. Completed early and below budget, the spacecraft has provided excellent scientific data ever since.
The second example of preventing a major disruption from occurring took place during the Joint Air-to-Surface Standoff Missile, or JASSM, project. In this case, the Pentagon had decided to make another attempt to develop JASSM after the first attempt was aborted due to a cost overrun of more than $2 billion. The original project manager for the second attempt was dismissed in midcourse due to poor performance, and a new project manager, Terry, replaced him.

To keep costs under control, Terry decided to have two contractors compete for the final contract. Terry quickly realized that both contractors were approaching the development too conservatively and that unless he took a more radical approach, the project would be canceled again. Therefore, he told the contractors to completely disregard the military standards and adhere to only three key performance parameters. One of the contractors, Lockheed Martin, took this directive seriously and changed its approach dramatically. It decided to build the missile fuselage not out of metal but out of composites. And to accomplish this, it found a company that made baseball bats and golf club shafts.

The company had never built a military product, but it knew how to weave carbon fiber and was open-minded. Following trials with several prototypes, this company was able to manufacture a product of the highest quality. Lockheed Martin transformed this small company from a baseball bat provider to a cruise missile supplier, which led to Lockheed Martin winning the contract — as well as to remarkable cost reductions.

4. Maintain Forward Momentum

As noted earlier, when unexpected events affect one task, many other interdependent tasks may also be quickly impacted. Thus, solving problems as soon as they emerge is vital for maintaining work progress. As Leonard R. Sayles and Margaret K. Chandler wrote in their 1971 book Managing Large Systems, “In working to maintain a forward momentum, the manager seeks to avoid stalemates. Another penalty for waiting is that in a good many situations, corrective action is possible only during a brief ‘window.’ … The heart of the matter is quickness of response.” In a study of project managers on construction sites, it was found that they addressed (not necessarily solved) 95 percent of the problems during the first seven minutes following problem detection.
In a recent knowledge development meeting, a group of 20 project managers at The Boldt Company, a construction services company based in Appleton, Wisconsin, focused on how best to cope with unexpected events. It became evident that most of the managers employed three complementary practices: hands-on engagement; frequent face-to-face communication; and frequent moving about.
Regarding hands-on engagement, one project manager, Charlie, said that to solve problems he often engaged in activities such as making phone calls, convening urgent meetings and taking trips to local retail stores to purchase missing parts. Documenting the time it took him to resolve 10 recent problems, Charlie reported that three were resolved within 30 minutes, three within 60 minutes, and three in less than one day; one problem took two days until it was resolved. Charlie also said that, because of his quick responses, he made one mistake. However, he was able to quickly repair its damage the following day. The entire group at Boldt agreed that maintaining forward momentum was more important than always being right.
The second practice, frequent face-to-face communication, was described by Matt, one of the project managers, in terms of “daily 10-minute huddles” with all the on-site team members (the superintendent, field engineers, project coordinator and safety officer). Matt used these informal morning meetings to share the latest instructions from the client and to ensure that team members understood one another’s current workloads and constraints and understood how they could help one another. Very often, the meetings enabled the team to identify and resolve conflicting priorities before they became problems. Matt noted that, while the primary purpose of the huddle was to update everyone, it also reinforced a spirit of camaraderie and a sense of shared purpose. As a result, these meetings turned out to be very valuable for sustaining teamwork.
As for the third practice, frequent moving about, one project manager, Tony, described the three primary outcomes of spending 30 minutes a day roaming around the project site. First, he was able to develop rich and open communication with his team members. Tony explained that while many workers did not feel safe asking him questions during various formal meetings, they felt very comfortable interacting with him freely during his on-site visits, which had a great impact on their motivation. Second, receiving immediate information, and in particular a greater range of information, enabled him to identify problems early on. At times, he was able to detect conflicts before they actually became an issue. Third, Tony developed a much better understanding of where the project was with respect to the schedule, rather than having to take someone’s word for it. He found that coming to the weekly and monthly planning and scheduling meetings equipped with firsthand, undistorted information allowed him to address questions and solve problems much better. The Boldt project managers did not agree on the preferred timing for moving about and, in particular, whether one should schedule the visits, as Tony did, or leave their timing flexible. However, they all agreed that moving about is a most effective practice that should be applied as often as possible.
These three practices are not limited to construction projects. For example, in the previously mentioned JASSM project, which was geographically dispersed, all three practices necessary to maintain forward momentum were employed by the various project managers at each production site. Additionally, Terry, the customer’s project manager, spent much of his time moving about between all the different production sites.

Implications for Senior Managers

Although every project manager tries to minimize the frequency and negative impact of unexpected events, in today’s dynamic environment such events will still occur. Acknowledging the emergence of a problem is a necessary first step, allowing the project manager to respond quickly and effectively. Some organizations assume that almost all problems can be prevented if the project manager is competent enough — resulting in project managers who are hesitant to admit that they are facing an emerging problem. In fact, a recent study indicates that project managers submit biased reports as often as 60 percent of the time. When upper management fosters an organizational climate that embraces problems as an inherent part of a project’s progression, project managers are able to detect and resolve problems more successfully.
Management scholar Henry Mintzberg argues that today’s managers must be people-oriented, information-oriented and action-oriented. In contrast, the two prevailing project management approaches, the traditional approach and the agile approach, do not require project managers to encompass all three orientations. The traditional approach (primarily intention-driven) stresses information, whereas the agile approach (primarily event-driven) stresses people and action.
By assuming the four roles discussed in this article, the successful project managers we studied are both intention- and event-driven and embrace all three orientations. Developing collaboration requires them to be people-oriented. Integrating planning and review with learning requires them to be information-oriented. Preventing major disruptions requires them to be action-oriented. Finally, maintaining forward momentum, which is pursued throughout a project, requires them to adopt all three orientations. Senior managers must ensure that all three orientations are considered when selecting project managers and developing project management methodologies.



Staying in the Know 06-10


Staying in the Know


In an era of information overload, getting the right information remains a challenge for time-pressed executives. Is it time to overhaul your personal knowledge infrastructure?

A common thread runs through many recent corporate setbacks and scandals. In crises ranging from BP’s Deepwater Horizon oil spill debacle to the Libor rate-fixing scandal in the City of London, the troubles simmered below the CEO’s radar. By the time the problems were revealed, most of the damage had arguably already been done. Despite indications that large companies are becoming increasingly complicated to manage, executives are still responsible for staying abreast of what’s going on in their organization. But how do you keep tabs on what your competitors and employees are doing? How do you spot the next big idea and make the best judgments? How do you distinguish usable information from distracting noise? And how do you maintain focus on what’s critical?

Many management experts have assumed that better information systems and more data would solve the problem. Some have pushed for faster and more powerful information technologies. Others have put their faith in better dashboards, big data and social networking. But is better technology or more tools really the most promising way forward? We think not. In this article, we maintain that the capacity of senior executives to remain appropriately and effectively knowledgeable in order to perform their jobs is based on a personal and organizational capability to continually “stay in the know” by assembling and maintaining what we call a “personal knowledge infrastructure.” And while information technologies may be part of this personal knowledge infrastructure, they are really just one of the components.

We are not the first researchers to make this claim. More than 40 years ago, organizational theorist Henry Mintzberg suggested that information was central to managerial work and that the most important managerial roles revolved around information (monitoring, disseminating and acting as a spokesperson). Mintzberg described managers as the nerve centers of organizations and said informational activities “tie all managerial work together.” Other researchers suggested that management itself could be considered a form of information gathering and that we are quickly moving from an information society to an attention economy, where competitive advantage comes not from acquiring more information but from knowing what to pay attention to. Later research confirmed that dealing with information is critical and found that managers’ communication abilities are directly related to their performance.

While the importance of informational roles and activities is well established, we take the idea a step further, arguing that managers — and especially senior executives — are only as good at acquiring and interpreting critical information as their personal knowledge infrastructures are. Managers rely on specific learned modes to manage and allocate their attention. However, how we pay attention is not simply a matter of internal mental processes that we can do little about. Rather, attentiveness (in other words, the capacity to stay on top, and the ability to distinguish between what matters and what doesn’t) mostly stems from what managers do or don’t do, whom they talk to and when, and what tools and tricks of the trade they use. In short, attentiveness relies on and is facilitated by things we can observe — and things we can do something about.

Technologies and new tools are not and cannot be “silver bullet” solutions. At times, simpler things such as talking to customers or networking with board members may be more important, provided they are done methodically and with some purpose. Selecting when particular elements are appropriate depends on the circumstances. As a result, understanding and, when needed, overhauling one’s personal knowledge infrastructure should be routine. In this article, we explain how this can be done, drawing on insights obtained by shadowing individual CEOs as they went about their daily jobs.

About the Research

To uncover how top executives deal with information and knowledge, we conducted an observation-based exploratory study using a rigorous ethnographic protocol successfully employed in the past. We followed seven chief executives through their working days for several weeks. We went where they went, watched what they did, listened to what they and others said and asked what was going on when we did not understand. We also discussed our findings with them and with invited colleagues as part of structured feedback sessions.

Our sample comprised seven CEOs of acute and mental health organizations that are part of the National Health Service in England. In England, health care is provided by public sector bodies called trusts. Our sample included organizations that run multiple hospitals, have an annual budget of more than 500 million pounds and have up to 10,000 employees. The CEOs have both legal and financial responsibility. The sample included both men and women (3:4). The CEOs had diverse professional backgrounds (NHS management, private sector, nursing and medical) and were at different points in their careers, both in terms of tenure in their present post and overall experience at the CEO level. The sample also included organizations with different performance levels according to indicators by which their performance was monitored by national regulators (for example, financially sound vs. struggling).

CEOs were observed for five or more weeks, apart from one subject, where observations lasted 3½ weeks. The researchers had good access to the CEOs and were able to document nearly all aspects of their work, with exceptions such as one-to-one supervisory meetings with junior colleagues, HR-related meetings concerning individuals and private meetings with patients. When CEOs worked from home, data was collected in interviews afterwards. We conducted semi-structured interviews with five of the CEOs and a number of informal interviews with the other two CEOs. In addition, we conducted two formal interviews with two different personal assistants, which were recorded and lasted approximately half an hour each. Additional data came from meeting papers, articles referenced by the CEOs and copies of publications consulted, and they were supplemented by externally available information such as annual trust reports and regulatory documents. Following the study, our results were shared with two groups of CEOs at dedicated sector events. Those CEOs helped us to refine our findings and elaborate on the notion of the personal knowledge infrastructure.

Our research is based on a two-year study of the day-to-day work of seven CEOs of some of the largest and most challenging hospital- and mental health-based organizations in England. (See “About the Research.”) We chose to study health-care executives because they sit at the crossroads between the private and public sectors and therefore are expected to meet multiple, often competing, demands. To say that the informational landscapes in these organizations are complex is an understatement. Yet the organizations are increasingly subject to pressures to become more transparent, even as they compete with each other. Therefore they seemed to be a good choice as settings for studying the challenges of using information and knowledge to stay on top and ahead of the curve. Throughout our research we sought to answer a simple question: How did the CEOs know what they needed to know in order to be effective at their jobs?

“Nothing but Talking”

One of the first things that struck us was that, in contrast to the popular image of CEOs as lonely, heroic decision makers, the individuals we studied did not seek information or utilize discrete pieces of evidence for the purpose of making decisions. Rather, they often sought something much more ordinary: to make themselves knowledgeable in order to be ready for any eventuality, so that they could understand what to do next. Indeed, one of their main preoccupations appeared to be staying on top of what was happening within and around their organizations. As one put it: “The worst thing for a CEO is to find yourself asking after the fact: How could this happen without me knowing?”

Notably, staying on top was not a separate activity in addition to what the CEOs already did, but rather something they mostly did without thinking and without noticing — or something that they achieved while doing something else. Calling a former colleague who was in the news for the wrong reasons (an accident, a bad report from inspectors, protests about the closure of a loss-making hospital) can produce multiple outcomes: reinforcing a relationship and demonstrating solidarity, but also finding out what is going on. Indeed, many of the CEOs had difficulty acknowledging that checking in with people was an integral part of their job — hence the often-heard comment, “I do not know what has happened to my workday … It seems I have done nothing but talking.”

The Personal Knowledge Infrastructure

The CEOs we studied didn’t leave the process of staying informed to chance. Rather, they relied on a habitual and recurrent set of practices, relationships and tools of the trade, which constituted a personal knowledge infrastructure that supported them in their daily tasks of understanding, foreseeing and managing. This tacit and rarely discussed infrastructure, which was very different from their IT system, helped them to know what needed to be done and to get a sense of the right way forward. What made some CEOs more effective than others was not merely the characteristics of the individual components of their personal knowledge infrastructure but also the quality of the whole and its fit with the specific needs of the job. This personal knowledge infrastructure comprised three main elements: routine practices, relationships, and tools and technologies.

First, every CEO had a set of routine practices she or he relied on — things such as checking the morning news, running periodic review meetings, dropping by immediate collaborators’ offices to ask what was often “just a quick question,” walking around and occasionally even going to the cafeteria “to check how things are going.” These practices were not just internal. The CEOs also met with board members and managers of other organizations, attended conferences and staff events and participated in ceremonial functions such as charity events. Some of the gatherings were framed as leisure opportunities (having a drink, playing golf), but they weren’t entirely social. CEOs returned from such events with information and news they shared with various associates. Similarly, sitting on boards of other organizations was often seen as a necessary evil that helped the CEOs get a broader overview of what was going on beyond their organization.

Second, the personal knowledge infrastructure contained a number of social relationships. Like prior researchers, we found that most CEOs’ work was conducted verbally and was accomplished with and through others. Our CEOs used their relationships both to gather information and to make sense of it.7 For example, every CEO we studied carefully cultivated strategic relationships within and outside the organization. These relationships were usually engineered to produce a combination of breadth and depth of intelligence. Their network of contacts constituted a form of social capital that had been accumulated over time. On many occasions, we observed CEOs interacting with long-term colleagues, previous members of their staff and people with whom they had done business. They used their contacts to gather weak signals (on the principle that today’s gossip can become tomorrow’s news), triangulate information and confirm or contradict their evolving insights. Some of the CEOs were extremely strategic and nurtured relationships with various stakeholders (for example, management consultants, politicians or local leaders), whom they saw regularly for dinner or a drink. Some CEOs also relied on small groups of peers, whom they met with on a regular basis. These groups of peers, who were often also “comrades in adversity” facing similar challenges, operated both as a support group and as a precious setting where sensitive information was exchanged on the basis of reciprocity.

However, not all of the contacts were treated the same way.8 The CEOs appeared to have an informal hierarchy: those who were more distant, who could be used as a source of signals and needed to be taken with a grain of salt; those who were trusted and tended to provide reliable intelligence (for example, board members or colleagues, as well as their assistants); and finally, those with whom the various streams of information could be discussed and processed — the inner circle. All of the CEOs in our sample relied heavily on such an inner circle, usually composed of selected executive team members with whom they had the most intense interactions. The CEOs used these individuals not only to obtain information, often informally (given the open-door policy that was in operation for most CEOs), but also as sounding boards to test emerging understandings and reconcile possibly competing insights. These interactions allowed the CEOs not only to connect the dots but, more importantly, to figure out which information qualified as a dot that had to be connected further.

Finally, the CEOs’ personal knowledge infrastructures included a variety of tools of the trade. These included traditional tools such as phone, email, reports and journal articles from industry magazines, as well as less traditional sources such as Twitter, blogs and other social media. Most CEOs utilized some form of electronic reporting system or audit-based dashboard that helped them track critical performance indicators, and most consulted these tools regularly, but the sophistication of the tools varied substantially. (See “Components of a Personal Knowledge Infrastructure.”)

Although most CEOs had a small pile of “will read” books in their office, they rarely had time for books or magazines during the course of a working day. Personal preferences played an important role here, more so than in the two previous categories. For example, while some CEOs relied heavily on mobile phones for calls and texts, others used email almost exclusively. Contrary to our expectations, most of the CEOs dealt personally with a range of emails — this was how their work was done. Some CEOs made very little use of written documents and required short summaries. Others wanted to have complete documentation “just in case.” Some CEOs still found comfort in printed paper; few were happy to go paperless.

Components of a Personal Knowledge Infrastructure

To stay on top of things, CEOs need a combination of three elements: people who feed them information and act as sounding boards; routine activities; and technologies that allow them to track performance and pick up signals. These elements constitute a CEO’s personal knowledge infrastructure. What comes to the CEO’s attention and what stays under the radar depends on the makeup of this infrastructure.

One critical aspect of personal knowledge infrastructures was the extent to which their individual elements were designed to support one another. For example, CEOs who liked to run large formal meetings also invested significant time in social relationships, consulting collaborators on a one-to-one basis. The CEO of an organization that was struggling with issues of trust and hidden or misplaced information worked to triangulate soft data with hard knowledge. This often required him to follow up with people individually (for example, phoning staff members directly to corroborate information or requesting documentation from outside sources) while also working to set up formal structures that were currently lacking.

How the Personal Knowledge Infrastructure Evolves

Although all of the CEOs relied on a personal knowledge infrastructure, how these infrastructures were composed varied greatly. We saw differences across all seven CEOs both in which elements their personal knowledge infrastructures contained and in how much emphasis each element received. Two examples illustrate how different personal knowledge infrastructures corresponded to specific leadership styles in different situations.

CEO 1: Knowing the Details in a Struggling Organization

A newly appointed CEO was running a struggling hospital-based organization that was receiving increased regulatory attention for financial reasons. His personal knowledge infrastructure was designed to help him closely monitor his organization. He ran large, often long, weekly management meetings that provided an opportunity for all team members to examine operations and share and obtain a wealth of information. After the formal meetings, conversations continued in the executive offices. The CEO spent the bulk of his time in regular meetings with local managers of health-care organizations and funding agencies, picking up signals and providing insight into the work and progress of his organization. He also spent time working on wards and visiting and talking to staff. Moreover, he cultivated a wide network of colleagues whom he often consulted in rather informal ways and maintained external links to support his key strategic tasks. He had an open-door policy, both as a symbol of change and as a permanent invitation. He felt comfortable digging through reports and documents, and he set aside time on trains to go through what he called “the train pile of documents.” Though the CEO used the phone and the Internet, he liked to attend conferences and networking events to develop a broad view of the business environment.

CEO 2: Managing Via a Mix of the Formal and Informal

The second CEO, who had been in his position for more than five years, used a very different set of practices and tools. He worked with an established team to run an organization that prided itself on its ability to achieve operational excellence and strategic growth. Throughout the day, this CEO had a series of chats with executives, which often expanded into conversations between him and several people. Indeed, much of his working day was spent in what appeared as free-form interaction: sharing information informally, only sporadically framed by discussion about an immediate problem concerning a patient or a medical concern. 

This CEO rarely attended local meetings, so the other executives were an information gateway to local strategic issues for him. However, the headquarters-based, executive-team orientation was reinforced by several other structures, relationships and tools, purposefully arranged by the CEO so he could remain in the know. First, there was an executive who the CEO felt had a very different approach from the others, which gave him another voice and view to consider. The CEO also had an IT performance system, which he consulted every morning and which allowed him to identify any serious performance issues in the organization without needing to rely on reports from executives. 

The CEO supplemented such insights with visits to wards and other areas of the hospitals late in the evening and on weekends, which allowed him to gain informal insights from veteran staff. The internal systems were supported by national-level policy work and involvement via leadership positions in sector organizations and initiatives and networking. These allowed the CEO to both formally and informally stay in the know regarding strategic issues of potential relevance, and also to influence their direction to the benefit of his organization.

What Makes It Personal

As we have seen, different CEOs use different knowledge infrastructures that reflect both what they personally need and where their organizations are at a particular point in time. In each case, the context was particularly relevant. We saw different types of knowledge infrastructures (in other words, combinations of tools, practices and relationships) in relation to seven factors:

The CEO’s Experience

More experienced CEOs often had a more defined personal style that they carried with them when they changed jobs. Some had specific practices that they tried to reactivate in the new workplace and a network of contacts that constituted the social capital they had accrued over the years; we saw this in the case of a CEO facing an operational challenge in his new organization, when he called a former colleague for advice.

Job Tenure

The more time CEOs spent in the same organization, the more they learned, often the hard way, about sources they could trust and how they could make these sources work, given their existing infrastructures and approaches to work.

Makeup of the Executive Team and Board

The composition of the top management team, how competent its members were perceived to be and how well they worked as a team affected the makeup of the inner circle. CEOs often included in their inner conversational circles directors who were easy to talk to or were particularly good at collecting and relaying intelligence. However, many CEOs, like the second CEO discussed above, also saw value in having friendly “devil’s advocates” on staff who were able to present different views and act as meaningful counterweights.

Organizational Conditions and Pressures

Organizations with different financial, efficiency, quality and safety environments posed different issues for the CEOs. When the conditions changed, they had to retune their antennae accordingly. A particular challenge involved the tools and technologies available. Systems can be powerful but are costly and difficult to change. Most CEOs worked to modify and develop existing systems but often they didn’t have a lot of room to make immediate changes. They worked with what they had while instigating long-term interventions so that the system would suit them rather than the other way around.

Strategic Vision

Entering new markets or introducing new products or services required CEOs to adapt their personal knowledge infrastructure accordingly. For instance, a CEO facing a possible merger began adding M&A events to his calendar.

Economic, Competitive and Regulatory Environment

The macro environment determined whether a CEO’s personal knowledge infrastructure was appropriate. Changes in the environment forced CEOs to adapt their existing personal knowledge infrastructure and reactivate old relationships.

The Kind of Manager the CEO Wants to Be

Ultimately, the above factors were filtered through the prism of “what kind of manager I would like to be.” For instance, a CEO who valued transparency and closeness to his organization’s users established a strong presence on social media and utilized this channel to garner insights into the experience of patients and their families. This sometimes allowed him to identify problems (such as low quality of service in a particular location or low staff morale) before his managers reported them.

Taken together, the factors above suggest that effective personal knowledge infrastructures tend to be unique and personal and conform to the preferences of the manager. They need to be continually adapted, tweaked and refined in keeping with the shifting nature of the CEO’s job, the environment and new opportunities.

Although most of our CEOs were reasonably successful, everyone saw room for improvement. Indeed, a CEO’s effectiveness was a reflection of his or her situation and person-specific alignment. For example, one CEO had spent years building a sophisticated IT performance monitoring system. Another CEO didn’t see having such a system as a priority; in his view, being an effective manager entailed moving away from operational considerations and focusing more on strategic and systemwide issues. 

The challenge of changing as the organization changes was highlighted by several CEOs: The personal knowledge infrastructure that serves you well during a period of crisis and turmoil may get in the way in calmer waters. The lesson is that there is no single best personal knowledge infrastructure. Through personal reflection, managers and CEOs need to learn how to ask themselves difficult questions regarding the quality and fit of the practices, tools and relationships that they rely on to become knowledgeable. They also need to develop structured ways of asking such questions consistently and over time — rather than waiting for something to go terribly wrong.

Four Traps

The quality and fit of the CEO’s personal knowledge infrastructure is critical because it determines how he or she sees the world and defines himself or herself as a manager and CEO. It is the prism through which managers understand what is going on, and it provides the horizon of information sources through which this understanding will be probed and evaluated. However, a poorly designed personal knowledge infrastructure can lock the manager inside an information bubble and create information biases and blind spots.9 Managers may only realize this when something happens that was not on their radar or when an incident exposes the misalignment between the current demands and needs of their job and their own role. By closely examining the work practices of our CEOs over time, we identified four potential traps:

1. Not Obtaining the Information You Need

Although conventional wisdom suggests that the main problem for today’s executives is too much information, the real problem is not enough relevant information. Due to insufficient monitoring, an inappropriate mix of monitoring practices, inadequate or insufficient social relationships, and information overload, managers can find themselves without the information they need.


2. Developing a Personal Knowledge Infrastructure That Points You in the Wrong Direction

A typical problem with personal knowledge infrastructures is that they can be poorly aligned with the demands of the job. For example, if a CEO wants to foster innovation but the infrastructure informs her about operational issues only, the CEO is likely to focus on things that aren’t of primary importance. A personal knowledge infrastructure not only reflects the rules of attention but also shapes those rules. Researchers have highlighted lessons from spectacular failures of the past, from the Challenger space shuttle disaster to the global financial crisis.10 Many of the managers in question were completely current on the wrong information — or information about the wrong things.

3. Setting Up a Personal Knowledge Infrastructure That Is Not “You”

A manager’s personal knowledge infrastructure can clash with his management style: with what he does, the tools he uses and the type of manager he would like to be. In our study, we observed a CEO who wanted to be a manager who delegated. However, his personal knowledge infrastructure systematically drove him to focus on details, which led him to take a hands-on approach — against his best intentions. The most effective managers we observed were those who reshaped their personal knowledge infrastructure to fit their work, their management style and what they considered important.

4. Starting With Technology Rather Than Personal Need

Last, some managers make the mistake of addressing the issue from the wrong end — considering the technology before the personal need. Personal knowledge infrastructures need to be geared toward personal development, not toward buying new technologies. Rather than asking, “Is this technology good?,” CEOs should ask, “Will it do any good for me?”

Improving Your Personal Knowledge Infrastructure

So how do managers improve their personal knowledge infrastructures? We found that although CEOs easily recognize the importance of their personal knowledge infrastructure, they very rarely pause to reflect on its effectiveness and fit. More often than not, they discover its inadequacies through comparison with others’ practices or, more commonly, following breakdowns and failures. Developing, refining and testing the effectiveness or present fit of your personal knowledge infrastructure should be routine. (See “How to Improve Your Personal Knowledge Infrastructure.”)

How to Improve Your Personal Knowledge Infrastructure

For CEOs or other executives concerned about improving their personal knowledge infrastructure, we have developed six steps designed to initiate learning and reflection. Examine the following:


There is a great deal that individuals can do for themselves. The starting point is being aware of the composition and functioning of your personal knowledge infrastructure and also being candid about its internal contradictions, potential misfits and misalignments. In our study, we found that this was best done through discussions with others: a mentor, a coach, colleagues or a trusted counselor. After all, your personal knowledge infrastructure is very much a part of you. Having a personal knowledge infrastructure in place is one thing; being honest about how well-suited it is to your particular circumstances is very different.

To this end, in addition to studying CEOs in action, we developed the outlines of a reflection and developmental process one can apply to one’s own circumstances. This is a framework to guide individual and peer reflection, built around a set of questions. (See “Evaluating Your Personal Knowledge Infrastructure”)

Being a manager in today’s complex world requires becoming information-savvy in ways that are manageable and work for you in your specific context. What we learned from the CEOs we studied also applies more broadly to executives in general. Becoming and remaining practically knowledgeable is a critical task. It is a capability that managers need to learn, develop and continually refine, and it becomes increasingly important as the manager moves through his or her career and up the corporate ladder, when the risk of information overload significantly increases.


This blood test can tell you every virus you’ve ever had 06-19


This blood test can tell you every virus you’ve ever had



A single virus particle, or "virion," of measles. (Centers for Disease Control and Prevention via Getty Images)
Curious how many viruses have invaded your body over the course of your life? Now you can know.
Researchers have developed a DNA-based blood test that can determine a person's viral history, a development they hope could lead to early detection of conditions, such as hepatitis C, and eventually help explain what triggers certain autoimmune diseases and cancers.
The new test, known as VirScan, works by screening the blood for antibodies against any of the 206 species of viruses known to infect humans, according to a study published Thursday in the journal Science. The immune system, which churns out specific antibodies when it encounters a virus, can continue to produce those antibodies decades after an infection subsides. VirScan detects those antibodies and uses them as a window in time to create a blueprint of nearly every virus an individual has encountered. It's a dramatic alternative to existing diagnostic tools, which test only for a single suspected virus.
"The approach is clever and a technological tour de force," said Ian Lipkin, a professor of epidemiology and director of the Center for Infection and Immunity at Columbia University, who was not involved in the creation of VirScan. "It has the potential to reveal viruses people have encountered recently or many years earlier ... Thus, this is a powerful new research tool."
Scientists on Thursday reported intriguing findings from their initial tests of 569 people they screened using VirScan in the United States, South Africa, Thailand and Peru. They found that the average person has been exposed to 10 of the 206 different species of known viruses -- though some people showed exposure to more than double that number.
"Many of those [people] have probably been infected with many different strains of the same virus," said Stephen Elledge, a professor of genetics and medicine at Brigham and Women's Hospital and Harvard Medical School, who led the development of VirScan. "You could be infected with many strains of rhinovirus over the course of your life, for instance, and it would show up as one hit."

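To make the strain-versus-species distinction concrete, here is a minimal, hypothetical Python sketch (not part of VirScan or the study's actual analysis) of how antibody hits recorded at the strain level could be collapsed so that each virus species counts only once per person — which is why many rhinovirus strains would register as a single hit. All names and data below are invented for illustration.

# Hypothetical illustration only: collapse strain-level antibody hits
# into one hit per virus species, as described in the article.
# Example data: (species, strain) pairs detected per person (invented).
antibody_hits = {
    "person_A": [("rhinovirus", "A16"), ("rhinovirus", "B52"),
                 ("influenza A", "H3N2"), ("Epstein-Barr virus", "type 1")],
    "person_B": [("influenza A", "H1N1"), ("cytomegalovirus", "Merlin")],
}

def species_exposures(hits):
    """Collapse strain-level hits so each virus species counts once per person."""
    return {person: sorted({species for species, _ in pairs})
            for person, pairs in hits.items()}

for person, species in species_exposures(antibody_hits).items():
    print(f"{person}: {len(species)} species -> {species}")

Run as written, person_A's two rhinovirus strains contribute a single species-level exposure, mirroring the counting described above.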
In addition, he said, certain viruses were far more common in adults than in children, who presumably have yet to encounter much of the world's viral landscape. People infected with HIV tended to have antibodies against many more viruses than people without the disease. Researchers also saw striking geographic differences in the way viruses affected different populations. People in South Africa, Thailand and Peru generally displayed antibodies against many more viruses than people living in the United States.
"We don't know if this has to do with the genetics of the people or the strains of the viruses that are out there," Elledge said of the differences by country. "Or if it has something to do with cultural habits or sanitation."
Elledge said the VirScan analysis currently can be performed for about $25 per blood sample, though labs might charge much more than that if the test becomes commercially available. He also said it currently takes two or three days to process and sequence about 100 samples, though that speed could increase as technology improves.
Ultimately, Elledge said he hopes the test could be used to more quickly detect conditions, such as HIV and hepatitis C, which patients can carry for years before displaying any outward symptoms. Experts believe VirScan also could lead to insights about the role long-ago viral infections play in the later development of certain cancers and autoimmune diseases such as Type 1 diabetes and multiple sclerosis.
"There are a lot of chronic diseases where we think a virus might be involved, but we can't quite pinpoint it ... Right now we can't quite make the connection," said Vincent Racaniello, a professor of microbiology and immunology at Columbia, who was not involved in developing VirScan. "I think this is really going to be helpful. It's very cool."
Racaniello said he envisions a day when patients will get the VirScan test as part of a regular checkup.
"This is going to be routine, I think," he said. "It'll be good to know what viruses have been in you."

An Interview with Dr. David Norton 06-25


An Interview with Dr. David Norton

In an interview with James Creelman, head of Palladium’s Knowledge and Research Center, Palladium Chairman Dr. David Norton explains why more and more government organizations are using tools such as the Balanced Scorecard and Execution Premium Process™ (XPP) to effectively manage complexity in the 21st century.

With particular reference to the military and police sectors, he explains how globalization and technology are changing the way work gets done and how this is driving government entities to adopt these tools to better visualize and deliver on their mission and to manage inter- and intra-agency collaborations.

The Balanced Scorecard concept is now almost 25 years old. Why has it proven to be so enduringly popular?

The Balanced Scorecard was in the right place at the right time. By the early 1990s the economic model was changing from one that was product-based to one that was service-based. In this new economy there was a need for a model to manage knowledge and for tools to manage intangible assets. Many organizations were realizing that measuring financial performance was still critical but that they needed a new approach to understanding the more intangible drivers of financial success, and the Balanced Scorecard offered a way to do that.

It has endured because it delivered transformational results in many of the early adopters. Also, although it was originally a way to balance financial and non-financial measurement, it developed into more of a management system than just a measurement tool. The addition of the Strategy Map was also an important milestone, as it enabled organizations to better visualize the strategy and what they had to do to deliver it.

Since the mid-1990s the government sector has been a big user of the Balanced Scorecard, but usage has increased significantly in recent years and across the globe. What has driven this uptake?

Leaders of government entities increasingly saw the Balanced Scorecard as a good idea. They had seen others succeed with its usage and decided to try it. Some of the early government successes, such as the City of Charlotte in the USA in the mid-1990s, also helped to spread the message that this new way of managing could work in the government or not-for-profit sectors. A small number of early adopters inspired a growing number of followers. It is not unusual for any new idea to take time to trickle through, and 20 years is a relatively short time.

The Balanced Scorecard is primarily a strategy implementation framework, yet many defense sector organizations have adopted it and focused more on “battle readiness.” In what important ways have defense organizations, such as Balanced Scorecard Hall of Fame™ inductees the Royal Norwegian Air Force and the US Army, tailored the Balanced Scorecard methodology for their own needs?

I would argue that “battle readiness” is a strategy. Every organization that we have worked with has a set of strategic themes that they must deliver to, rather than a one-dimensional strategy. Private sector firms have themes such as managing the core business, customer management, innovation, etc. The same is true for the military, which will have several themes that they must manage, such as operational efficiency and battle readiness. The Strategy Map enables them to see those themes and how they work together.

Specifically related to police organizations, Abu Dhabi Police, Dubai Police, the FBI, and the Royal Canadian Mounted Police are also inductees into the Hall of Fame. What did they do well that others can learn from?

Most organizations have complex missions, but these organizations have very complex missions. The reason I say this is that to succeed in their mission they have to interface with many other organizations – success is impossible without doing so. For example, tackling the problem of drugs requires interfacing with many other agencies such as customs or the coast guard. The Royal Canadian Mounted Police, for example, built strategic themes around pieces of their mission to drive such cooperation in areas related to drugs and gangs, in which they did not have all the knowledge required to deal with the problems on their own. The Balanced Scorecard provided these organizations with a way to visualize and put into practice that integration and come up with a new paradigm for effective policing.

The Execution Premium framework is not just about strategy execution but, more broadly, about strategy management. Why did you think it was important to expand on the original Balanced Scorecard concept?

This has been a natural evolution grounded in practical experience. Bob Kaplan and I began by looking at a problem with measurement, and from that we developed the original Balanced Scorecard idea. From there we realized that the framework was most powerful when the strategic objectives were laid out as a map showing cause and effect, and this took us to Strategy Maps. There was an evolution from how we measure to how we manage. The Balanced Scorecard also became a bridge to the management system – as examples, how we set performance objectives for individuals and how we align investments in ways that best show the organization is delivering results. Measurement itself does not guarantee results; for this to happen, metrics have to be integrated into a broader management system. We also realized early on the importance of leadership in using the Balanced Scorecard.
This takes us to the role of leadership, which you and Bob Kaplan have repeatedly highlighted as the critical determinant of successful strategy execution, and which a recent global survey by the Palladium Group confirmed as such. When it comes to strategic leadership, what must organizations do right?
The success of the Balanced Scorecard is always linked to the visible usage by and buy-in of leadership. Leaders will see it as a tool, and they have lots of tools to choose from. Those leaders that get the most from a Balanced Scorecard really use it as an agent of change, and strategy is just another word for change. I need to build effective teams at the senior level – how do I do that? I have to get the organization to support a change of direction – how do I do that? I need to build a high-performing culture across the globe – how do I do that? So the CEO or equivalent sees the Balanced Scorecard as their framework for describing critical strategic goals and as a tool for managing that change.
To do this, a good leader has to combine both right-brain and left-brain thinking. The right brain is unstructured and about intuition and creativity – seeing opportunities, inspiring others, etc. The left brain is about structure – using management tools, measuring performance, etc. Both sides of the brain are important and together deliver change.
For good reasons, defense and police organizations tend to be much more hierarchical than others in the public and private sectors. Does this lead to any unique challenges when implementing the Balanced Scorecard or the Execution Premium framework?
Absolutely. Strategy is horizontal in nature and not vertical. Strategy is about delivering solutions to common challenges that the organization is facing, and this is at odds with a vertical structure.

This is why a Strategy Map, and in particular strategic themes, are powerful within organizations with fairly rigid hierarchies. By identifying and laying out strategic themes on a map, these organizations are able to overlay a horizontal form of management onto the necessary hierarchical structure. The themes enable the organizations to more effectively drive and manage cross- and intra-organizational teamwork and the pursuit of common goals.
How do you see the Balanced Scorecard/Execution Premium framework evolving over the next 3-5 years, and are there any particular implications for organizations in the defense and police sectors?
The Balanced Scorecard and Execution Premium framework will become increasingly used to manage complexity. And this complexity has two main drivers that are greatly impacting all firms and military and police agencies in profound ways: globalization and technology.
First there’s globalization. As I have stressed, defense agencies now have to cooperate with other agencies across the world to tackle increasingly globalized security and criminal activities: the Balanced Scorecard will help them better manage the inherent complexities in doing so.

And then there’s technology. Obviously technology has changed the world in ways we were not able to even comprehend a few decades ago and is further changing the world as we speak. This is having significant impacts on military and police agencies: think about how social media and video are now used to both prevent and solve complex crimes. Technology is enabling more seamless interaction within and among government agencies across the world and is becoming more integrated into the structures of these organizations. The need for a framework that allows the focus on managing such complexity will become increasingly mission-critical.

6 reasons why we’re underhyping the Internet of Things 06-25




6 reasons why we’re underhyping the Internet of Things






Just when you thought the Internet of Things couldn’t possibly live up to its hype, along comes a blockbuster, 142-page report from McKinsey Global Institute (“The Internet of Things: Mapping the Value Beyond the Hype”) that says, if anything, we’re underestimating the potential economic impact of the Internet of Things. By 2025, says McKinsey, the potential economic impact of having “sensors and actuators connected by networks to computing systems” (McKinsey’s definition of the Internet of Things) could be more than $11 trillion annually.
According to McKinsey, there are six reasons we may be underhyping the Internet of Things.
1. We’re only using 1 percent of all data
What McKinsey found in its analysis of more than 150 Internet of Things use cases was that we’re simply not taking advantage of all the data that sensors and RFID tags are cranking out 24/7. In some cases, says McKinsey, we may be using only 1 percent of all the data out there. And even then, we’re only using the data for simple things such as anomaly detection and control systems – we’re not taking advantage of the other 99 percent of the data for tasks such as optimization and prediction. A typical offshore oil rig, for example, may have 30,000 sensors hooked up to it, but oil companies are only using a small fraction of this data for future decision-making.
2. We’re not getting the big picture by focusing only on industries
Rather than focusing on verticals and industries (the typical way that potential economic value is computed), McKinsey takes a deeper look at the sweeping changes taking place in nine different physical “settings” where the Internet of Things will actually be deployed – home, retail, office, factories, work sites (mining, oil and gas, construction), vehicles, human (health and wellness), outside (logistics and navigation), and cities. Of that $11 trillion in economic value, four of the nine settings top out at over $1 trillion in projected economic value – factories ($3.7 trillion), cities ($1.7 trillion), health and fitness ($1.6 trillion) and retail ($1.2 trillion).
Thus, instead of focusing on, say, the automotive industry, McKinsey spreads the benefits of the Internet of Things for automobiles over two different physical settings — “vehicles” and “cities.” In the case of vehicles, sensors are a natural fit for maintenance (e.g. sensors that tell you when something’s not working on your car). In the case of cities, these sensors can help with bigger issues such as traffic congestion.
3. We’re forgetting about the B2B opportunity
If you think the Internet of Things is just about smart homes and wearable fitness devices, think again – McKinsey says the B2B market opportunity could be more than two times the size of the B2C opportunity. One big example cited by McKinsey is the ability of work sites to take better advantage of the Internet of Things.
Think of an oil work site, for example. You have machinery (e.g. oil rigs), mobile equipment (trucks), consumables (barrels of oil), employees, processing plants and transportation networks for taking this oil out of the work site. If all those elements are talking to each other via the Internet, you can optimize the work site. Oil rigs can let employees know if something’s broken, trucks can arrive on time to pick up the barrels of oil, and then all that oil can be processed and shipped off to wherever it’s needed on time and on schedule.
4. We’re ignoring that “interoperability” could be the new “synergy”
According to McKinsey, approximately 40 percent of the total economic value of the Internet of Things is driven by the ability of all the physical devices to talk to each other via computers — what McKinsey refers to as “interoperability.” You can think of “interoperability” as a new form of synergy – a way to increase the whole without increasing the sum of the parts.
If machines can’t talk to each other, says McKinsey, the Internet of Things might only be a $3.9 trillion opportunity. One example of interoperability is the ability of your brand-new fitness wearable to talk with your hospital or healthcare provider. What good is your fitness device if it can’t communicate with the people who can actually use all that data? With interoperability in health, the Internet of Things may be able to cut the cost of treating chronic disease by 50 percent.
5. We’re underestimating the impact on developing economies
In terms of pure economic impact, there will be approximately a 60:40 split between economic gains for developed economies and developing economies. As McKinsey points out, some of the greatest gains will be in developing nations, especially in areas such as retail. In some cases, developing nations will be able to leapfrog the achievements in developed nations because they don’t have to worry about retrofitting equipment or infrastructure with sensors and actuators.
6. We’re forgetting about the new business models that will be created
It’s not just that the Internet of Things will lead to efficiencies and cost savings – but also that it will lead to entirely new ways of doing business. As McKinsey points out, we will likely see the rise of new business models that correspond with the way we are monitoring and evaluating data in real-time. The line will blur between technology companies and non-technology companies.
For example, take the makers of industrial equipment. Instead of selling expensive capital goods, they will sell products-as-services. Instead of charging one lump sum upfront, they will charge by usage. In addition, there will be new companies that emerge that bill themselves as end-to-end Internet of Things system providers.
**
Obviously, it’s exciting news that the world is about to get an $11 trillion economic shot in the arm from hooking up every possible object to the Internet with sensors and actuators. At the very least, some companies are going to get awfully rich by selling sensors and RFID tags to everyone trying to cash in on the Internet of Things gold rush.
At the same time, though, isn’t there something very bleak about a future in which sensors are hooked up to every object, every setting is predictable and optimized, and pure data guides every decision rather than the human heart? Imagine a giant planned economy, overseen by a bunch of managers schooled in Frederick Winslow Taylor’s principles of scientific management, figuring out new ways to crunch the data of our daily lives. When it comes to the Internet of Things, be careful what you wish for.

The Four Phases of Design Thinking 06-25


The Four Phases of Design Thinking


What can people in business learn from studying the ways successful designers solve problems and innovate? On the most basic level, they can learn to question, care, connect, and commit — four of the most important things successful designers do to achieve significant breakthroughs.
Having studied more than a hundred top designers in various fields over the past couple of years (while doing research for a book), I found that there were a few shared behaviors that seemed to be almost second nature to many designers. And these ingrained habits were intrinsically linked to the designer’s ability to bring original ideas into the world as successful innovations. All of which suggests that they merit a closer look.
Question. If you spend any time around designers, you quickly discover this about them: They ask, and raise, a lot of questions. Often this is the starting point in the design process, and it can have a profound influence on everything that follows. Many of the designers I studied, from Bruce Mau to Richard Saul Wurman to Paula Scher, talked about the importance of asking “stupid questions”–the ones that challenge the existing realities and assumptions in a given industry or sector. The persistent tendency of designers to do this is captured in the joke designers tell about themselves. How many designers does it take to change a light bulb? Answer: Does it have to be a light bulb?
In a business setting, asking basic “why” questions can make the questioner seem naïve while putting others on the defensive (as in, “What do you mean ‘Why are we doing it this way?’ We’ve been doing it this way for 22 years!”). But by encouraging people to step back and reconsider old problems or entrenched practices, the designer can begin to re-frame the challenge at hand — which can then steer thinking in new directions. For business in today’s volatile marketplace, the ability to question and rethink basic fundamentals — What business are we really in? What do today’s consumers actually need or expect from us? — has never been more important.
Care. It’s easy for companies to say they care about customer needs. But to really empathize, you have to be willing to do what many of the best designers do: step out of the corporate bubble and actually immerse yourself in the daily lives of people you’re trying to serve. What impressed me about design researchers such as Jane Fulton Suri of IDEO was the dedication to really observing and paying close attention to people — because this is usually the best way to ferret out their deep, unarticulated needs. Focus groups and questionnaires don’t cut it; designers know that you must care enough to actually be present in people’s lives.
Connect. Designers, I discovered, have a knack for synthesizing–for taking existing elements or ideas and mashing them together in fresh new ways. This can be a valuable shortcut to innovation because it means you don’t necessarily have to invent from scratch. By coming up with “smart recombinations” (to use a term coined by the designer John Thackara), Apple has produced some of its most successful hybrid products, and Nike smartly combined a running shoe with an iPod to produce its groundbreaking Nike Plus line (which enables users to program their runs). It isn’t easy to come up with these great combos. Designers know that you must “think laterally” — searching far and wide for ideas and influences — and must also be willing to try connecting ideas that might not seem to go together. This is a way of thinking that can also be embraced by non-designers.
Commit. It’s one thing to dream up original ideas. But designers quickly take those ideas beyond the realm of imagination by giving form to them. Whether it’s a napkin sketch, a prototype carved from foam rubber, or a digital mock-up, the quick-and-rough models that designers constantly create are a critical component of innovation — because when you give form to an idea, you begin to make it real.
But it’s also true that when you commit to an idea early — putting it out into the world while it’s still young and imperfect — you increase the possibility of short-term failure. Designers tend to be much more comfortable with this risk than most of us. They know that innovation often involves an iterative process with setbacks along the way — and those small failures are actually useful because they show the designer what works and what needs fixing. The designer’s ability to “fail forward” is a particularly valuable quality in times of dynamic change. Today, many companies find themselves operating in a test-and-learn business environment that requires rapid prototyping. Which is just one more reason to pay attention to the people who’ve been conducting their work this way all along.

Design Thinking 06-26


Design Thinking


Thomas Edison created the electric lightbulb and then wrapped an entire industry around it. The lightbulb is most often thought of as his signature invention, but Edison understood that the bulb was little more than a parlor trick without a system of electric power generation and transmission to make it truly useful. So he created that, too.

Thus Edison’s genius lay in his ability to conceive of a fully developed marketplace, not simply a discrete device. He was able to envision how people would want to use what he made, and he engineered toward that insight. He wasn’t always prescient (he originally believed the phonograph would be used mainly as a business machine for recording and replaying dictation), but he invariably gave great consideration to users’ needs and preferences.
Edison’s approach was an early example of what is now called “design thinking”—a methodology that imbues the full spectrum of innovation activities with a human-centered design ethos. By this I mean that innovation is powered by a thorough understanding, through direct observation, of what people want and need in their lives and what they like or dislike about the way particular products are made, packaged, marketed, sold, and supported.
Many people believe that Edison’s greatest invention was the modern R&D laboratory and methods of experimental investigation. Edison wasn’t a narrowly specialized scientist but a broad generalist with a shrewd business sense. In his Menlo Park, New Jersey, laboratory he surrounded himself with gifted tinkerers, improvisers, and experimenters. 
Indeed, he broke the mold of the “lone genius inventor” by creating a team-based approach to innovation. Although Edison biographers write of the camaraderie enjoyed by this merry band, the process also featured endless rounds of trial and error—the “99% perspiration” in Edison’s famous definition of genius. His approach was intended not to validate preconceived hypotheses but to help experimenters learn something new from each iterative stab. Innovation is hard work; Edison made it a profession that blended art, craft, science, business savvy, and an astute understanding of customers and markets.
Design thinking is a lineal descendant of that tradition. Put simply, it is a discipline that uses the designer’s sensibility and methods to match people’s needs with what is technologically feasible and what a viable business strategy can convert into customer value and market opportunity. Like Edison’s painstaking innovation process, it often entails a great deal of perspiration.
I believe that design thinking has much to offer a business world in which most management ideas and best practices are freely available to be copied and exploited. Leaders now look to innovation as a principal source of differentiation and competitive advantage; they would do well to incorporate design thinking into all phases of the process.

Getting Beneath the Surface

Historically, design has been treated as a downstream step in the development process—the point where designers, who have played no earlier role in the substantive work of innovation, come along and put a beautiful wrapper around the idea. To be sure, this approach has stimulated market growth in many areas by making new products and technologies aesthetically attractive and therefore more desirable to consumers or by enhancing brand perception through smart, evocative advertising and communication strategies. During the latter half of the twentieth century design became an increasingly valuable competitive asset in, for example, the consumer electronics, automotive, and consumer packaged goods industries. But in most others it remained a late-stage add-on.
Now, however, rather than asking designers to make an already developed idea more attractive to consumers, companies are asking them to create ideas that better meet consumers’ needs and desires. The former role is tactical, and results in limited value creation; the latter is strategic, and leads to dramatic new forms of value.
Moreover, as economies in the developed world shift from industrial manufacturing to knowledge work and service delivery, innovation’s terrain is expanding. Its objectives are no longer just physical products; they are new sorts of processes, services, IT-powered interactions, entertainments, and ways of communicating and collaborating—exactly the kinds of human-centered activities in which design thinking can make a decisive difference. (See the sidebar “A Design Thinker’s Personality Profile.”)
Consider the large health care provider Kaiser Permanente, which sought to improve the overall quality of both patients’ and medical practitioners’ experiences. Businesses in the service sector can often make significant innovations on the front lines of service creation and delivery. By teaching design thinking techniques to nurses, doctors, and administrators, Kaiser hoped to inspire its practitioners to contribute new ideas. Over the course of several months Kaiser teams participated in workshops with the help of my firm, IDEO, and a group of Kaiser coaches. These workshops led to a portfolio of innovations, many of which are being rolled out across the company.
One of them—a project to reengineer nursing-staff shift changes at four Kaiser hospitals—perfectly illustrates both the broader nature of innovation “products” and the value of a holistic design approach. The core project team included a strategist (formerly a nurse), an organizational-development specialist, a technology expert, a process designer, a union representative, and designers from IDEO. This group worked with innovation teams of frontline practitioners in each of the four hospitals.
During the earliest phase of the project, the core team collaborated with nurses to identify a number of problems in the way shift changes occurred. Chief among these was the fact that nurses routinely spent the first 45 minutes of each shift at the nurses’ station debriefing the departing shift about the status of patients. 
Their methods of information exchange were different in every hospital, ranging from recorded dictation to face-to-face conversations. And they compiled the information they needed to serve patients in a variety of ways—scrawling quick notes on the back of any available scrap of paper, for example, or even on their scrubs. 
Despite a significant investment of time, the nurses often failed to learn some of the things that mattered most to patients, such as how they had fared during the previous shift, which family members were with them, and whether or not certain tests or therapies had been administered. For many patients, the team learned, each shift change felt like a hole in their care. Using the insights gleaned from observing these important times of transition, the innovation teams explored potential solutions through brainstorming and rapid prototyping. (Prototypes of a service innovation will of course not be physical, but they must be tangible. Because pictures help us understand what is learned through prototyping, we often videotape the performance of prototyped services.)
Prototyping doesn’t have to be complex and expensive. In another health care project, IDEO helped a group of surgeons develop a new device for sinus surgery. As the surgeons described the ideal physical characteristics of the instrument, one of the designers grabbed a whiteboard marker, a film canister, and a clothespin and taped them together. “Do you mean like this?” he asked. With his rudimentary prototype in hand, the surgeons were able to be much more precise about what the ultimate design should accomplish.
Prototypes should command only as much time, effort, and investment as are needed to generate useful feedback and evolve an idea. The more “finished” a prototype seems, the less likely its creators will be to pay attention to and profit from feedback. The goal of prototyping isn’t to finish. It is to learn about the strengths and weaknesses of the idea and to identify new directions that further prototypes might take.
The design that emerged for shift changes had nurses passing on information in front of the patient rather than at the nurses’ station. In only a week the team built a working prototype that included new procedures and some simple software with which nurses could call up previous shift-change notes and add new ones. They could input patient information throughout a shift rather than scrambling at the end to pass it on. The software collated the data in a simple format customized for each nurse at the start of a shift. The result was both higher-quality knowledge transfer and reduced prep time, permitting much earlier and better-informed contact with patients.
As Kaiser measured the impact of this change over time, it learned that the mean interval between a nurse’s arrival and first interaction with a patient had been more than halved, adding a huge amount of nursing time across the four hospitals. Perhaps just as important was the effect on the quality of the nurses’ work experience. One nurse commented, “I’m an hour ahead, and I’ve only been here 45 minutes.” Another said, “[This is the] first time I’ve ever made it out of here at the end of my shift.”
Thus did a group of nurses significantly improve their patients’ experience while also improving their own job satisfaction and productivity. By applying a human-centered design methodology, they were able to create a relatively small process innovation that produced an outsize impact. The new shift changes are being rolled out across the Kaiser system, and the capacity to reliably record critical patient information is being integrated into an electronic medical records initiative at the company.
What might happen at Kaiser if every nurse, doctor, and administrator in every hospital felt empowered to tackle problems the way this group did? To find out, Kaiser has created the Garfield Innovation Center, which is run by Kaiser’s original core team and acts as a consultancy to the entire organization. The center’s mission is to pursue innovation that enhances the patient experience and, more broadly, to envision Kaiser’s “hospital of the future.” It is introducing tools for design thinking across the Kaiser system.

How Design Thinking Happens

The myth of creative genius is resilient: We believe that great ideas pop fully formed out of brilliant minds, in feats of imagination well beyond the abilities of mere mortals. But what the Kaiser nursing team accomplished was neither a sudden breakthrough nor the lightning strike of genius; it was the result of hard work augmented by a creative human-centered discovery process and followed by iterative cycles of prototyping, testing, and refinement.
The design process is best described metaphorically as a system of spaces rather than a predefined series of orderly steps. The spaces demarcate different sorts of related activities that together form the continuum of innovation. Design thinking can feel chaotic to those experiencing it for the first time. But over the life of a project participants come to see—as they did at Kaiser—that the process makes sense and achieves results, even though its architecture differs from the linear, milestone-based processes typical of other kinds of business activities.
Design projects must ultimately pass through three spaces (see the exhibit “Inspiration, Ideation, Implementation”). We label these “inspiration,” for the circumstances (be they a problem, an opportunity, or both) that motivate the search for solutions; “ideation,” for the process of generating, developing, and testing ideas that may lead to solutions; and “implementation,” for the charting of a path to market. Projects will loop back through these spaces—particularly the first two—more than once as ideas are refined and new directions taken.
Inspiration, Ideation, Implementation
Sometimes the trigger for a project is leadership’s recognition of a serious change in business fortunes. In 2004 Shimano, a Japanese manufacturer of bicycle components, faced flattening growth in its traditional high-end road-racing and mountain-bike segments in the United States. The company had always relied on technology innovations to drive its growth and naturally tried to predict where the next one might come from. This time Shimano thought a high-end casual bike that appealed to boomers would be an interesting area to explore. IDEO was invited to collaborate on the project.
During the inspiration phase, an interdisciplinary team of IDEO and Shimano people—designers, behavioral scientists, marketers, and engineers—worked to identify appropriate constraints for the project. The team began with a hunch that it should focus more broadly than on the high-end market, which might prove to be neither the only nor even the best source of new growth. So it set out to learn why 90% of American adults don’t ride bikes. Looking for new ways to think about the problem, the team members spent time with all kinds of consumers. 
They discovered that nearly everyone they met rode a bike as a child and had happy memories of doing so. They also discovered that many Americans are intimidated by cycling today—by the retail experience (including the young, Lycra-clad athletes who serve as sales staff in most independent bike stores); by the complexity and cost of the bikes, accessories, and specialized clothing; by the danger of cycling on roads not designed for bicycles; and by the demands of maintaining a technically sophisticated bike that is ridden infrequently.
The design team, responsible for every aspect of what was envisioned as a holistic experience, came up with the concept of “Coasting.” Coasting would aim to entice lapsed bikers into an activity that was simple, straightforward, and fun. Coasting bikes, built more for pleasure than for sport, would have no controls on the handlebars, no cables snaking along the frame. As on the earliest bikes many of us rode, the brakes would be applied by backpedaling. With the help of an onboard computer, a minimalist three gears would shift automatically as the bicycle gained speed or slowed. The bikes would feature comfortably padded seats, be easy to operate, and require relatively little maintenance.
This human-centered exploration—which took its insights from people outside Shimano’s core customer base—led to the realization that a whole new category of bicycling might be able to reconnect American consumers to their experiences as children while also dealing with the root causes of their feelings of intimidation—thus revealing a large untapped market.
Three major manufacturers—Trek, Raleigh, and Giant—developed new bikes incorporating innovative components from Shimano. But the design team didn’t stop with the bike itself. In-store retailing strategies were created for independent bike dealers, in part to alleviate the discomfort that biking novices felt in stores designed to serve enthusiasts. The team developed a brand that identified Coasting as a way to enjoy life. (“Chill. Explore. Dawdle. Lollygag. First one there’s a rotten egg.”) And it designed a public relations campaign—in collaboration with local governments and cycling organizations—that identified safe places to ride.
Although many others became involved in the project when it reached the implementation phase, the application of design thinking in the earliest stages of innovation is what led to this complete solution. Indeed, the single thing one would have expected the design team to be responsible for—the look of the bikes—was intentionally deferred to later in the development process, when the team created a reference design to inspire the bike companies’ own design teams. After a successful launch in 2007, seven more bicycle manufacturers signed up to produce Coasting bikes in 2008.

Taking a Systems View

Many of the world’s most successful brands create breakthrough ideas that are inspired by a deep understanding of consumers’ lives and use the principles of design to innovate and build value. Sometimes innovation has to account for vast differences in cultural and socioeconomic conditions. In such cases design thinking can suggest creative alternatives to the assumptions made in developed societies.
India’s Aravind Eye Care System is probably the world’s largest provider of eye care. From April 2006 to March 2007 Aravind served more than 2.3 million patients and performed more than 270,000 surgeries. Founded in 1976 by Dr. G. Venkataswamy, Aravind has as its mission nothing less than the eradication of needless blindness among India’s population, including the rural poor, through the effective delivery of superior ophthalmic care. (One of the company’s slogans is “Quality is for everyone.”) From 11 beds in Dr. Venkataswamy’s home, Aravind has grown to encompass five hospitals (three others are under Aravind management), a plant that manufactures ophthalmic products, a research foundation, and a training center.
Aravind’s execution of its mission and model is in some respects reminiscent of Edison’s holistic concept of electric power delivery. The challenge the company faces is logistical: how best to deliver eye care to populations far removed from the urban centers where Aravind’s hospitals are located. Aravind calls itself an “eye care system” for a reason: Its business goes beyond ophthalmic care per se to transmit expert practice to populations that have historically lacked access. The company saw its network of hospitals as a beginning rather than an end.
Much of its innovative energy has focused on bringing both preventive care and diagnostic screening to the countryside. Since 1990 Aravind has held “eye camps” in India’s rural areas, in an effort to register patients, administer eye exams, teach eye care, and identify people who may require surgery or advanced diagnostic services or who have conditions that warrant monitoring.
In 2006 and early 2007 Aravind eye camps screened more than 500,000 patients, of whom nearly 113,000 required surgery. Access to transportation is a common problem in rural areas, so the company provides buses that take patients needing further treatment to one of its urban facilities and then home again. Over the years it has bolstered its diagnostic capabilities in the field with telemedicine trucks, which enable doctors back at Aravind’s hospitals to participate in care decisions. In recent years Aravind’s analysis of its screening data has led to specialized eye camps for certain demographic groups, such as school-age children and industrial and government workers; the company also holds camps specifically to screen for eye diseases associated with diabetes. All these services are free for the roughly 60% of patients who cannot afford to pay.
In developing its system of care, Aravind has consistently exhibited many characteristics of design thinking. It has used as a creative springboard two constraints: the poverty and remoteness of its clientele and its own lack of access to expensive solutions. For example, a pair of intraocular lenses made in the West costs $200, which severely limited the number of patients Aravind could help. Rather than try to persuade suppliers to change the way they did things, Aravind built its own solution: a manufacturing plant in the basement of one of its hospitals. It eventually discovered that it could use relatively inexpensive technology to produce lenses for $4 a pair.
Throughout its history—defined by the constraints of poverty, ignorance, and an enormous unmet need—Aravind has built a systemic solution to a complex social and medical problem.

Getting Back to the Surface

I argued earlier that design thinking can lead to innovation that goes beyond aesthetics, but that doesn’t mean that form and aesthetics are unimportant. Magazines like to publish photographs of the newest, coolest products for a reason: They are sexy and appeal to our emotions. Great design satisfies both our needs and our desires. Often the emotional connection to a product or an image is what engages us in the first place. Time and again we see successful products that were not necessarily the first to market but were the first to appeal to us emotionally and functionally. In other words, they do the job and we love them. The iPod was not the first MP3 player, but it was the first to be delightful. Target’s products appeal emotionally through design and functionally through price—simultaneously.
This idea will grow ever more important in the future. As Daniel Pink writes in his book A Whole New Mind, “Abundance has satisfied, and even over-satisfied, the material needs of millions—boosting the significance of beauty and emotion and accelerating individuals’ search for meaning.” As more of our basic needs are met, we increasingly expect sophisticated experiences that are emotionally satisfying and meaningful. These experiences will not be simple products. They will be complex combinations of products, services, spaces, and information. They will be the ways we get educated, the ways we are entertained, the ways we stay healthy, the ways we share and communicate. Design thinking is a tool for imagining these experiences as well as giving them a desirable form.
One example of experiential innovation comes from a financial services company. In late 2005 Bank of America launched a new savings account service called “Keep the Change.” IDEO, working with a team from the bank, helped identify a consumer behavior that many people will recognize: After paying cash for something, we put the coins we received in change into a jar at home. Once the jar is full, we take the coins to the bank and deposit them in a savings account. For many people, it’s an easy way of saving. Bank of America’s innovation was to build this behavior into a debit card account. Customers who use their debit cards to make purchases can now choose to have the total rounded up to the nearest dollar and the difference deposited in their savings accounts.
The success of this innovation lay in its appeal to an instinctive desire we have to put money aside in a painless and invisible way. Keep the Change creates an experience that feels natural because it models behavior that many of us already exhibit. To be sure, Bank of America sweetens the deal by matching 100% of the change saved in the first three months and 5% of annual totals (up to $250) thereafter. This encourages customers to try it out. But the real payoff is emotional: the gratification that comes with monthly statements showing customers they’ve saved money without even trying.
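The mechanics are simple enough to sketch in a few lines. The snippet below is a back-of-the-envelope illustration of the round-up and matching arithmetic described above; the function names, the flat list of purchases, and the way the match is applied are assumptions made for illustration, not Bank of America's implementation.

```python
from decimal import Decimal, ROUND_UP

def round_up_transfer(purchase: Decimal) -> Decimal:
    """Difference between a debit purchase and the next whole dollar."""
    return purchase.quantize(Decimal("1"), rounding=ROUND_UP) - purchase

def keep_the_change(purchases, months_enrolled):
    """Illustrative arithmetic only (not Bank of America's actual system).

    Rounds each purchase up to the nearest dollar and transfers the
    difference to savings, then applies the match described in the article:
    100% in the first three months, 5% thereafter, capped at $250 a year.
    """
    transfers = sum(round_up_transfer(Decimal(str(p))) for p in purchases)
    match_rate = Decimal("1.00") if months_enrolled <= 3 else Decimal("0.05")
    match = min((transfers * match_rate).quantize(Decimal("0.01")), Decimal("250"))
    return transfers, match

# Three card purchases in a customer's first month of enrollment
saved, matched = keep_the_change([3.40, 12.05, 7.99], months_enrolled=1)
print(saved, matched)  # 1.56 1.56 -- $1.56 moved to savings, matched in full
```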
In less than a year the program attracted 2.5 million customers. It is credited with 700,000 new checking accounts and a million new savings accounts. Enrollment now totals more than 5 million people who together have saved more than $500 million. Keep the Change demonstrates that design thinking can identify an aspect of human behavior and then convert it into both a customer benefit and a business value.
Thomas Edison represents what many of us think of as a golden age of American innovation—a time when new ideas transformed every aspect of our lives. The need for transformation is, if anything, greater now than ever before. No matter where we look, we see problems that can be solved only through innovation: unaffordable or unavailable health care, billions of people trying to live on just a few dollars a day, energy usage that outpaces the planet’s ability to support it, education systems that fail many students, companies whose traditional markets are disrupted by new technologies or demographic shifts. These problems all have people at their heart. They require a human-centered, creative, iterative, and practical approach to finding the best ideas and ultimate solutions. Design thinking is just such an approach to innovation.

WHEN HEALTH CARE GETS A HEALTHY DOSE OF DATA 06-28


How Intermountain Healthcare is using data and analytics to transform patient care




American health care is undergoing a data-driven transformation — and Intermountain Healthcare is leading the way. This MIT Sloan Management Review case study examines the data and analytics culture at Intermountain, a Utah-based company that runs 22 hospitals and 185 clinics. Data-driven decision making has improved patient outcomes in Intermountain's cardiovascular medicine, endocrinology, surgery, obstetrics and care processes — while saving millions of dollars in procurement and in its supply chain. The case study includes video clips of interviews and a downloadable PDF version.

CHAPTER 1

Introduction

The views of Utah’s Wasatch Mountains are spectacular from the east side of Intermountain Medical Center, but as 40-year-old Lee Pierce walked down a hallway on the fifth floor of the hospital’s administrative building, he hardly noticed them. Pierce, Intermountain’s chief data officer (CDO), was more focused on the giant countdown clock the implementation team had put up in the corridor. The clock was approaching zero, which marked the moment in February 2015 when Intermountain Healthcare would switch on its new electronic health records (EHR) system in two of its 22 hospitals and 24 of its 185 clinics.

Pierce was hardly the only health care executive concerned about a major EHR installation. Indeed, a year earlier, a key provision of the American Recovery and Reinvestment Act of 2009 went into effect, mandating that all health care providers adopt and demonstrate “meaningful use” of EHR systems to maintain their Medicaid and Medicare reimbursement levels.2 But while others scrambled to meet the deadline, Intermountain executives were thinking past it — because Intermountain was replacing an EHR system, not installing its first one.

In fact, Intermountain had created its own EHR system in the 1970s, helping the not-for-profit hospital system develop a reputation as an innovator in evidence-based medicine. But that system had aged: It had become incompatible with new forms of input, like speech and data from wearable devices, and its antiquated interface made it cumbersome for nurses and physicians to document and retrieve patient information.

Over the years, clinicians had learned to work with the system, and it had become part of a concerted effort to bring data-based insights to clinicians and managers across the Intermountain Healthcare organization. All clinical programs had embedded analytics support teams; procurement decisions were heavily influenced by data and analytics; and patient interactions were continuously enhanced by data, from the application of population health analytics to analyses of patient self-reports. A culture of data use was widespread among Intermountain’s clinicians and managers.

Even so, the switch to a new EHR system was expected to challenge Intermountain on two fronts: one technological, the other organizational. This was Intermountain’s second effort to update the technology behind its EHR system. An earlier attempt had been abandoned in 2012. Executives pulled the plug on a six-year overhaul involving tens of millions of dollars after deciding the technology was not going to work. This time, Intermountain leaders, including Pierce, were confident they had the right technology and the right systems in place to move the data and information where it needed to go.

There were concerns, however, about whether physicians were ready and willing to make a speedy transition to the new system. Many had had only occasional interaction with the old system used in the hospitals, which meant, on the one hand, that they were not accustomed to working within an EHR interface, and on the other, that they would have to integrate technology into their approach to patient care in new ways.

Two months later, Pierce was at the Las Vegas airport returning from a data and analytics conference, standing with one of the Intermountain physicians working on the rollout. “You know, they said we would be up and running and as efficient as before in just a couple of weeks,” the physician commented. “Here we are, a couple of months in, and some people are still not there. We should have set the expectation that it will take a few weeks to months, depending on the physician’s comfort using technology, the complexity of individual workflows, and frequency of use.”

About Intermountain Healthcare

Intermountain Healthcare runs 22 hospitals and 185 clinics in Utah and Idaho. It employs more than 800 physicians. In 2014, it performed 150,000 surgeries and had 488,000 emergency room visits. It grew out of a system of 15 hospitals operated by the Church of Jesus Christ of Latter-day Saints, which donated the hospitals to their communities in 1975. Intermountain was formed as a secular operating company to oversee those hospitals. It also operates an insurer, SelectHealth, which had 750,000 members and $1.83 billion in revenues in 2014. Overall, in 2014 Intermountain Healthcare had $5.57 billion in revenues and an operating surplus of $301 million.

Pioneering Health Care Analytics

Computers barely existed when Intermountain began its quest to incorporate data analytics into its health care practices. In the 1950s, a cardiologist named Homer Warner joined one of the hospitals that eventually became part of the Intermountain Healthcare organization. Shortly thereafter, he began gathering data to understand why some heart patients had better outcomes than others. Warner would become known as the father of medical informatics — the use of computer programs to analyze patient data to determine treatment protocols — after he and some colleagues built a decision-support tool in 1968 called HELP (Health Evaluation through Logical Processing).3 HELP was one of the first EHR systems in the United States, and it provided doctors with diagnostic advice and treatment guidance. It was also effective in helping doctors identify the causes of adverse drug reactions.

Years later, Warner recalled that using computers to model diagnoses was not — at first — well received; some cardiologists were even insulted by claims that a computer could make a diagnosis. Despite the resistance, the system’s benefits began showing up in improved patient outcomes, and HELP became a key component in Intermountain’s approach to patient care. The innovation attracted attention from all over the world.4 In 1985, Intermountain began using the HELP system in all of its hospitals. Administrators saw an opportunity to put data-driven decision making at the forefront of the organization.

But it wasn’t easy.

Delivering an Analytics Culture

Over the next dozen years, Intermountain expanded its use of data-driven decision-support tools. In 1986, Intermountain hired Brent James, a physician with a master’s degree in statistics, to champion quality-improvement principles and initiatives across the organization. One early challenge was that information technologies such as data storage were still expensive and maturing, which made the premise that large investments in data technology would improve care and lower costs a somewhat risky bet. “It was really a decision made on faith at first, that if we invested in the systems, we would see results,” says Brent Wallace, chief medical officer (CMO) for the organization.

James focused on improving data quality and data-gathering techniques. As Mark Ott, chief of surgery at Intermountain, says, “I never want to give data to doctors that I can’t defend. Because once you’ve got bad data, it takes months to recover that level of trust. The single most important thing is the integrity of the data.” James adds that there needs to be a constant focus on data gathering, painstakingly mundane work that almost no one takes to naturally. “You have to have a data zealot who goes around and grabs teams and pulls them into line,” James says.

Helping physicians become comfortable with data became an important part of Intermountain’s approach to developing a data-oriented culture. A key facet of this approach was being as transparent as possible about data quality, CMO Wallace recalls:

When we first started presenting data to physicians about their own performance and how they were doing, most physicians, especially if they were not performing as well as they feel like they ought to be, have two comments. One is, “Well, the data really aren’t accurate. There are problems with the data.” And the second is, “I have sicker patients than my colleagues.” And you hear those two things over and over again.

We allow and actually encourage physicians to question the integrity of the data. If it’s a dataset around their own performance, we show them the names of the patients from whom the data was derived, and they can look at it and say, “Well, this isn’t my patient. This one really sees my partner.” And then we’ll take it out of their dataset. Or if they look at it and say, “You know, I just really don’t believe that this case costs this much money. I want to get in and see what were the contributing factors and challenge that. Have we really collected that accurately?”

And over time, many of our physicians who have been involved in this process iteratively have become pretty comfortable that the data we provide are accurate and okay. But they still know they have the capability to challenge it, if that is needed.

Intermountain's team-driven culture applies gentle peer pressure, extolling doctors or teams that have excellent results and encouraging others to take the same steps. Administrators in the surgical unit, for instance, show physicians how they are performing relative to their peers because they believe surgeons are competitive and want their names at the top of the board. This collegial approach comes in part because only a third of the company's doctors work directly for Intermountain. Another third work for affiliated medical practices, and the rest are independent and only occasionally interact with Intermountain. The system needs them all to contribute data that is as complete as possible, so that data quality doesn’t degrade.

In 1999, at the height of the Internet boom, Intermountain experienced something of an organizational epiphany when it discovered the power of data analytics to affect population health. That year, the American College of Obstetricians and Gynecologists recommended that doctors stop choosing to induce labor before the 39th week of pregnancy, because medical research showed that early induction carried significant risks for babies and mothers.

The hospital’s labor-and-delivery committee suggested that doctors should investigate the hospital’s elective induction rate. “We don’t have that problem here,” came the response from a majority of the obstetricians. The data said otherwise. In fact, 28% of Intermountain’s deliveries were elective preterm inductions, on par with the national average. Intermountain urged its doctors to think twice about performing them, but moving away from elective inductions was a bumpy process. With most deliveries’ timing now left to Mother Nature, many obstetricians had to get used to being on call again or working at odd hours. But eventually they accepted the changes in procedure, and by 2001, elective preterm inductions had fallen to less than 2% of all cases.

Hard work followed this organizational epiphany, as the organization spent years creating a common language for data across departments and hospitals. Colleen Roberts, who switched from being a nurse to a data manager in 2002 after earning a master’s degree in medical informatics, began building out data dictionaries. “Everybody knew that Emergent meant this, and Urgent meant this, but there weren’t clear definitions for every data element,” says Roberts. It took regular meetings with practicing clinicians to hammer out definitions that ultimately enabled Intermountain, for the first time, to directly compare hospitals and departments on a wide range of metrics. Over the last decade, the use of data has become completely ingrained in the culture, she says.

Today, “we never do a project or care initiative that we don’t first run baseline data to see where we were. And post implementation, we run data to see if we’ve shown improvement,” says Roberts, now director of operations for Intermountain’s cardiovascular clinical care unit.

As data analytics spread among Intermountain’s clinical care settings during the 2000s, the cost of gathering and storing data decreased rapidly, enabling more access to analytics. But the main reason analytics spread was not the falling cost of the technology but the results: how much the analytics helped patients.

CHAPTER 2

An Appointment With Clinical Programs

Intermountain has set up multiple touch points for clinicians to access the data they need, or the data they want. Most of its 10 clinical programs, whether big ones like women’s and newborn and cardiovascular, or small specialty services like ear, nose, and throat, have their own data team, as does the clinical services group (pharmacy, imaging and radiology, nursing, physical therapy). Each data team consists of three people: a data manager who makes sure data is being collected correctly, a data analyst to flag important trends, and a data architect who pulls together data from various sources inside and outside Intermountain. 

The data manager and data analyst are embedded in the clinical team’s staff and report to the clinical program’s operations manager. The data architects are based in a centralized IT department and report to managers who report to CDO Pierce. In addition, Intermountain has 240 data analysts spread throughout its facilities, as well as 70 researchers in the Homer Warner Center for Informatics Research, formed in 2011. A few of those report into Pierce’s group; the rest are involved in research projects.

In addition, the clinical programs’ operations directors spend part of their time ensuring that data is being gathered properly on the clinical side. There are even data abstracters — nurses assigned to gather data in the operating rooms and other locations — in part because Intermountain participates in a variety of national programs where hospitals contribute information on various procedures, which can require collecting more than a thousand points of data for some procedures.

Any Intermountain employee can make formal or informal requests for analytics support. Pierce notes that with 240 data analysts spread throughout the organization, many requests are made informally. They're water-cooler conversations or brief email exchanges along the lines of, “What does the data say about this kind of treatment?” Intermountain encourages this informal activity, though its analysts must make formally approved queries a priority.

Formal requests for analytics are processed through the internal Web portal. These requests include estimates of the likely time needed from data analysts, managers, and architects. If the combined time for the request is projected to exceed 40 hours, it must be approved and given a priority assessment at the monthly meeting of an information management council, chaired by Pierce, which handles analytics and data governance.
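The triage rule itself is simple enough to express in a few lines. The sketch below is a hypothetical illustration of the 40-hour escalation threshold described above; the field names, routing strings, and request record are invented for illustration and are not Intermountain's portal code.

```python
from dataclasses import dataclass

COUNCIL_THRESHOLD_HOURS = 40  # requests above this go to the monthly council

@dataclass
class AnalyticsRequest:
    """A formal request submitted through the portal (field names are invented)."""
    requester: str
    description: str
    analyst_hours: float
    manager_hours: float
    architect_hours: float

    @property
    def total_hours(self) -> float:
        return self.analyst_hours + self.manager_hours + self.architect_hours

def route(request: AnalyticsRequest) -> str:
    """Apply the escalation rule: small requests go straight to the embedded data
    team; larger ones wait for approval and prioritization by the information
    management council."""
    if request.total_hours > COUNCIL_THRESHOLD_HOURS:
        return "queue for the monthly information management council"
    return "schedule with the embedded data team"

# A request estimated at 12 analyst + 4 manager + 30 architect hours (46 total)
req = AnalyticsRequest("surgical services", "compare suture costs and outcomes", 12, 4, 30)
print(route(req))  # -> queue for the monthly information management council
```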

Cardiovascular

The cardiovascular practice, where Warner pioneered the use of analytics, has expanded that use to support patient care not only in day-to-day decision making but also at the policy level. Intermountain used data to decide that it should, for instance, have only four of its hospitals perform cardiovascular operations (surgeries and catheterizations), because concentrating procedure volumes and maintaining consistent control over conditions was the best way to improve care and reduce costs. By concentrating expertise at each of the four hospitals, Intermountain shortened response times for certain emergency procedures, for which speedy intervention is closely connected to better health outcomes.

For example, on average about 15% of people who suffer ST-elevation myocardial infarctions (STEMI) — heart attacks that occur when coronary arteries suddenly become completely blocked — die within 30 days. Better outcomes are achieved when patients receive rapid intervention to unblock the artery. The national standard is 90 minutes for what’s called “door-to-balloon time,” which represents the amount of time from the moment the patient enters the hospital to relief of the blockage via a balloon inflated within the blocked artery. Beating that national average of door-to-balloon time would mean more lives saved.

To work toward that goal, in 2011 Intermountain’s cardiology leadership began working with STEMI teams to set internal time standards and measure results. Every time a heart attack patient was treated, the data on the operation was circulated to the whole team within a few days, a process known as rapid process improvement. This feedback loop helped Intermountain reduce the median door-to-balloon time to 57 minutes. In the last three years, all STEMI patients at Intermountain have gone door-to-balloon in less than 90 minutes. Intermountain’s rate of STEMI patient survival beyond 30 days is now at 96%. “That was purely data-driven — and without the data, we’d have no clue what was going on,” says Don Lappé, the chief of cardiology at Intermountain.
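The metric behind this feedback loop is straightforward to compute once arrival and balloon-inflation times are captured. The sketch below shows the calculation on made-up timestamps; the data layout is an assumption for illustration, not Intermountain's reporting pipeline.

```python
from datetime import datetime
from statistics import median

# Hypothetical (door, balloon) timestamp pairs for STEMI cases; not Intermountain data
cases = [
    (datetime(2015, 3, 1, 2, 14),  datetime(2015, 3, 1, 3, 5)),    # 51 minutes
    (datetime(2015, 3, 2, 14, 40), datetime(2015, 3, 2, 15, 37)),  # 57 minutes
    (datetime(2015, 3, 4, 9, 3),   datetime(2015, 3, 4, 10, 8)),   # 65 minutes
]

door_to_balloon = [(balloon - door).total_seconds() / 60 for door, balloon in cases]

print(f"median door-to-balloon: {median(door_to_balloon):.0f} minutes")   # 57 minutes
print(f"cases under the 90-minute standard: "
      f"{sum(t < 90 for t in door_to_balloon)}/{len(door_to_balloon)}")   # 3/3
```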

Another example: The cardiovascular surgical team evaluated published research findings that suggested that blood sugar management helped heart patients after operations. Since surgery and anesthesia increase stress levels, which can cause spikes in blood sugar levels, the team asked their data analyst to build a query to examine average blood sugar levels before, during, and after surgery.5
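A query of that kind might look roughly like the pandas sketch below; the table and column names are invented for illustration and are not Intermountain's warehouse schema.

```python
import pandas as pd

# Hypothetical extract: one row per glucose reading for cardiac surgery patients,
# tagged with the phase of care in which it was taken.
readings = pd.DataFrame({
    "patient_id":    [101, 101, 101, 102, 102, 102],
    "phase":         ["pre_op", "intra_op", "post_op"] * 2,
    "glucose_mg_dl": [118, 310, 342, 131, 295, 388],
})

# Average blood sugar before, during, and after surgery
print(readings.groupby("phase")["glucose_mg_dl"].mean())
```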

The analysis showed that patient blood sugar levels reached between 300 and 400 mg/dL on average, well above the typical range of around 90 to 160 mg/dL. A related query showed that Intermountain patients who went home without having their blood sugar managed had more health issues, including needing to be readmitted to a hospital, than those who received blood sugar management.

The cardiovascular surgical team reviewed the findings with representatives from Intermountain’s four open-heart surgery programs and asked them to think about how to manage blood sugar levels. One hospital started testing blood sugar levels when patients were admitted, and put patients with high baseline blood glucose levels — even those who weren’t diabetic — on insulin. An anesthesiologist at one hospital devised a procedure in which he infused patients with glucose and then adjusted their insulin levels; he found that this brought patient blood sugar levels below 200 mg/dL. He shared the results with his colleagues, who adopted the same techniques. The result of these efforts was a 50% drop in deaths after heart surgery as well as less time in intensive care units and shorter overall stays.

Endocrinology

In 2014, Intermountain published its analysis of diabetics and angiograms in The Journal of the American Medical Association (JAMA).6 JAMA also published a commentary from a doctor at the Mayo Clinic arguing that the real explanation was that Intermountain cares for its diabetic patients so well that they face no higher risk of heart disease than the general population.7

In fact, information sharing has played an important role in how Intermountain providers manage blood sugar levels within their population of diabetic patients. The endocrinology data team analyzed which diabetic patients from across the entire Intermountain group had the lowest average blood sugar levels based on scores from a routine lab test. The practice team took this data and asked the doctors whose patients had the best scores what they had done to help their patients maintain their low levels. 

Answers varied from using motivational tools to having their assistants call the patient every three months. The analysis gave all of the doctors with diabetes patients a way to connect their patients with the data by showing patients their scores and correlating scores with lifestyles. By doing so, the doctors were taking patient care, and analytics, outside the hospital.

Orthopedics

If there has been a data holdout at Intermountain, it is orthopedics. It is effectively a self-contained department, in that orthopedic surgeries are usually one-time events, handled within an orthopedics group without much patient follow-up beyond physical therapy visits. The orthopedics practice does track short-term complications from procedures, such as infection rates, patient time out of work, and how many patients need to return to the operating room. A separate system collects physical therapy outcomes, and its data suggests that some orthopedists’ patients recover more quickly, but it does not measure patients’ progress over time. The data don’t show, for instance, whether full knee replacements produce better long-term results than partial replacements.

Intermountain is evaluating different tools it can use to start to collect that data and use information to better analyze the impact of orthopedics on patient lives. “What I’d love to see is when the patient hits our system, wherever it is, a flag goes up and says ‘it’s been a year since this person had a knee replacement; fill out the survey and give us some follow-up,’” says CMO Wallace. “That will trigger other care-related questions in the EHR. When you can put that kind of information in front of doctors, they’ll start saying, ‘Huh? I’ve always been able to be the Lone Ranger, maybe it does make sense to talk to folks riding the range.’”

Surgery

Intermountain’s chief of surgery, Mark Ott, gets reports on surgical infection rates every six months and is using that data to reduce infection rates in operating rooms. When the data showed that surgical infection rates at the flagship hospital, Intermountain Medical Center, were in line with national norms, he presented the findings to the surgeons there. He said, “You think you’re great, but compared to other hospitals in the country, you’re not above average.”

Intermountain uses a collaborative process to encourage behavioral change. Regarding infections, a committee of clinicians spent a year developing a list of 30 possible causes, then whittled it down to five and made recommendations of changes that would address them. Ott sent out a note announcing the five recommendations, and got, he says, “a bunch of people complaining — the usual thing.” In particular, they hated having to give up bringing personal items into the operating room, including fleece jackets they would wear to keep warm. “They literally hated that,” Ott says. “I would get calls all the time about how stupid that is.” Ott himself had to quit wearing his Boston Red Sox cap and instead cover his hair with disposable surgical caps. The doctors argued that there was no hard evidence that the recommendations would actually help. Ott agreed, but told them that in six to nine months he would have data — and if it didn’t show results, they could go back to the old ways.

In fact, infection rates fell to half the national standard. When the doctors got the data, they were delighted. But they also asked to relax the rules against personal items in the OR. Ott held firm, saying that since it was not clear how much each of the five factors worked, they needed to keep doing them all.

Ott also explained how data is being used to change the way Intermountain surgeons approach postoperative care following gall bladder removals. Each year, Intermountain performs thousands of gall bladder removals. In 90% of the cases patients receive postoperative antibiotics whether or not they have an infection. Ott believed that this standard practice of administering antibiotics was unnecessary. He asked for a data analysis on the use of antibiotics after gall bladder removal. While antibiotics aren’t expensive, they still cost something. 

And if a patient has an allergic reaction to the antibiotic, or develops a drug-resistant C. difficile infection leading to colitis, treatment gets pricey. Ott found that the use rate varied across the system; most hospitals used antibiotics at a near 100% rate, while ambulatory care facilities, usually staffed by the same doctors, did not prescribe them at all for the same gall bladder removal surgery. Same doctors, same operation, just a different building. “Why is that?” Ott asks. He says the data from the different venues show no difference in patient results, so he’s encouraging surgeons to rethink prescribing antibiotics.

Ear, Nose, and Throat

In 2014, Intermountain began applying data analytics to ear, nose, and throat surgeries, a subspecialty service within the surgical services program. Wallace had observed that surgeons used four different methods to cauterize, or seal off, bleeding during tonsillectomies. Each method differs significantly in price. Wallace says the question became: Was one method better than the others in limiting bleeding and improving how patients fared? The data showed that there were essentially no differences in complications, length of stay, or hospital readmissions, says Wallace.

In fact, the oldest (and cheapest) method, electrocauterization, held a slight, though insignificant, statistical edge. When the data was presented to the surgeons, they did not exactly embrace the findings. “They said, ‘that’s all well and good, but — ’ especially for those that use the more expensive new stuff, ‘ — I think my patients do better, they feel better after surgery,’” Wallace says. So a follow-up survey is underway, to collect more data on patient recovery issues. In most cases, there are no dictators in the Intermountain process, says Ott. “We don’t tell the surgeons what to use. We say, ‘Here’s the data. You can use what you want.’”

Care Process Models

Intermountain’s doctors and nurses use dozens of different data-based decision-support tools (also known as care process models) to help them care for patients. In the cardiovascular unit, for instance, a tool runs every morning at 9:15 in all 22 of Intermountain’s hospitals, pulling readings from patients’ vital signs. It sends an email alert telling clinicians which patients are at risk of heart failure, including assessments of their likelihood of being readmitted to the hospital once released, or of dying. That helps Intermountain adapt its care pathways and the way it handles patient care, accelerating the education process for these patients. It might mean assigning patients to palliative care or a hospice.
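To make the shape of such a tool concrete, here is a minimal sketch of a scheduled job that scores patients on their latest vitals and emails an alert list. The scoring rule, threshold, and addresses are placeholders invented for illustration; this is not Intermountain's actual care process model.

```python
import smtplib
from email.message import EmailMessage

RISK_THRESHOLD = 0.7  # placeholder cutoff, not Intermountain's

def risk_score(vitals: dict) -> float:
    """Toy heart-failure risk score from a patient's latest vitals.
    A real care process model would use a clinically validated model."""
    score = 0.0
    if vitals["resp_rate"] > 24:
        score += 0.4
    if vitals["o2_sat"] < 90:
        score += 0.4
    if vitals["heart_rate"] > 110:
        score += 0.2
    return score

def morning_alert(latest_vitals: dict, smtp_host: str = "localhost") -> None:
    """Intended to run on a schedule (e.g., a 9:15 a.m. cron job) across hospitals."""
    scores = {pid: risk_score(v) for pid, v in latest_vitals.items()}
    flagged = {pid: s for pid, s in scores.items() if s >= RISK_THRESHOLD}
    if not flagged:
        return
    msg = EmailMessage()
    msg["Subject"] = "Patients flagged for heart-failure follow-up"
    msg["From"] = "alerts@example.org"    # placeholder addresses
    msg["To"] = "care-team@example.org"
    msg.set_content("\n".join(f"patient {pid}: risk {score:.1f}"
                              for pid, score in flagged.items()))
    with smtplib.SMTP(smtp_host) as smtp:
        smtp.send_message(msg)
```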


[Video: Brent Wallace, chief medical officer; Colleen Roberts, director of operations, cardiovascular clinical care; Lee Pierce, chief data officer, Intermountain Healthcare]

These tools help track things humans might miss, says Kim Henrichsen, Intermountain’s chief nursing officer. She says that over time, a patient’s vital signs can shift subtly, and tools built into the system analyze that data and will automatically send alerts to nurses to monitor patients or check specific vital signs. The algorithms also flag patients who appear to be at high risk for readmission based on previous data patterns, and may lead to Intermountain assigning home care to help reduce the likelihood of readmission. Over time, the hospital has also developed monitoring tools for patients who have a single episode, like hip surgery, versus those with a chronic condition, like chronic heart failure.

Patients who have suffered heart failure are put on up to 14 different drugs, from aspirin to beta-blockers, after their release. Because of the number of medications, Intermountain developed a tool to automatically create the list of medications heart failure patients need. CMO Wallace says this lets clinicians spend their mental energy focusing on what is unique about the patient.

Supply Chain

Industry analysts predict that by 2020 supply costs will exceed labor, hospitals’ top expense. The challenge, they say, is that a lack of price transparency and the absence of any system for sharing cost information leave doctors unaware of their supply costs or of how they might reduce them by requesting equally effective but less expensive alternatives.8

At Intermountain, applying analytics to this challenge started in earnest in 2005, when the company started a supply chain organization. With 12,000 vendors, $1.3 billion in non-labor expenses, and a culture that ceded much purchasing authority to doctors, the supply chain managers had their work cut out for them. Perhaps the most significant challenge was finding a way to reduce expenses for physician preference items (PPIs). These are the devices or supplies that doctors request because they prefer them to comparable products. PPI suppliers worked hard to develop relationships with doctors to create physician loyalty to their products. But PPIs could consume as much as 40% of a hospital’s supply budget — and one study found nearly $5 billion in annual losses in the health care industry due to PPI-driven waste in the supply chains.

In 2014, Intermountain launched Intermountain ProComp, a system designed to reduce costs by tracking its 50 highest-volume procedures and presenting information to surgeons on their supply options in real time.

Launching ProComp has led to significant cost reductions. Ott’s data team dug through about a dozen different systems to figure out what various supplies cost. One thing they found was that some coronary surgeons used sutures that cost $750, while others used sutures that cost $250. The analytics revealed no appreciable difference in patient outcomes. Ott presented the data to the surgeons. “They were fascinated by that,” Ott says. “They had no idea that the things they were using cost so much.” Most of them stopped using the more expensive sutures.

Sometimes, though, Ott had to attack the problem from the supplier side. In bowel surgeries, Intermountain surgeons use two kinds of end-to-end anastomotic staplers. One type costs $270, the other $870. Doctors prefer the more expensive one; two-thirds of the surgeons use it, in fact. Ott says, “I’ve used them both. I don’t really think there’s a difference. But when I talk to my surgeons, they are adamant that the more expensive product is clearly better.”

They felt that way even after Ott showed them data that found the two staplers were equivalent. Surgeons said patients’ bowels leaked more after they used the cheaper stapler, which meant patients would get sick and need another operation. Or they said that it led to more bleeding after the operation.

Ott turned to his data analytics team, who pulled 170 cases from one of Intermountain’s hospitals and combined it with data from the American College of Surgeons’ National Surgical Quality Improvement Program. The data showed that leak rates for the two staplers were the same, at about 5%, and the only major bleeding event involved the more expensive stapler.

Ott went back to the surgeons, who acknowledged the data but still wanted to use the expensive stapler. Ott didn’t force them to quit using it. Instead, he showed his data to the supplier. “I said, ‘either you lower your price to the competitor price, or we’re taking you off the shelf.’ And they immediately lowered their price.” That one minor change saved Intermountain $235,000 a year. In its first year, ProComp cut $25 million from operating costs in its Surgical Services Clinical Program alone. It aims to cut costs by $400 million by 2018.

CHAPTER 3

A New Record System

According to the Organisation for Economic Co-operation and Development, the United States spends, on a per capita basis, more than twice the average spent by 34 industrialized nations on health care ($8,745 in 2012 compared to an average of $3,484), but gets health results towards the bottom of the pack.9 Critics have fastened on the U.S. fee structure as a big part of the problem, arguing that the system is built around paying for visits, tests, and procedures, many of which are unnecessary, some even harmful. This provides an incentive for providers to focus on quantity of services over quality of care. Intermountain has seen this in action for more than 20 years. It believes its use of data has improved quality and therefore saved lives — more than 1,000 to date. But for all the benefits the data-centered care brings, it has been a struggle sometimes to pay for it. That change in elective inductions? It was a huge success for patients, but actually meant a revenue loss. In a value-based model, Intermountain would have been rewarded for the better health outcomes.


[Video: Brent Wallace, chief medical officer, Intermountain Healthcare]

That is a reason why Intermountain is eager to move to a value-based business model where it gets paid for effectively caring for patients. In a value-based model, insurers will reward health care providers that lower costs by sharing the cost savings. The importance of value-based care to the future of U.S. health care is reflected in the U.S. Department of Health and Human Services’ recent announcement that it will tie half of all Medicare provider payments to value-based models by the end of 2018. CDO Pierce knows the effective use of data will be central to making the shift to a new way of doing business.

Intermountain’s new EHR is expected to play a pivotal role in helping the organization make this shift.

iCentra

When selecting its new EHR, Intermountain placed its interest in value-based health care at the forefront of its decision making. It selected Cerner, a large EHR vendor based in Kansas City, Missouri. The executive team believed Cerner paid careful attention to the secondary use of data for back-end analytics, in addition to offering an excellent clinical transaction system that could help clinicians make better patient-care decisions. Intermountain’s configuration of Cerner products is called iCentra.

Once contracts were signed, Cerner set up shop in the offices next to the Intermountain Medical Center and relocated some of its top development talent to Utah. Pierce meets regularly with his counterparts at Cerner, in sessions that occasionally stretch to six hours as they work through deployment strategies. He has moved his office from the headquarters building to Intermountain Medical Center, partly to be closer to the action and partly because of a twist in the deal: Intermountain wanted to retain its own data management and analytics systems, which requires closer coordination between the two organizations.

In the 18 months since they started working together, the two companies have formed a close relationship. Intermountain is consulting with Cerner on a massive Pentagon contract bid, and the two companies are discussing creating new products built on Intermountain’s data management processes and its data warehouse framework.

The Rollout


Intermountain has a standard three-phase approach to all of its technology rollouts. Implementation is the first phase, and includes all of the design, build, training, and “go-live” activities. To prepare for the go-live phase, Intermountain combined a mix of supports that included classroom training, one-on-one coaching sessions, group simulations, super-user experts, physician coaches, and a telephone support help desk that now has the ability to remotely access the user’s screen to solve problems and offer guidance.

Once the system is up and running and technically stable in a given hospital — usually after three weeks — the adoption phase begins. It is typical for large variations in practice to surface only after go-live, so Intermountain has a process in place to identify and standardize these variations and adjust workflow designs. Some physicians may not be using the tools as they were designed to be used, or may be using them in a laborious way. The quick-order page, for example, was redesigned as a result of analytics built on data from the first weeks of use.

The third stage — optimization — typically occurs over a longer time frame, from four months out to many years. This stage reflects ongoing efforts to improve the effectiveness and efficiency of the system. The time frame for this stage depends on the scope of changes that need to be made to the system.

Unwelcome Delays

No system implementation is without bumps, so Pierce’s stomach didn’t exactly sink while standing there in the Las Vegas airport, talking to the physician who said that expectations around the adoption of the tools were too aggressive. Once Pierce was back in Salt Lake City, he set up a conversation with Sameer Badlani, who joined Intermountain in October 2014 as its first chief health information officer (CHIO). The CHIO role was created to reflect that caring for patients was going to expand beyond hospitals and clinics into people’s homes and communities.


[Video: Brent Wallace, chief medical officer; Colleen Roberts, director of operations, cardiovascular clinical care; Lee Pierce, chief data officer, Intermountain Healthcare]

“The common pushback is ‘I’m doing too much data entry, spending less time with a patient,’” Badlani told Pierce. Some of this came because they knew the old system better than the new. Some of it was that in the new system, doctors did need to spend more time inputting data, which they hadn’t really done a lot of before. And some of it was expectations. “They expect to be facile in a matter of two weeks, and it just doesn't work that way,” Badlani said.

In the old system, doctors got a piece of paper with an order on it for a prescription or follow-up, signed it, and sent it over to be input by a nurse or a unit clerk. But in the new system, it was taking five or six minutes to put in an order. “The physician is appropriately saying, ‘My day is getting longer,’” Badlani said.

Pierce grimaced. But as he talked to Badlani, he realized that the main problem was managing expectations and large-scale change management. While some doctors do adapt quickly, most have a longer, slower learning curve.

The iCentra system gives Intermountain the analytics to leverage and support change. They just needed to do a better job of explaining to doctors that yes, it might take six minutes to input data, but once it was in the system, patients were getting their next steps processed far more quickly. Errors in things like prescriptions were also dropping significantly. “When you say to a physician, ‘Look at what this does for your patient,’ that’s really powerful,” Badlani says.

After Pierce’s discussion with Badlani, he and iCentra leadership from Intermountain and Cerner began examining iCentra analytics on how physicians were using the system and how much time they spent on documentation and order entry, looking for clues to where usability needed to improve.

Pierce also found out that Intermountain had inadvertently run a test in the rollout. All the groups got the same basic training, but some groups went out and organized practice sessions on their own time. These groups as a rule handled the rollout much more effectively. So Badlani, along with the lead iCentra physician executive Mark Briesacher and the rest of the rollout team, started to develop a prescribed training methodology involving follow-up coaching sessions for the physicians after the initial classroom training and unit-based practice.

Badlani and Briesacher are focused on supporting the physicians and clinicians through this massive change. They are working with Pierce to use data in this process. “Our nurses and our doctors are believers,” Pierce says. “They’re seeking far more data, and they’re seeking far more opportunities to have the analytics, to prove better ways of providing care and lowering costs.”
Next Steps

Intermountain is not waiting for the industry to fully embrace value-based health care. In 2016, it will dive right in by launching a new insurance product that will make physicians and Intermountain jointly responsible for health care costs. Doctors who reduce costs will earn more income. Wallace thinks this will make them even more focused on data. “If there’s a surgeon in a group who’s not following that care process model, they’re going to look at that surgeon and say, ‘You start to follow the care process model, or you’re out.’ That’s a peer pressure model that can work well in some circumstances,” says Wallace.

Wallace cautions that Intermountain will not force this on people, but that if they don’t adopt the process models, doctors won’t be able to participate in the shared-risk system that is coming to Intermountain. He thinks that by 2018 this shared-risk arrangement will account for 50% to 80% of all Intermountain Healthcare billing.

There will be cultural challenges that emerge from the use of the new system — beyond just getting doctors and nurses to adopt it. Primary care doctors who need to refer patients to specialists will be able to see rankings of these specialists based on internal data and make decisions accordingly. “I’m going to be able to look at what their clinical outcomes are, what their costs are, what their patient satisfaction is,” Wallace says. “That’s going to be totally transparent among the group. Our goal is to make that ultimately publicly transparent.”

Pierce knows that transparency will put even more pressure on data quality. The specialist ranking project is set to launch in mid-2015, right around when Pierce will be evaluating how the most recent phase of rollouts of iCentra has gone. Looking at the iCentra launch should provide more data for analyzing ways to improve, so that each rollout in the future will go even more smoothly.

COMMENTARY

‘No Pain, No Gain’ in the Transition to Data-Driven Health Care

Sam Ransbotham


The heart of the latest analytics initiative at Intermountain Healthcare is the implementation of a new electronic health records (EHR) system. As the case study on the implementation shows, despite Intermountain’s history of success with analytics, even the best system implementations can be difficult pills to swallow. They produce a lot of extra work for everyone, and they carry considerable risk and unexpected difficulties. For those learning new systems, the suffering in the early stages of implementation is concrete and visceral; the promised benefits can seem abstract and far less certain.

Yet despite the considerable effort and potential for difficulties, many aspects of Intermountain’s new EHR implementation are notable and laudable.

As it rolled out the new EHR system, Intermountain limited the number of hospitals and clinics involved in the initial deployment — to 2 of its 22 hospitals and 24 of its 185 clinics. These numbers are small enough to keep the project scope manageable, but at the same time, large enough to create opportunities to benefit from information exchange.

Furthermore, Intermountain has positioned the new EHR system as part of an inclusive analytics initiative with everyone working together on something that, while difficult, has benefits for both patients and the organization itself. Too often, new initiatives leave users feeling forced by management or IT to adopt a new system. The prior cardiovascular, endocrinology, and surgery examples each show that Intermountain uses collaborative approaches to benefiting patients rather than fiat-based mandates.

Intermountain is building on a strong foundation: Its history of prior analytics innovation helps on both the technological and organizational fronts. Technologically, it has a solid infrastructure and technical experience that help reduce project uncertainty. Even an abandoned 2012 overhaul provides a basis for Intermountain technical staff to learn from and perhaps build on. Organizationally, executives and staff have had an analytics culture for many years, with many success stories that illustrate how analytics can transform patient care.

Intermountain’s approach to the role and limitations of technology is savvy. It positions tools as helping “track things that humans might miss” and allowing clinicians to “spend their mental energy focusing on what is unique about the patient.” This mindset helps with setting realistic expectations about what technology can and cannot do, and it reduces resentment of new technology. The key with analytics is to blend the strengths of technology with the strengths of people. Neither alone is sufficient.

The organization’s embrace of transparency is particularly notable. By providing access to data and being forthcoming about its limitations, Intermountain encourages a culture that works to improve data quality. As errors or shortcomings are found, the feedback improves processes. While conceptually easy for many organizations to avow, embracing feedback is difficult to do in practice. Intermountain demonstrates the cumulative benefits that result from building what it calls “the integrity of the data” in a way that engenders a “level of trust from the doctors.”

Intermountain shows signs of analytical maturity throughout the case study. We see senior-level leadership on analytics, a high value being placed on data in people’s day-to-day work, and a widespread analytics culture — all of which are associated with analytical maturity.

An important feature of analytical maturity is that organizations embed analytics in processes rather than simply regard analytics as a set of beneficial, but ad hoc, projects. A process approach is clear in the feedback loops for data quality, common languages for data “across departments and hospitals,” and structured processes for analytical decision making (e.g., operating room clothing). Analytics teams have defined roles (data analysts, data managers, and data architects); equally important, people have career paths and opportunities to progress within the organization.

When all factors are taken together — an analytics history, savvy blending of technology and people, transparency about data sources and quality, incremental implementation, non-adversarial culture, analytical maturity, and a process focus — Intermountain has created an enviable set of achievements around data that bodes well for its future.

But of course, no analytics initiative will be completely smooth, particularly when it involves new computer systems.

Since Intermountain’s new EHR system replaces another, there is a real danger of “second system effect.” With the first system, people are often just happy to get it to work. But a replacement system must do more than the first (otherwise, why replace?), and those building it tend to try to accomplish everything that was left out of the first system and to correct all of the earlier shortcomings. As a result, second systems can be what Frederick P. Brooks, Jr. calls “the most dangerous system.”

Hearing that doctors were expecting to be “facile in a matter of two weeks” must have been insanely frustrating to the project team. Despite what the case study calls a “standard three-phase approach” to training that included “classroom training, one-on-one coaching sessions, group simulations, super user experts, physician coaches, and a telephone support help desk,” my guess is that Intermountain’s chief data officer Lee Pierce was ready to yank his hair out in the Las Vegas airport meeting with a physician who had heard “we would be up and running and as efficient as before in just a couple of weeks.” Where did expectations go astray?

Analytics initiatives bring challenges that differ depending on the organization’s analytical maturity. Beginners struggle to get basic infrastructure and processes established and to get the first, crucial successes that are needed to showcase the system’s value (and provide a foundation for continued building). Advanced organizations, on the other hand, may find they must undertake more complex systems or pervasive changes to continue to extract value from data. As a relatively advanced analytical organization, Intermountain may already have captured the basic opportunities for value from analytics.

With increasing complexity comes increased difficulty in showing value from data, and the case study makes clear that this is true of Intermountain’s new EHR system. Drawing conclusions from data is rarely straightforward, particularly in contexts as complex as health care. For example, it is wonderful to use the opportunity to collect follow-up data if an orthopedics patient later “hits our system, wherever it is.” A holistic overview is a great benefit of an EHR. However, what about patients who don’t hit the system again? It will be important to consider this source of potential bias in reaching conclusions about the strengths or shortcomings of the initial orthopedic treatment.

Or, in another example, consider the inadvertent test that took place during rollout. This is far from a true randomized test. There is something fundamentally different about a group of people who organize their own practice sessions compared with groups that do not; causal conclusions will require analysis of the subsequent follow-up coaching sessions, among other considerations. Difficult questions like these are complex to analyze, but they simultaneously provide opportunity for analytically mature organizations to derive value from analytics. Data from the new EHR system will support this complex analysis, but gathering the data is only the first of many steps.

Across the United States, both patients and physicians express legitimate concerns about EHR systems. Automation and the data that come with it are not free. Getting these platforms in place is costly; they impose a considerable “time tax” on people throughout the system. Physicians will spend more time using a system than writing a note by hand. Nurses will spend more time documenting.

This is true of most, if not all, changes. When organizations replaced typing pools with distributed word processing, managers spent more time typing than they had before. Yet going back to typing pools seems absurd now. I expect many of the changes induced by EHR will seem similarly absurd to return to. And the first generations of these systems will be clunkier than later generations. Unfortunately, these clunkier steps are largely inevitable on the path toward benefits from EHR systems and analytics.

Intermountain provides a nice example of many successes from embracing analytics, both historically and with its current initiatives. But even with this rich history, it will continue to have to work through many issues when it comes to deriving value from data. From that perspective, Intermountain's story should be a cautionary tale for those looking to emulate it. Less analytically mature organizations will be hard-pressed to encounter only the difficulties Intermountain has faced, and can realistically expect more.

Despite the effort required, organizations everywhere (in health care and beyond) need to improve their ability to build value through data.








Reproduced from MIT Sloan Management Review

Design Thinking Students Co-designing Their Own School for the Sake of Empathy 07-01


Design Thinking Students Co-designing Their Own School for the Sake of Empathy




The names of the children mentioned in this article have been changed to protect their privacy.
In order to inspire, motivate, and engage students in the learning experience, it is necessary to look at the world through their eyes. When adults pause and consider the world from the students' perspective, we begin to examine their authentic needs, instead of the needs imposed upon them. Educational models that value and incorporate student input are emerging as empathic to students' needs.
In May 2014, students at The Connect Group School in Los Angeles, CA, began using design thinking to fashion an educational model that meets their needs, gathering regularly throughout the summer to develop their school from the ground up. On August 25, The Connect Group School will pilot the program their students are designing.
With a foundation of freedom and democracy drawn from the Sudbury Valley School model, the students participate in every aspect of school management. Students are involved in hiring, admissions, policy, and budgetary decisions. As in a true democracy, each participant will have a vote in the matters that affect their academic lives.
Design thinking is a dynamic pedagogy for co-learning that cultivates empathy. It is a multidisciplinary approach to solving human-centered problems and an empowering way of addressing needs and concerns. The modes of design thinking promote inquiry, iteration, and prototyping along with critical thinking, communication, teamwork, and making/tinkering. Empathy is highlighted as a mode of its own, wherein the design-thinker attempts to infiltrate and truly come to understand the needs of the end-user.
The design thinking process begins with discovery, moves to ideation and rapid prototyping, and ends with testing and execution. As an evolving process of learning, sharing, dialoguing, and problem solving, design thinking inspires adults and students to learn together. Without a lesson plan as a guide, neither teacher nor student knows where the process will lead. This bi-directional educational process is one I have termed "co-learning"--a human interaction that leads to learning for all parties. These are critical components of learning, because we all start in a place of not knowing, and can all learn something new about old topics.
Design thinking facilitates empathy by encouraging the designer to turn his or her attention towards another person, particularly the end-user being served by the process. In K-12 classrooms, design thinking allows students and staff to engage in challenges and activities that hone their ability to turn attention toward another person, with the intention of learning about needs and experiences outside their own. Design thinking cultivates empathy by training students in attentional skills and by emphasizing the need to help solve problems affecting other people.
By promoting students to a level equivalent to that of their teacher counterparts, design thinking challenges offer students power and voice that are very motivating. It's a methodology that respects their input in solving challenges and problems that matter.
Students Speak:
Sixteen-year-old David, one of three student co-founders/co-designers of the Connect Group School, said the following about using design thinking to build his school: "I am engaged and motivated to design think because I get to have choices and participate in solving problems that matter."
Allowing students a hand in designing their education helps them feel like active and valued participants. It's a qualitatively different experience than showing up to school on the first day and passively receiving instruction. The quality of difference lies in empathy. Students of The Connect Group School have choices and power with regard to what happens in their school. Their viewpoints, values, and perspectives matter in all areas of school that they wish to engage in.
David continued, "Design thinking gives you experience empathizing with an end-user, not just yourself."
Another student who is contributing to the development of The Connect Group School is Ryan, a 15-year-old male. Ryan said: "I am engaged and motivated to design-think because it's a method of carrying out whatever I want and need."
Ryan's comment highlights the level of empowerment, freedom, and agency that design thinking offers students. Everyone, regardless of age, wants to feel like they can contribute something meaningful, solve their own problems, and meet their own needs. These are aspects of self-agency that need to be taught to youngsters as evidence that we value them as free and capable people. Ultimately, including young people in every aspect of school design and management conveys to them that they matter. At the heart of empathy lies the value that someone matters and is worth our time to understand, from their perspective. Children deserve to be included in this homage of respect.
Twelve-year-old Jill enjoys design thinking because it allows her to move about. She talked about moving around during design thinking challenges and also about the field trips she has taken in the community to learn more about education. Empathic education recognizes children's deep need for free play as a method of learning. Incidentally, design thinking often feels like play time, because it is messy, unpredictable, and rapidly changing.
In the process of design thinking their new school, the students at The Connect Group School participate in the creation of a learning model that is relevant to them. It inspires them to work in the summer months to consider their wants and needs and those of other K-12 students.
This is how empathy as a skill is cultivated through design thinking in the K-12 arena. Planning one's own educational framework is a wonderful empathy-building tool. It stimulates students' engagement and intrinsic motivation while encouraging them to practice self-empathy in service of empathy for others.




