Channel Description:

Best content from the best sources, handpicked by Shyam. Sources include Harvard University, MIT, McKinsey & Co., Wharton, Stanford, and other top educational institutions. Domains include cybersecurity, machine learning, deep learning, big data, education, information technology, management, and others.




    Attempts at full automation have shown that humans are still better than robots at some tasks.



    Tech companies have long been valued by investors for their ability to replace employees with technology. Now, alongside software and server farms, they are moving at a breakneck pace to find living, breathing human beings to staff their systems.

    They’re doing so because of a high-profile series of failures of automation, which have prompted a wave of intense pressure from investors, the public, and governments.

    Tesla’s highly automated production line failed to produce cars at the rate CEO Elon Musk promised, prompting questions about the electric-car maker’s solvency. Systems at Google’s YouTube failed to flag extremist and exploitative videos. Russian operatives have worked to influence elections using Facebook, whose systems separately created categories of users with labels such as “Jew hater” that it then allowed advertisers to target.

    While companies such as Google and Facebook still insist that they’re just distribution platforms rather than content creators, and bear limited, if any, responsibility for most of the content they host, they’re increasingly acknowledging they need to do something to curb abuses. In the short term, at least, that approach usually involves more humans.

    “Humans are underrated,” tweeted Musk, as the company struggled to ramp up production of its Model 3 sedan. Musk has blamed an overly automated production process. “We had this crazy, complex network of conveyor belts… And it was not working, so we got rid of that whole thing,” he told CBS.

    Meanwhile, Google and Facebook have been hiring thousands of people to monitor content and advertising on their platforms, amid backlash against their hosting of extremist videos and messages, videos depicting the exploitation of children, propaganda, and content created to manipulate electorates in the US and elsewhere.

    Facebook CEO Mark Zuckerberg reiterated to US legislators last week that the company planned to double its security and content moderation workers to 20,000 people by the end of the year—an investment that he acknowledged would hurt its profitability.

    YouTube CEO Susan Wojcicki in December said the Google-owned video site aimed to have 10,000 people working to find and combat content that violates its policies, a 25% increase according to BuzzFeed.

    Artificial-intelligence experts say Zuckerberg and other tech executives are over-optimistic about the timeline for computers identifying things such as toxic speech, and point to existing systems that fail at that task. A new Barclays research report says that humans are better than robots at “sensorimotor skills” and “cognitive functionality,” meaning humans are less clumsy than robots and are better at making decisions factoring in context and in cases where there’s incomplete information. There are reasons to be confident that humans will retain some of those advantages for decades into the future.

    But any surge in hiring by tech companies is unlikely to significantly offset the toll on employment from the current wave of automation. And the jobs that such companies are hiring for at scale—such as people to watch videos for offensive content—tend to require lower skills, and pay lower wages.









    Linnea Olson tells her story—of repeatedly facing death, then being saved by the latest precision therapy—articulately and thoughtfully, agreeing to discuss subjects that might otherwise be too personal, she says, because it could benefit other patients. She lives in an artist cooperative in Lowell, Massachusetts, in an industrial space, together with her possessions and artwork, which fill most of an expansive high-ceilinged room. Olson is tall, with close-cropped, wavy blonde hair, and dresses casually in faded blue jeans. Although she has an open, informal style, this is paired with a natural dignity and a deliberate manner of speaking.
    “I had a young doctor who was very good,” she begins. “I presented with shortness of breath and a cough, and also some strange weakness in my upper body. And he ordered a chest x-ray.” Years later, she saw in her chart that he had written, “On the off chance that this young, non-smoking woman has a neoplasm”—the beginnings of a tumor in her left lung. But he didn’t mention that to her, and “he ended up getting killed on 9/11—he was on one of the planes that hit the towers.”
    The national tragedy thus rippled into Olson’s life. Never suspecting that her symptoms could be caused by cancer, she spent the next several years seeking a diagnosis. A string of local doctors told her it was adult-onset asthma, hypochondria, then pneumonia. When antibiotics didn’t clear the pneumonia, a CT scan showed a five-centimeter mass in her left lung: an infection? Or cancer? It was the first time she had heard that word. The technicians told her that at 45, she was too young for that. But a biopsy confirmed the diagnosis. “In 2005, when you told someone they had lung cancer,” a doctor later told her, “you were basically saying you were sorry.” Her youngest son was seven at the time. Olson wanted to live.
    Now, 13 years later, she is alive and healthy, a testament to the potential of precision medicine to extend lives. But like precision medicine itself, her story encapsulates the best and worst of what medicine can offer, as converging forces in genetics, data science, patient autonomy, health policy, and insurance reimbursement shape its future. There are miraculous therapies and potentially deadly side effects; tantalizing quests for cures that come at increasingly high costs; extraordinary advances in basic science, despite continuing challenges in linking genes implicated in disease to biological functions; inequities in patient care and clinical outcomes; and a growing involvement of patients in their own care, as they share experiences, emotions, and information with a global online community, and advocate for their own well-being.
    Precision medicine is not really new. Doctors have always wanted to deliver increasingly personalized care. The current term describes a goal of delivering the right treatment to the right patient at the right time, based on the patient’s medical history, genome sequence, and even on information, gathered from wearable devices, about lifestyle, behaviors, or environmental exposures: healthcare delivered in an empiric way. When deployed at scale, this would, for example, allow doctors to compare their patient’s symptoms to the histories of similar patients who have been successfully treated in the past. Treatments can thus be tailored to particular subpopulations of patients. To get a sense of the promise of precision medicine—tantalizingly miraculous at times, yet still far from effective implementation—the best example may be cancer, which kills more than 595,000 Americans each year.
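    As a toy illustration of that matching idea (my own sketch, with invented feature names and records rather than anything from a real clinical system), a nearest-neighbor lookup over simple patient feature vectors captures the basic mechanics:

from math import sqrt

# Hypothetical records: (patient_id, [age, tumor_size_cm, has_ALK, has_EGFR], therapy_that_worked)
history = [
    ("A", [70, 4.0, 0, 1], "EGFR-targeted therapy"),
    ("B", [46, 5.0, 1, 0], "ALK-targeted therapy"),
    ("C", [65, 3.5, 0, 0], "chemotherapy"),
]

def distance(a, b):
    # Plain Euclidean distance; a real system would normalize and weight features.
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def most_similar(features, records, k=1):
    return sorted(records, key=lambda r: distance(features, r[1]))[:k]

new_patient = [45, 5.0, 1, 0]              # resembles patient "B"
print(most_similar(new_patient, history))  # -> the ALK-treated patient's record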

    Patient 4

    In some cases, cancer can be driven by a small number of genes—even a single gene—that can be identified and then targeted. Even in cancers with many mutations, genetic profiling makes it possible to unambiguously distinguish between tumor cells and healthy tissues. That is a great boon in a disease that essentially hijacks the patient’s own biology. Genome sequencing, by precisely defining the boundary between self and non-self, can even enable immunotherapies that kill cancer cells but not others. Still, state-of-the-art precision cancer medicine is something like the surgical airstrikes of the 1960s: vastly better than the carpet-bombing of chemotherapy, but not without risk of collateral damage.
    In 2005, when Olson was diagnosed with lung cancer, surgery, chemotherapy, and radiation—so-called cut, poison, and burn therapies—were the frontline treatments. A friend’s husband, a surgeon, recommended that she go to Massachusetts General Hospital (MGH) for the lobectomy that would remove the lower lobe of her left lung. When she woke from surgery, an oncologist, Thomas Lynch, was standing at the foot of her bed. He was running a clinical trial of an experimental drug he’d helped develop, and she fit the profile of a patient who might benefit.
    Lung cancer is rare before 45, and most common after 65: the average age of patients diagnosed with the disease in the United States is 70, and the cancers themselves are typically loaded with random mutations, caused by repeated, long-term exposures to airborne toxins, as might occur after a lifetime of smoking. But Olson was young and had never smoked. This meant that her cancer was likely being caused not by many mutated genes, but by a single “driver” mutation. There are now eight well-established driver mutations for the disease. Lynch hoped that Olson would have one called EGFR (epidermal growth factor receptor), the only one then known. But she didn’t.
    Lynch explained to her that cancer outcomes traced a bell curve. At one end were those patients who did poorly. Most were in the middle. But at the other end were the outliers, those who lived a long time. “‘Tell me about the outliers,’” she recalls asking him—“almost like it was a fairy tale.” She was floundering, she says, as she faced post-surgical chemotherapy, dreading its cytotoxic effects. Lynch persuaded her not to give up. “We’re going to take you to the brink of death,” he told her, “but we’re trying to cure you.” She read Lance Armstrong’s book, It’s Not About the Bike, as she went through four rounds of treatment. “It is horrible,” she says, looking back on it. But “I’d get on my little exercise bike and say, ‘I am Lance Armstrong. I can do this.’”
    The tumor was unchanged by the chemotherapy. As months passed, Lynch referred to the growing numbers of nodules in her lungs as “schmutz”—never as cancer. He was trying to keep her hope alive.
    In 2008, her symptoms returned, and worsened. Her cancer had progressed to stage IV. In a last-ditch effort, Lynch put her on Tarceva, the targeted therapy for EGFR, anyway, “just in case the genetic test had missed something,” he later explained. But as Olson recalls, “I experienced all of the side effects and none of the benefits.” She asked him how long she had to live. “Three to five months,” he told her. “Should I get my affairs in order?” she asked. “Yes,” he said. In distress, she told a social worker to whom she had been referred, “I need you to help me learn how to die.” “And instead,” Olson says, “she’s really helped me learn how to live.”
    It turned out that even though Olson didn’t have the EGFR mutation, genetic testing done when she started taking Tarceva revealed that she had a different single-driver mutation, ALK, for which a phase 1 clinical trial had just begun. Lynch asked if she wanted to participate in this effort to determine optimal dose, side effects, and efficacy. Patient 1, he told her, had appeared to respond to the therapy, but then died—in part because of it. Olson didn’t want to hasten her own death, but reasoned that doing nothing, she would soon die anyway. She signed on as Patient 4.
    Within days, she felt better. The side effects were mild. At the seven-week mark, she saw Lynch to review scans of her lungs. What had looked like a blizzard was completely gone. “I went from accepting that I was going to die, to ‘Oh my God, I’m going to live a little while longer,’” says Olson. “It was like a fairy tale.” Lynch made it very clear that this did not represent a cure, and that there was nothing after this. Eventually, he told her, there would be secondary mutations. But she’d been given another chance.
    Professor of medicine Alice Shaw, a physician-scientist at MGH who has been working on ALK and its secondary mutations for 10 years, has been Olson’s oncologist since 2009. Lung-cancer treatment has progressed substantially in the last decade, she says, so that molecular profiling of patient tumors is now standard care. Patients eligible for a targeted therapy skip chemotherapy.
    EGFR, the first targetable oncogene (a gene with the potential to cause cancer), was discovered in lung cancer in 2004. “The EGFR gene is mutated in about 10 percent to 15 percent of lung-cancer patients in this country,” Shaw says. Olson’s ALK mutation (technically, a chromosomal rearrangement), discovered in lung cancer in 2007, is present in about 5 percent of patients. There are numerous driver mutations for this disease, seven of which can be turned off with new targeted therapies, which work for about 30 percent of U.S. lung-cancer patients—many of whom can return to their normal lives because the pills are fast-acting and don’t cause as much collateral damage as chemotherapy.
    That is something that should be considered, Shaw says, when weighing the costs of targeted drugs, which run about $15,000 a month for as long as the patient is responding. “Obviously, $180,000 a year is an enormous cost. The question is, how do you weigh these costs, in light of the life-saving benefits of these drugs?” Some of the newest treatments for lung cancer, such as immunotherapies (see “The Smartest Immunologists I Know,” below) are as expensive as targeted therapies, she reports. And traditional chemotherapy often keeps patients out of work, and sometimes leads to hospitalization—costly outcomes. By contrast, targeted therapies allowed Olson to live relatively normally and raise her youngest son, now 20 and an undergraduate at MIT.

    Finding Five Unknown Variables

    Miraculous as they are at their best, targeted therapies do not work forever. That’s because genomic instability is one of the defining features of cancer. “I went a full glorious year before I started to have some progression,” Olson recalls. At that time, in 2009, when the cancer began growing again, patients knew they would soon have to leave the ongoing trial. That could have been the end for Olson. But because she had no symptoms from the early progression, and felt well, she was permitted to stay on the experimental drug for almost three years. Then a phase 1 clinical trial of a second ALK inhibitor opened. Fortunately for Olson, the drug was active against ALK S1206Y, the resistance mechanism that had developed in her cancer’s ALK gene, and it bought her 15 more months (although she suffered gastrointestinal side effects as well as liver toxicity, for which she had to be briefly hospitalized). Her therapy has carried on this way, a continuing cascade of genetic analyses as the cancer adapts, and then a new therapy, just in time to save her. The alternative—standard chemotherapy and radiation—typically extends lung cancer patients’ lives by just three to six months.
    The development of resistance is less a reflection of the efficacy of targeted therapeutics than of the cancer’s ability to evolve. Cancer cells proliferate through division, and mutate rapidly. If a single cancer cell among millions happens to be resistant to a particular therapy, that cell and its progeny eventually become dominant drivers of the patient’s disease. Shaw studies these mechanisms of resistance; once pathologists sequence tumors, the scientists can identify the mutations and develop models of them, she explains. Working with pharmaceutical companies, the researchers test newer drugs against these mutations to see if the therapies are active. Now that there are several inhibitors for EGFR and ALK mutations, Shaw says, she and her colleagues are beginning to explore combination therapies, hoping to stop the cancer before it becomes more complex in response to single-drug treatments.
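    A toy simulation (my own sketch, with made-up growth rates, not numbers from Shaw’s work) makes that selection dynamic concrete: under therapy the sensitive cells shrink away while a single resistant cell and its progeny keep dividing, until the resistant clone dominates.

# Toy model of clonal selection under therapy (illustrative rates only).
sensitive, resistant = 1_000_000.0, 1.0      # one resistant cell among a million sensitive ones

for week in range(30):
    sensitive *= 0.7                         # therapy kills sensitive cells faster than they divide
    resistant *= 1.4                         # the resistant clone keeps expanding

print(round(sensitive), round(resistant))    # ~22 sensitive cells vs. ~24,000 resistant ones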
    Combination therapies are critical against cancer, agrees Peter Sorger, Krayer professor of systems biology and director of Harvard Medical School’s (HMS) Laboratory of Systems Pharmacology (see “Systematic Drug Discovery,”  July-August 2013, page 54). He and his postdoctoral fellow Adam Palmer find that many combination therapies are superior to single drugs across a wide range of solid tumors because of tumor heterogeneity. Heterogeneity arises from genetic differences among cells in a single patient and among tumors in different patients; it likely explains why a particular anti-cancer drug can be effective in some patients but ineffective in others with the same type of cancer.
    In fact, a graph of patient responses traces a bell curve with a long tail: many patients respond only partially, but some do very well (they lie out on the tail). Combination therapies improve rates of success in patient populations (and clinical trials) in this view simply by increasing the odds that a patient will lie out on the tail. In other words, combination therapy overcomes ignorance of which drug will work best in a specific patient; this is true even when a targeted therapy is given to genetically selected populations.
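    The arithmetic behind that bet-hedging argument is simple. Assuming, purely for illustration, that each drug independently gives a strong response in some fraction of patients, a combination only needs one of its components to work (the response rates below are made up, not the paper’s figures):

def p_at_least_one_response(rates):
    # Probability that at least one drug works, assuming independent per-drug response rates.
    p_none = 1.0
    for p in rates:
        p_none *= (1.0 - p)
    return 1.0 - p_none

print(p_at_least_one_response([0.3]))             # one drug:    0.30
print(p_at_least_one_response([0.3, 0.3]))        # two drugs:   0.51
print(p_at_least_one_response([0.3, 0.3, 0.3]))   # three drugs: ~0.66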
    Such bet-hedging is a case of the glass being half full, Sorger says: “existing combinations have taken untreatable disease in which a metastatic case means you die, to one in which a quarter or more of patients are doing well. At the same time, the large impact of unknown variables is the measure of how far we have to go in cancer pharmacology.”
    How do we reconcile this statistical view of responsiveness to cancer therapy with the precise molecular experiments that Shaw and her colleagues are using to design combination therapies for cancers carrying EGFR, ALK, and other mutations? Sorger and Palmer propose that high variability in response to anti-cancer therapy arises because multiple mutations are involved—perhaps six or more in each cancer cell—many of which are unknown. “If we knew all the relevant genes determining drug response in a particular patient, we could be highly predictive, and able to tailor a therapy for each patient,” Sorger says. The studies Shaw has underway are necessary to make such prediction possible in the future. Moreover, in some cases there is evidence that combination therapies can be much more effective than the sum of their parts; there is as yet no systematic way to find such combinations, but they are well worth pursuing. Both Sorger and Shaw agree that, as precision medicine improves and scientists identify the spectrum of mutations involved in drug response, it will be increasingly possible for physicians to tailor therapy to an individual patient’s needs.
    Todd Golub, professor of pediatrics and director of the cancer program at the Broad Institute of MIT and Harvard, is part of an ambitious project to find those several targetable genes—and an estimated 10,000 more like them. The aim of cancer treatment, he says, ought to be the use of molecular analysis to make predictions about what the best therapy should be for each patient, for all types of cancer—the ultimate goal of personalized, precision medicine. He and his Broad colleagues are at work on the “cancer dependency map.” Their goal is to identify all the genes that are unique to cancers, on which any cancer depends for growth—the “Achilles heels” of the disease.
    Their first challenge is to gather the broadest range of cancer-tissue samples they possibly can. Paired with this effort to collect patient information is a laboratory project to create model cancer cell lines and to test all FDA-approved drugs and drugs that are in clinical development—on the order of 5,000 compounds—against them. “You can’t do that in a patient,” notes Golub. Seeing which compounds are effective against these cancers allows researchers to identify those Achilles-heel genes. “That allows us to create a roadmap for drug developers, so that eventually, we will have a full medicine cabinet to make this concept work,” he explains. Of course there are challenges: some therapeutic targets are critical for normal cells, too. “But we are learning,” he adds, “that in some cases, [inhibiting] the function of a target 24/7 can be horribly toxic, but when therapies are used transiently, tumor cells die, and normal cells don’t.”
    The Broad effort is at the beginning stages, with just 500 cancer cell lines, heavily biased toward European ancestry. The fact that whole ethnicities are missing is a measure of how far they have to go. “We’re not going to get there in one fell swoop,” Golub explains. “We’ll get there by keeping people alive longer and longer, until eventually, it becomes a numbers game where the goal is to eradicate all the tumor cells and leave none behind that have drug resistance mechanisms that allow them to escape.” With a complete cancer dependency map, and the molecular profile of a given cancer, physicians could “identify the five drugs predicted to be effective against that tumor. We would put together combinations of drugs that don’t share common susceptibilities to resistance, and unless you had a tumor the size of Manhattan,” there would be no way for the cancer to get around that combination. “We won’t get there during my career for most patients. But for the next generation, I think it is not crazy.”
    What Golub is describing is a rational, systematic approach to building a complete arsenal of targeted drug therapies like those that have extended Linnea Olson’s life and the lives of many other patients. Instead of using them serially to extend life, though, he imagines combination therapies that would effect cures. But there is another approach that might yield results for some patients even sooner.

    “The Smartest Immunologists I Know”

    Immunotherapy is the maverick of cancer research and clinical care, a relatively new strategy in treatment with the potential to cure certain types of cancer now. Harnessing patients’ immune systems to fight cancer represents an approach radically different from that used in targeted drug therapy. There are three distinct techniques: training the immune system using personalized vaccines; reawakening immune cells by stimulating them to recognize cancers through the use of drugs; and engineering a patient’s T-cells outside the body so they will recognize cancer cells and then reinserting those T-cells in patients.
    In what may turn out to be the ultimate precision medicine, married professors of medicine Catherine Wu, an oncologist at Dana-Farber Cancer Institute (DFCI), and Nir Hacohen, director of MGH’s Center for Cancer Immunology and co-director of the Broad Institute’s Center for Cell Circuits, have together created personalized cancer vaccines that train the immune system to recognize and destroy cancer cells. In a small clinical trial, they created personalized vaccines for each of six melanoma patients, and let their immune systems do the rest.
    The process works by training T-cells, white blood cells that are the immune system’s weapons for identifying and destroying infected tissue, to recognize cancer. Instead of targeting driver mutations, as targeted therapies do, this approach teaches the immune system to recognize random mutations. As Hacohen explains, half of cancer tumors have defects in DNA repair, so tumors develop a lot of random mutations, and the mutated proteins are visible, on cell-surface receptors, to T-cells. “The fact that there is almost no overlap” in these mutations between patients, he explains, “is what makes this approach personalized.” Hacohen and Wu design the vaccines by first analyzing a patient’s immune system, then analyzing her tumor, and finally creating a vaccine that will stimulate her T-cells to bind to a set of perhaps 20 different mutated proteins on tumor-cell surfaces. The trick is to create a vaccine that mimics the mutated proteins. When injected into patients, the immune system recognizes these foreign invaders, and stimulates T-cells that proliferate, recognize, and attack those same mutated proteins on cancer cells. Normal cells, because they don’t have such mutations, are spared.
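    A highly simplified sketch of the selection step described above (the peptides, flags, and scores below are invented for illustration; real vaccine design involves far more elaborate sequencing and binding-prediction pipelines): keep the mutated peptides the tumor actually expresses, rank them by how well they are predicted to be displayed to T-cells, and take roughly 20.

# Hypothetical candidate neoantigens: (mutated_peptide, expressed_in_tumor, display_score)
candidates = [
    ("KLSEYATRV", True,  0.92),
    ("QMDRFAALN", True,  0.35),
    ("TTFGHWKLP", False, 0.88),   # not expressed by the tumor, so useless as a vaccine target
    ("AVNDPLQRS", True,  0.77),
    # ...in practice, hundreds of candidates per patient
]

VACCINE_SIZE = 20                 # the article describes vaccines targeting ~20 mutated proteins

expressed = [c for c in candidates if c[1]]
expressed.sort(key=lambda c: c[2], reverse=True)    # best-displayed candidates first
vaccine_targets = [c[0] for c in expressed[:VACCINE_SIZE]]
print(vaccine_targets)            # -> ['KLSEYATRV', 'AVNDPLQRS', 'QMDRFAALN']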
    In each case, radiology of these patients several years later shows no recurrence of disease. Hacohen is reluctant to generalize about the success rate based on such a small sample, but he does note that two other groups (one based at Washington University in St. Louis, one in Germany) have had similar success in trials of cancer vaccines.
    Because this approach targets mutations, it is ideally suited for tumors such as smoker’s lung cancer, or melanoma, in which chronic exposure to carcinogens (UV light in the case of melanoma) has driven lots of mutations, creating a genetically noisy landscape. That is because the more genetically complex a tumor is, the more likely the immune system will recognize it as a foreign invader and try to eradicate it. Hacohen’s labs focus on basic immunology, genomics, and systems biology—what he terms “biological equations” that help distinguish cancer cells from healthy ones. Combining his three fields allows him to do the whole-body analysis necessary to distinguish healthy tissue from the foreign molecules on the surface of cancer cells that the immune system can recognize. But Hacohen is a pure researcher; he doesn’t see patients. Wu, an oncologist, does and can run FDA-approved trials with DFCI oncologists to test the vaccines in patients. The combined expertise of this husband-and-wife team is necessary to complete these extremely specialized therapies.
    Because this type of therapy is not yet commercially available, the eventual market cost of creating custom vaccines is hard to estimate. At the moment, Hacohen explains, the sequencing of individual patients and their respective tumors costs about $5,000 each, but that price is dropping rapidly. Even the computation required to design a tailored vaccine is relatively limited. What does cost a great deal right now, he says, is manufacture of the resulting vaccine, largely because of all the safety mechanisms that must be satisfied before any custom therapy is deployed in a human patient. That engineering alone might cost upward of $100,000. But this price, too, could fall as personalized vaccine development becomes more widely practiced.
    A second approach involves reawakening the immune system. In the same way that cancer evolves to resist drugs, it evolves to evade the body’s natural defenses. As cancer begins in a patient, the immune system targets and kills any tumor cells it sees—but left behind to proliferate are the cancer cells that evade the immune system. Immunology researchers like Fabyan professor of comparative pathology Arlene Sharpe have therefore been working to elucidate how cancer disguises itself. Sharpe, who is interim co-chair of the microbiology and immunology department at HMS, heads the cancer immunology program at the Dana-Farber Harvard Cancer Center and co-directs the Evergrande Center for immunologic diseases at HMS and Brigham and Women’s Hospital. She has collaborated with her husband, professor of medicine Gordon Freeman, a molecular biologist and DFCI researcher, to study those pathways.
    A key mechanism for defeating cancer’s evasion of T-cell attacks is “checkpoint blockade therapy,” on which Sharpe and Freeman have done much of the basic research. This approach reawakens the immune system to the presence of tumor cells. The surfaces of cancer cells often display molecules that bind to inhibitory receptors, known as checkpoints, on T-cells, stopping the T-cells from attacking and killing the tumor.
    In normal immune function, Sharpe explains, these inhibitors are critical because they are, in effect, dials that modulate the immune response, turning its sensitivity to foreign objects up or down. Autoimmune diseases such as type 1 diabetes, in which T-cells destroy the pancreas after mistaking it for a foreign invader, illustrate why these inhibitory mechanisms are so important biologically; they prevent the immune system from attacking healthy tissues. But cancer often cloaks itself in molecules that block the immune response. The result is that “the immune cycle often doesn’t work well in cancer patients,” says Sharpe. “Tumors are the smartest immunologists I know.”
    But drugs can block these inhibitors, by targeting either their receptors on T-cells or binding partners on the surface of cancer cells. Then, the immune system can suddenly “see” tumors, enabling it to target and destroy them. This T-cell awakening therapy is now being combined with other types of cancer treatment, such as targeted therapies that focus on driver mutations, but Hacohen and Wu have also used it in combination with personalized vaccines that focus on random mutations, in order to make the vaccines even more effective.
    A third type of therapy involves re-engineering the immune system by deploying chimeric antigen receptors (CARs): synthesized molecules that redirect T-cells to specific targets. CAR-T therapy, developed at the University of Pennsylvania, has proven highly effective against leukemia, a blood cancer. Marcela Maus, an assistant professor of medicine recruited from Penn and a world-renowned expert in the use of CAR-T therapies, directs the cellular immunotherapy program at MGH and is working to develop such therapies to kill solid tumors.
    CAR-T cells are engineered immune cells that recognize specific markers on the surface of cancer cells and attack them. The process involves removing T-cells from a patient, engineering them to target a particular type of cell, growing them in the lab, and then injecting billions of them into the patient. The upside of CAR-T therapies is the “unprecedented elimination of tumors in the majority of patients,” Hacohen explains, “with the downside of toxicity….You’re killing billions of cells in the body in weeks,” a response that dwarfs anything the immune system could stage unaided. This can lead to “cytokine storms,” as huge numbers of cancer cells die almost simultaneously and have to be flushed from patients. Experts in this technique have developed methods for controlling these storms, but the high cost of the approach—as much as $500,000 per patient—has made it the poster child for the troubling economics of modern cancer care (see “Is Precision Medicine for Everyone?”).

    Outliers No More

    Cost is just one constraint on the aim of ensuring that the best therapies reach the largest possible number of patients. Professor of medicine Deborah Schrag, chief of the division of population sciences at DFCI, makes a distinction between a therapy’s efficacy in a lab or controlled setting such as a clinical trial, and its effectiveness in the population at large. It’s the difference between how well a treatment can work and how well it actually does work given real world conditions. “If a dairy farmer from Maine can’t make it to twice daily radiation treatment in Boston because he has to milk his cows,” that changes the real-world effectiveness of the therapy. Participants in clinical trials are likely to take their medications twice a day exactly as prescribed, but in the routine care context, adherence is imperfect, and that contributes to the efficacy-effectiveness gap. (Key to tracking any intervention’s performance are electronic health records, and Schrag is among the leaders of a cancer data-science effort to develop standards for records used in cancer care; see “Toward a Personal Biomap.”) “Historians of medicine and some prominent skeptics look at the bottom line, and ask what is happening at the population level,” she explains. The reality is that for most patients, advanced lung cancer remains fatal. Leading-edge therapies such as targeted medicine have helped only a subset of the population. “Cancer medicine is the furthest ahead” in the use of genomic analysis to guide therapy, Schrag says, “but it still has a long way to go.”
    But patients like Linnea Olson are no longer outliers. Alice Shaw, her oncologist, says Olson’s appearance on an ABC World News broadcast in 2009 made other lung-cancer patients realize that they ought to be genetically tested, too. One of those patients came to MGH, was treated by Shaw, and appeared on the same show the following year, and that led to another generation of patients realizing that they might have a treatable mutation, too. “Now they help each other,” she says. “This has allowed patients to gain access to therapies that they would never have known about otherwise, because even their doctors didn’t know about them. I have this whole tree of patients connected to each other through social media.” One MGH lung cancer patient recently climbed a peak above 20,000 feet in the Himalayas, and was featured in The New York Times. The comments from readers suggested that he must be “an outlier.” Not so, says Shaw: she has many patients who are performing incredible feats and living for years, now that targeted therapies are available. “These patients are not the rare outliers anymore.”
    Olson is happy to have the company, but jokes that she needs to stay out front: “If I’m not, that means I’m dead,” she says, laughing. Now four years into her third targeted therapy without any apparent cancer progression, she has instead begun experiencing toxicity from the contrast agents used in the CT scans that are required every few weeks as part of clinical trials. “I figured out the other day that I have known I had cancer for 22.4 percent of my life,” she says, and she has had more than 150 CT scans. “That is a huge amount. But it is very easy to put into perspective quickly. I am so lucky to have these problems, because I am alive.” Olson still allows CT scans of her lungs, to which her particular metastatic cancer is confined, but not of her abdomen. That means “I’m non-compliant” in the trial, she says. “But I’ve already donated my body to science, and I want to live. Nobody expected any patient like me to live this long.”





    Can money buy you happiness? 


    It’s a longstanding question that has many different answers, depending on who you ask.
    Today’s chart approaches this fundamental question from a data-driven perspective, and it provides one potential solution: money does buy some happiness, but only to a limited extent.






    Money and happiness

    First, a thinking exercise.

    Let’s say you have two hypothetical people: one of them is named Beff Jezos and he’s a billionaire, and the other is named Jill Smith and she has a more average net worth. Who do you think would be happier if their wealth were instantly doubled?
    Beff might be happy that he’s got more in the bank, but materially his life is unlikely to change much – after all, he’s a billionaire. On the flipside, Jill also has more in the bank and is likely able to use those additional resources to provide better opportunities for her family, get out of debt, or improve her work-life balance.
    These resources translate to real changes for Jill, potentially increasing her level of satisfaction with life.
    Just like these hypotheticals, the data tells a similar story when we look at countries.

    The data-driven approach



    [Chart: GDP per capita vs. self-reported happiness (Cantril Ladder). Data source: World Bank]

    In general, this means that as a country’s wealth increases from $10k to $20k per person, it will likely slide up the happiness scale as well. For a doubling from $30k to $60k, the relationship still holds – but it tends to have far more variance. This variance is where things get interesting.
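    A pattern in which each doubling of income buys roughly the same bump in happiness is what a logarithmic relationship looks like. Here is a minimal sketch of such a model (the coefficients are made up for illustration, not fitted to the World Bank and Cantril Ladder data behind the chart):

import math

def predicted_happiness(gdp_per_capita, a=2.0, b=0.35):
    # Each doubling of GDP per capita adds the same increment, b, to the score.
    return a + b * math.log2(gdp_per_capita)

for gdp in (10_000, 20_000, 30_000, 60_000):
    print(gdp, round(predicted_happiness(gdp), 2))
# 10000 -> 6.65, 20000 -> 7.0, 30000 -> 7.21, 60000 -> 7.56: the $10k-to-$20k doubling
# buys the same increment (+0.35) as the $30k-to-$60k doubling.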

    Outlier regions

    Some of the most obvious outliers can be found in Latin America and the Middle East:
    In Latin America, people self-report that they are more satisfied than the trend between money and happiness would predict.
    Costa Rica stands out in particular here, with a GDP per capita of $15,400 and a 7.14 rating on the Cantril Ladder (which is a measure of happiness). Whether it’s the country’s rugged coastlines or the local culture that does the trick, Costa Rica has higher happiness ratings than the U.S., Belgium, or Germany – all countries with far higher levels of wealth.
    In the Middle East, the situation is mostly reversed. Countries like Saudi Arabia, Qatar, Iran, Iraq, Yemen, Turkey, and the U.A.E. are all on the other side of the trend line.
    Outlier countries
    Even within regions, there is plenty of variance.
    We just mentioned the Middle East as a place where the wealth-happiness continuum doesn’t seem to hold up as well as it does in other places in the world.
    Interestingly, in Qatar, which is actually the wealthiest country in the world on a per capita basis ($127k), things are even more out of whack. Qatar only scores a 6.37 on the Cantril Ladder, making it a big exception even within the context of the already-outlying Middle East. 



    Nearby Saudi Arabia, U.A.E., and Oman are all poorer than Qatar per capita, yet they are happier places. Oman rates a 6.85 on the satisfaction scale, with less than one-third the wealth per capita of Qatar.

    There are other outlier jurisdictions on the list as well: Thailand, Uzbekistan, and Pakistan are all significantly happier than the trend line (or their regional location) would project. Meanwhile, places like Hong Kong, Ireland, Singapore, and Luxembourg are less happy than wealth would predict.










    Fluorescent-labeled cells used to train neural networks. Image: Allen Institute. 


    New 3D models of living human cells generated by machine-learning algorithms are allowing scientists to understand the structure and organization of a cell's components from simple microscope images.

    Why it matters: The tool developed by the Allen Institute for Cell Science could be used to better understand how cancer and other diseases affect cells or how a cell develops and its structure changes — important information for regenerative medicine.

    "Each cells has billions of molecules that, fortunately for us, are organized into dozens of structures and compartments that serve specialized functions that help cells operate," says Allen Institute's Graham Johnson, who helped develop the new model.

    What they did: The researchers used gene editing to label the nucleus, mitochondria and other structures inside live human induced pluripotent stem cells (iPSC) with fluorescent tags and took tens of thousands of images of the cells.

    They then used those images to train a type of neural network known as a generative adversarial network (GAN). That yielded a model that can predict the most likely shape of the structures and where they are in cells based on just the cell's plasma membrane and nucleus.

    Using a different algorithm, they created a model that can take an image of a cell that hasn't been fluorescent-labeled — in which it's difficult to distinguish the cell's components ("it looks like static on an old TV set," Graham Johnson says) — and find the structures.
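    As a rough sketch of the underlying idea, the short training loop below learns to map an unlabeled transmitted-light image to a fluorescence channel. It is a simplified stand-in using a plain convolutional network, a regression loss, and fake data, not the Allen Institute's architecture or code (the article describes GAN-based models trained on real paired images):

import torch
import torch.nn as nn

# Toy image-to-image network: brightfield image in, predicted fluorescence image out.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Random tensors standing in for paired (brightfield, fluorescently labeled) images.
brightfield = torch.rand(8, 1, 64, 64)
fluorescence = torch.rand(8, 1, 64, 64)

for step in range(100):
    predicted = model(brightfield)
    loss = loss_fn(predicted, fluorescence)   # penalize mismatch with the labeled image
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()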

    What they found: When they compared the predicted images to actual labeled ones, the Allen Institute researchers said, they were nearly indistinguishable.

    The advance: Gene editing and fluorescent dyes often used to study cells only allow a few components to be visualized at once and can be toxic, limiting how long researchers can observe a cell.

    Plus, "knowledge gained from more expensive techniques or ones that take a while to do and do well can be inexpensively applied to everyone’s data," says the Allen Institute's Greg Johnson, who also worked on the tool. "This provides an opportunity to democratize science."





    Qualia is a 42-ingredient 'smart drug' designed to provide users with an immediate, noticeable uplift of their subjective experience within 20 minutes of taking it, as well as long-term benefits to their neurology and overall physiologic functioning.


    The Science of Nootropics

    Nootropics, broadly speaking, are substances that can safely enhance cognitive performance. They’re a group of (as yet unclassified) research chemicals, over-the-counter supplements, and a few prescription drugs, taken in various combinations, that are neither addictive nor harmful, don’t come laden with side effects, and are basically meant to improve your brain’s ability to think.
    Right now, it’s not entirely clear how nootropics as a group work, for several reasons. How effective any one component of a nootropic supplement (or a stack) is depends on many factors, including the neurochemistry of the user, which is connected to genes, mood, sleep patterns, weight, and other characteristics.

    However, there are some startups creating and selling nootropics that have research scientists on their teams, with the aim of offering reliable, proven cognitive enhancers. Qualia is one such nootropic. This 42-ingredient supplement stack is created by the Neurohacker Collective, a group that boasts an interdisciplinary research team including Sara Adães, who has a PhD in neuroscience, and Jon Wilkins, a Harvard PhD in biophysics.

    Smart Drugs

    Some of Qualia’s ingredients are found in other stacks: Noopept, for example, and Vitamin B complex are some of the usual suspects in nootropics. Green tea extract, L-Theanine, Taurine, and Ginkgo biloba are also familiar to many users, although many of the other components might stray into the exotic for most of us. Mucuna pruriens, for example, is a source of L-Dopa, which crosses the blood–brain barrier to increase concentrations of dopamine in the brain; L-Dopa is commonly used to treat dopamine-responsive dystonia and Parkinson’s disease.

    The website says that the ‘smart drug’ is designed to provide users with “immediate, noticeable uplift of [their] subjective experience within 20 minutes of taking it, as well as long-term benefits to [their] neurology and overall physiologic functioning.” For people climbing their way up in Silicon Valley, it’s a small price to pay. What would you do with 10 percent more productivity, time, income, or intelligence?







    Most transformative medicines originate in curiosity-driven science, evidence says.

    Would we be wise to prioritize “shovel-ready” science over curiosity-driven, fundamental research programs? In the long term, would that set the stage for the discovery of more medicines?
    To find solid answers to these questions, scientists at Harvard and the Novartis Institute for Biomedical Research (NIBR), publishing in Science Translational Medicine, looked deep into the discovery of drugs and showed that, in fact, fundamental research is “the best route to the generation of powerful new medicines.”

    “The discoveries that lead to the creation of a new medicine do not usually originate in an experiment that sets out to make a drug. Rather, they have their origins in a study — or many studies — that seek to understand a biological or chemical process,” said Mark Fishman, one of three authors of the study. “And often many years pass, and much scientific evidence accumulates, before someone realizes that maybe this work holds relevance to a medical therapy. Only in hindsight does it seem obvious.”

    Fishman is a professor in the Harvard Department of Stem Cell and Regenerative Biology, a faculty member of the Harvard Stem Cell Institute, and former president of NIBR. He is a consultant for Novartis and MPM Capital, and is on the board of directors of Semma Therapeutics and the scientific advisory board of Tenaya Therapeutics.

    CRISPR-Cas9 is a good example of discovery biology that opened new opportunities in therapeutics. It started as a study of how bacteria resist infection by viruses. Scientists figured out how the tools that bacteria use to cut the DNA of an invading virus could be used to edit the human genome, and possibly to target genetic diseases directly.

    The origins of CRISPR-Cas9 were not utilitarian, but those discoveries have the potential to open a new field of genomic medicine.

    Blood pressure medication is another example of how fundamental discoveries can lead to transformative medicines.

    People who suffer from high blood pressure often take drugs that act by blocking the angiotensin-converting enzyme. Those medicines would never have been created without the discovery of the role of renin (a renal extract) in regulating blood pressure in 1898, or without the discovery of angiotensin in 1939, or without the solid understanding of how the enzyme works, shown in 1956.

    This work was not tied earlier to making pills for hypertension, mainly because hypertension was generally believed to be harmless until the 1950s, when studies showed its relationship to heart disease. Before then, the control of blood pressure was itself a fundamental science, beginning with Stephen Hales’ measurement  of blood pressure in a horse in 1733.

    The discovery of ACE inhibitors really reflects the convergence of two fields of fundamental, curiosity-driven discovery.

    Yet some observers believe that projects that can demonstrate up front that they could produce something useful should take priority over projects that explore fundamental questions. Would there be many more medicines if academics focused more on programs with practical outcomes? How would that shift affect people in the future?

    To find answers, Fishman and his colleagues investigated the many scientific and historical paths that have led to new drugs. The study they produced is a contemporary look at the evidence linking basic research to new medicines.

    The authors used a list of the 28 drugs defined by other scientists as the “most transformative” medicines in the United States between 1985 and 2009. The group examined:
    Whether the drug’s discovery began with an observation about the roots of disease;
    Whether the biologist believed that it would be relevant to making a new medicine; and
    How long it took to realize that.

    To mitigate bias, the researchers repeatedly corroborated the assignment with outside experts.
    They found that eight out of 10 of the medicines on their list led back to a fundamental discovery — or series of discoveries — without a clear path to a new drug.

    The average time from discovery to new drug approval was 30 years, the majority of which was usually spent in academia, before pharmaceutical or biotechnology companies started the relevant drug development programs.

    Fishman concluded, “We cannot predict which fundamental discovery will lead to a new drug. But I would say, from this work and my experiences both as a drug discoverer and a fundamental scientist, that the foundation for the next wave of great drugs is being set today by scientists driven by curiosity about the workings of nature.”

    What industry and academic leaders say

    Leaders in biomedicine from industry, business, and academia warmly welcome this new body of evidence, as it supports the case for funding curiosity-driven, non-directed, fundamental research into the workings of life.

    “This perspective on drug discovery reminds all of us that while many in both industry and academia have been advocating for a more rational approach to R&D, the scientific substrate we depend on results from a less than orderly process. The impact of basic research and sound science is often unpredictable and underestimated. With several telling examples, the authors illustrate how they can have a ripple effect through our field.”

    – Jean-François Formela, M.D., Partner, Atlas Venture

    “The paper presents a compelling argument for investing in fundamental, curiosity-driven science. If it often takes decades to recognize when a new discovery should prompt a search for targeted therapeutics, we should continue to incentivize academic scientists to follow their nose and not their wallets.”

    – George Daley, M.D., Ph.D., Dean of the Faculty of Medicine, Caroline Shields Walker Professor of Medicine, and Professor of Biological Chemistry and Molecular Pharmacology at Harvard Medical School

    “There is a famous story of a drunk looking for his lost keys under a streetlight because the light is better there. As Mark reminds us, if we only look for cures where the light has already shone, we will make few if any new discoveries. Basic research shines a light into the dark corners of our understanding, and by that light we can find wonderful new things.”

    — Laurie Glimcher, M.D., President and CEO of the Dana-Farber Cancer Institute and Richard and Susan Smith Professor of Medicine at Harvard Medical School

    “The importance of fundamental discovery to advances in medicine has long been a central tenet of academic medicine, and it is wonderful to see that tenet supported by this historical analysis. For those of us committed to supporting this pipeline, it is a critical reminder that young scientists must be supported to pursue out-of-the-box questions and even new fields. In the end, that is one of the key social goods that a research university provides to future generations.”

    — Katrina Armstrong, M.D., M.S.C.E., Physician-in-Chief, Department of Medicine, Massachusetts General Hospital

    “Human genetics is powering important advances in translational medicine, opening new doors to treatments for both common and rare diseases at an increasingly rapid pace. Yet, these discoveries still require fundamental, basic scientific understanding into the drug targets’ mechanism of action. In this way, the potential of the science can be unlocked through a combination of curiosity, agility, and cross-functional collaboration to pursue novel therapeutic modalities like gene and cellular therapies, living biologics, and devices. This paper illustrates the value of following the science with an emphasis on practical outcomes and is highly relevant in today’s competitive biopharmaceutical environment, where much of the low-hanging fruit has already been harvested.”

    – Andy Plump, M.D., Ph.D., Chief Medical and Scientific Officer, Takeda Pharmaceutical Co.
    “Medicine depends on scientists asking questions, collectively and over generations, about how nature works. The evidence provided by Fishman and colleagues supports an already strong argument for continued and expanded funding of our nation’s primary source of fundamental science: the NIH and the NSF.”

    – Douglas Melton, Ph.D., Xander University Professor at Harvard, Investigator of the Howard Hughes Medical Institute, and co-director of the Harvard Stem Cell Institute

    “Just as we cannot translate a language we do not understand, translational medicine cannot exist without fundamental insights to be converted into effective therapies. In their excellent review, Fishman and his colleagues bring the factual evidence needed to enrich the current debate about the optimal use of public funding of biomedical research. The product of public research funding should be primarily fundamental knowledge. The product of industrial R&D should be primarily transformative products based on this knowledge.”

    — Elias Zerhouni, M.D., President Global R&D Sanofi, former Director of the National Institutes of Health, 2002-2008

    “Fundamental research is the driver of scientific knowledge. This paper demonstrates that fundamental research led to most of the transformative medicines approved by the FDA between 1985 and 2009. Because many genes and genetic pathways are evolutionarily conserved, discoveries made from studies of organisms that are highly tractable experimentally, such as yeasts, worms, and flies, have often led to and been integrated with findings from studies of more complex organisms to reveal the bases of human disease and identify novel therapeutic targets.”

    – H. Robert Horvitz, Nobel Laureate; David H. Koch Professor, Member of the McGovern Institute for Brain Research and of the David H. Koch Institute for Integrative Cancer Research, and Howard Hughes Medical Institute Investigator at Massachusetts Institute of Technology

    “This meticulous and important study of the origin of today’s most successful drugs finds convincingly that the path to discovery lies through untargeted fundamental research. The authors’ clear analysis is an effective counter to today’s restless investors, academic leaders, and philanthropists, whose impatience with academic discovery has itself become an impediment to the conquest of disease.”

    — Marc Kirschner, John Franklin Enders University Professor, Department of Systems Biology, Harvard Medical School

    “Some ask if there is a Return on Investment (ROI) in basic biomedical research. With transformative therapies as the ‘R,’ this work traces the path back to the starting ‘I,’ and repeatedly turns up untargeted academic discoveries — not infrequently, two or more that are unrelated to each other. Conclusion? A nation that wants the ‘R’ to keep coming must maintain, or better, step up the ‘I’: that is, funding for curiosity-driven, basic research.”










    Despite its promise, a lack of spatial-temporal context is one of the challenges to making the most of single-cell analysis techniques. For example, information on the location of cells is particularly important when looking at how a common form of early-stage breast cancer, called ductal carcinoma in situ (DCIS) progresses to a more invasive form, called invasive ductal carcinoma (IDC). “Exactly how DCIS invasion occurs genomically remains poorly understood,” said Nicholas Navin, Ph.D., associate professor of Genetics at the University of Texas MD Anderson Cancer Center. Navin is a pioneer in the field, developing one of the first methods for scDNA-seq.

    Cellular spatial data is critical for knowing whether tumor cells are DCIS or IDC. So, Navin developed topographical single-cell sequencing (TSCS). Navin and a team of researchers published their findings in February 2018 in Cell. “What we found was that, within the ducts, mutations had already occurred and had generated multiple clones and those clones migrated into the invasive areas,” Navin said.

    Navin and his colleagues are also using single-cell techniques to study how triple-negative breast cancer becomes resistant to the standard form of treatment for the disease, neo-adjuvant chemotherapy. In that work, published in an April 2018 online issue of Cell, using scDNA-seq and scRNA-seq, Navin and his colleagues found that resistance to chemotherapy was pre-existing, and thus adaptively selected. However, the expression of resistance genes was acquired by subsequent reprogramming as a result of chemotherapy. “Our data raise the possibility of therapeutic strategies to overcome chemoresistance by targeting pathways identified in this study,” Navin said.
    Revealing Complexity

    The authors of research published in 2017 in Genome Biology also identified lineage tracing as one of the technologies that will “likely have wide-ranging applications in mapping developmental and disease-progression trajectories.” In March researchers published an online study in Nature in which they combined single-cell analysis with a lineage tracing technique, called GESTALT (genome editing of synthetic target arrays for lineage tracing), to define cell type and location in the juvenile zebrafish brain.

    The combined technique, called scGESTALT, uses CRISPR-Cas9 to perform the lineage tracing and single-cell RNA sequencing to extract the lineage records. Cas9-induced mutations accumulate in a CRISPR barcode incorporated into an animal’s genome. These mutations are passed onto daughter cells and their progenies over several generations and can be read via sequencing. This information has allowed researchers to build lineage trees. Using single-cell analysis, the team could then determine the diversity of cell types and their lineage relationships. Collectively, this work provided a snapshot of how cells and cell types diverge in lineages as the brain develops. “Single-cell analysis is providing us with a lot of information about small differences at cell type-specific levels, information that is missed when looking at the tissue-wide level,” said Bushra Raj, Ph.D., a postdoctoral fellow in Alex Schier’s lab at Harvard University and first author on the paper.
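    A toy version of the barcode logic (invented edits; the real scGESTALT analysis reconstructs trees from thousands of cells) shows why accumulated CRISPR edits reveal lineage: cells that share the same early edits descend from the same ancestor, so counting shared edits groups them into branches.

# Hypothetical barcodes: the ordered list of Cas9-induced edits recorded in each cell.
cells = {
    "cell_1": ["edit_A"],
    "cell_2": ["edit_A", "edit_B"],
    "cell_3": ["edit_A", "edit_C"],
    "cell_4": ["edit_D"],
}

def shared_early_edits(a, b):
    # Number of earliest edits two cells have in common (their shared ancestry).
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

names = sorted(cells)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        print(a, b, "shared edits:", shared_early_edits(cells[a], cells[b]))
# cell_1, cell_2, and cell_3 all share edit_A and group into one lineage;
# cell_4, carrying an unrelated edit, sits on a separate branch.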

    Raj’s collaborators included University of Washington’s Jay Shendure, Ph.D., and Harvard Medical School’s Allon Klein, Ph.D., pioneers in the field of single-cell analysis. The team sequenced 60,000 cells from the entire zebrafish brain across multiple animals. The researchers identified more than 100 cell types in the juvenile brain, including several neuronal types and subtypes in distinct regions, and dozens of marker genes. “What was unknown was the genetic markers for many of these cell types,” Raj explained. “This work is a stepping stone,” she added. “It’s easy to see how we might one day compare normal gene–expression maps of the brain and other organs to help characterize changes that occur in congenital disease or cancer.”

    Raj credits single-cell analysis with accelerating the field of developmental biology.
    “People have always wanted to work at the level of the cell, but the technology was lacking,” she said. “Now that we have all of these sequenced genomes, and now that we have these tools that allow us to compartmentalize individual cells, this seems like the best time to challenge ourselves as researchers to understand the nitty-gritty details we weren’t able to assay before.”

    A gold leaf paint and ink depiction of the Plasmodium falciparum lifecycle by Alex Cagan.
    Human disease-relevant scRNA-seq is not just for vertebrates. For example, a team of researchers at the Wellcome Sanger Institute are working on developing a Malaria Cell Atlas. Their goal is to use single-cell technology to produce gene activity profiles of individual malaria parasites throughout their complex lifecycle. “The sequencing data we get allows us to understand how the parasites are using their genomes,” said Adam Reid, Ph.D., a senior staff scientist at the Sanger. In March 2018, the team published the first part of the atlas, detailing its results for the blood stage of the Plasmodium lifecycle in mammals. Reid contends these results will change the fight against malaria. “Malaria research is a well-funded and very active area of research. We’ve managed to get quite a bit of understanding of how the parasite works. What single-cell analysis is doing is allowing us to better understand the parasite in populations. We thought they were all doing the same thing. But, now we can see they are behaving differently.”

    The ability to amplify very small amounts of RNA was the key innovation for malaria researchers. “When I started doing transcriptome analysis 10 years ago, we needed to use about 5 micrograms of RNA. Now, we can use 5 picograms, 1 million times less,” Reid said. That innovation allows scientists like Reid to achieve unprecedented levels of resolution in their work. For Reid, increased resolution means there is hope that science will be able to reveal how malaria evades the immune system in humans and how the parasites develop resistance to drugs. Reid predicted the Atlas will serve as the underpinning for work by those developing malaria drugs and vaccines. “They will know where in the life cycle genes are used and where they are being expressed,” he said. Drug developers can then target those genes. The Atlas should be complete in the next two years, Reid added.
    In the meantime, Reid and his colleagues are focused on moving their research from the lab to the field, particularly to Africa. “We want to look at these parasites in real people, in real settings, in real disease states,” he explained. Having access to fresher samples is one reason to take the research into the field. “The closer we can get to the disease, the better chance we have of making an impact.” Reid anticipates that RNA-seq technology is on the verge of being portable enough to go into the field (see Preparing scRNA-seq for the Clinic & the Field). Everything from instrumentation to software is developing rapidly, he said. Reid also said that the methods used to understand the malaria parasite will likely be used to understand and create atlases for other disease vectors.

    Path Ahead

    It is clear to those using single-cell analysis in basic research that the path ahead includes bringing the techniques into the clinic. “As the technologies become more stable, there will be a lot of opportunities for clinical applications,” Navin said. These include early detection by sampling for cancer markers in urine, prostate fluid, and the like. It also includes non-invasive monitoring of rare circulating tumor cells, as well as personalizing treatment decisions using specific markers. These methods will be particularly useful in the case of samples that today would be labeled QNS, or ‘quantity not sufficient.’ “Even with QNS samples, these methods allow you to get high-quality datasets to guide treatment decisions,” Navin said.




    The complete Global Firepower list for 2018 puts the military powers of the world into full perspective.

    The finalized Global Firepower ranking relies on over 55 individual factors to determine a given nation's PowerIndex ('PwrIndx') score. Our unique formula allows smaller, more technologically advanced nations to compete with larger, less-developed ones. Modifiers (in the form of bonuses and penalties) are added to further refine the list. Some points to note regarding the finalized ranking:

    + Ranking does not rely solely on the total number of weapons available to any one country (though it is a factor) but rather focuses on weapon diversity within the number totals to provide a better balance of firepower available. For example, fielding 100 minesweepers does not equal the strategic / tactical value of fielding 10 aircraft carriers.

    + Nuclear stockpiles are NOT taken into account but recognized / suspected nuclear powers do receive a bonus.

    + First World, Second World, and Third World statuses are taken into account.

    + Geographical factors, logistical flexibility, natural resources, and local industry influence the final ranking.

    + Available manpower is a key consideration; nations with large populations tend to rank higher due to the availability of personnel for supporting both war and industry.

    + Land-locked nations are NOT penalized for lack of a navy; however, naval powers ARE penalized for a lack of diversity in available assets. For example, fielding 100 patrol boats does not provide the same advantage as fielding 4 guided-missile frigates and 2 nuclear-attack submarines.

    + NATO allies receive a slight bonus due to the theoretical sharing of resources should one of the members commit to war.

    + Financial stability / strength is taken into account as finances represent one of several important factors in running a successful campaign.

    + Current political / military leadership is NOT taken into account, as this can be highly subjective and does not necessarily influence in-the-field individual combat performance.
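
    As a purely illustrative sketch of how a score built from many weighted factors plus bonus and penalty modifiers might be combined, consider the toy Python below. The factor names, weights and modifiers are invented; GFP's actual PwrIndx formula and weights are not public, and its published convention is that a lower PwrIndx value is stronger.

```python
# Toy scoring sketch, not the real PwrIndx formula. All names and numbers are assumed.
def toy_power_index(factors, modifiers):
    """Combine normalised factor values (0-1, higher = stronger) into one score,
    then apply multiplicative modifiers (>1 is a bonus, <1 is a penalty)."""
    weights = {"manpower": 0.2, "asset_diversity": 0.3,
               "logistics": 0.2, "finances": 0.3}   # assumed weights
    score = sum(weights[name] * factors[name] for name in weights)
    for multiplier in modifiers.values():
        score *= multiplier
    return score

example = toy_power_index(
    factors={"manpower": 0.9, "asset_diversity": 0.6,
             "logistics": 0.7, "finances": 0.5},
    modifiers={"nuclear_power_bonus": 1.05, "nato_bonus": 1.02},
)
print(round(example, 3))
```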

    For 2018 there are a total of 136 countries included in the GFP database. New to 2018 are Ireland, Montenegro, and Liberia.

    Arrow graphics correspond to each nation's placement against the previous year's list. Green Arrows indicate an increase in rank whilst Red Arrows reflect a decline. Gray 'Double Arrows' reflect no change in ranking; this does not necessarily indicate that no changes occurred across individual values but more so that changes were not great enough to affect year-over-year ranking. Increases/declines are based on many factors and can be related to attrition, financial instability, population fluxes and the like.

    View the Global list

    View India's Firepower Details 

    View comparison between the firepower of China and India







    Diversity means different things to different people. In a study of 180 Spanish corporate managers, we explored perceptions of diversity and found that depending on who is answering, diversity usually means one of three things: demographic diversity (our gender, race, sexual orientation, and so on), experiential diversity (our affinities, hobbies, and abilities), and cognitive diversity (how we approach problems and think about things). All three types shape identity — or rather, identities.

    Demographic diversity is tied to our identities of origin — characteristics that classify us at birth and that we will carry around for the rest of our lives. Experiential diversity is based on life experiences that shape our emotional universe. Affinity bonds us to people with whom we share some of our likes and dislikes, building emotional communities. Experiential diversity influences what we might call identities of growth. Cognitive diversity makes us look for other minds to complement our thinking: what we might call identities of aspiration.

    It is important to remember that categories only serve the purpose of classification; in the real world, differences between these categories are blurred. Diversity is dynamic. But we believe this diversity framework, though somewhat artificial (as all frameworks are), can be useful to companies that are trying to refresh their approach to managing diversity. What kind of diversity does your company focus on? Could you benefit from broadening your perspective? Let’s take a closer look at each in turn.

    Managing identities of origin. Since the 1980s, most global companies have developed diversity and inclusion policies led by human resources. The most frequent include: assessment tools (climate surveys, statistics monitoring, minority targets), human resources programs (flexible policies, mentoring or coaching), communication campaigns, and training programs.

    Consider Sodexho. In 2002 the company hired a chief diversity officer, Rohini Anand, to make diversity a priority. Some of the diversity priorities at Sodexho focused on gender, ethnicity, disabilities, and age. Its diversity strategy included a series of systems and processes covering human resources policies (such as flexibility measures, training, selection processes, and career services); diversity scorecards; and quantitative targets, mainly regarding numbers of women and minorities, not only in the organization in general but also in leadership positions. By 2005 Sodexho was widely recognized as a diversity champion. For more than a decade it has been consistently ranked among the best of the DiversityInc top 50 list, and Rohini Anand has been widely recognized as a global diversity champion.

    For Sodexho and other companies taking a similar approach, the result is an enhanced company image and reputation. Talented individuals in general, but from minorities in particular, select companies in which they expect to feel appreciated.

    Managing identities of growth. Identities of growth often provide us with a feeling of security. Our likes and dislikes change over time, and so our affinity groups change. Identities of growth dictate who we spend time with.

    Many companies have developed friendship-based communities among employees, typically organizing activities such as weekends away, departmental Christmas parties, and so on, in a bid to create emotional ties between workers and the company. But because emotional communities are held together as much by the likes as by the dislikes of members, they can be unpredictable and difficult to manage in the long term. As a result, these emotional communities can sometimes work to the benefit of organizations, but they can just as often end up having the opposite effect, particularly when people share a dislike for certain policies, a particular boss, or for what they consider to be an unfair situation.

    Our research suggests that the best policy for dealing with communities of growth is minimal intervention. Emotional communities will emerge in organizations whether management likes it or not, and will have a life of their own. For that reason it is best to take a neutral position. Creating affinity groups is positive for the company, but these groups should always be voluntary and develop at their own pace, without management interference.

    Managing identities of aspiration. Our cognitive differences find their place in a community of aspiration. In those communities, we are valued for our unique way of understanding and interpreting the world. A community of aspiration is a space where our ideas are valued for their contribution to a common project, regardless of our different traits or individual likes or dislikes.

    Innovative organizations are shifting from managing units to managing challenges or projects, asking employees to voluntarily join projects, creating structures where employees can move out of their comfort zones to join temporary communities of aspiration that strengthen cross-organizational ties and help the company achieve its strategic goals.

    Corporate experience shows that the most effective strategy for companies to manage communities of aspiration is to create the contexts and the projects for them to emerge.

    Valve Corporation, a video game developer, has defined a unique corporate structure with no bosses or managers at all. Each member of the company is invited to define their contribution to the company according to their choices and preferences. A highly talented developer specialized in graphics animation might choose to work on a game by assuming a “group contributor role,” becoming part of the group developing that game.

    After finishing this “group contribution,” the same person might choose to work in a more individualistic fashion on the next task. This “free to choose” approach is mirrored in the firm’s office design. Valve offices incorporate wheeled desks to foster mobility and allow the fast configuration and reconfiguration of groups as well as individual work.

    Understanding multiple types of diversity is particularly relevant in our tribal times. Individuals now construct identities consciously. We want to play with a multiplicity of identities and use them in as many different roles as our different affiliations allow.

    We live in complex times, when complex solutions are needed and a one-size-fits-all approach no longer works. Each form of diversity is different and requires its own management strategy to effectively integrate people. Diversity is a journey and, like any journey, requires careful navigation.

    View at the original source




    A first-generation entrepreneur and a thought leader, Ms Shaw is ranked among the world's most influential people in bio-pharma.
    Ms Shaw is also on the board of directors of the US-India Business Council.
    Bengaluru: 
    India's biotech queen Kiran Mazumdar Shaw has been elected a full-term member of the MIT Corporation, the Board of Trustees of the Massachusetts Institute of Technology (MIT), her company Biocon announced on Thursday.
    "Shaw is among the eight members who will serve the five-year term on the Board from July 1," the Bengaluru based biotech firm in a statement.
    Ms Shaw, 65, is a pioneer of the Indian biotech sector and founder-chairperson of Biocon, a global drug maker for affordable and accessible healthcare.
    "I am honoured to be elected as a full-time member of the MIT Board and look forward to contributing to its journey of making a difference in solving challenges of the world," she said.
    A first-generation entrepreneur and a thought leader, Ms Shaw is ranked among the world's most influential people in bio-pharma by Fierce Biotech, Forbes magazine's 'World's 100 Most Powerful Women' and Fortune's 'Top 25 Most Powerful Women in the Asia-Pacific region'.
    She has also been ranked number one in the Business Captains category on the 'Medicine Maker Power List' 2018, an index of the 100 most influential people in medicine worldwide, where she has been among the top 10 since 2015.
    "It is inspiring to be a part of a premiere research university like MIT, which is engaged in advancing knowledge, leveraging science and technology to address fundamental human needs for food, shelter, energy, transportation and social harmony," said Ms Shaw, who holds key positions in educational, industrial and government bodies, including expert committees of the Department of Biotechnology and governing councils of its institutes.
    She is also on the board of directors of the US-India Business Council and the board of trustees of the Keck Graduate Institute at California's Claremont.
    Ms Shaw was elected a foreign member of the Royal Swedish Academy of Engineering Sciences in 2006.
    She has established a 1,400-bed medical center in Bengaluru to deliver affordable cancer care to patients and a non-profit research institute dedicated to treating diseases.
    Ms Shaw graduated from Bangalore University and has a master's from Ballarat College of Melbourne University in Australia.
    Ranked among the world's leading universities, the 157-year-old MIT is an independent, co-educational and privately endowed institution, with 1,000 faculty members, 11,000 undergraduate and postgraduate students and 130,000 living alumni.


    Shyam's take....

    Bill Gates's next investment in Alzheimer’s research is in a new fund called Diagnostics Accelerator. This project of the Alzheimer’s Drug Discovery Foundation (ADDF) aims to accelerate bold new ideas for earlier and better diagnosis of the disease. Bill Gates, together with other donors, is committing more than $30 million to this cause.


    This is not the first time that research into managing or preventing Alzheimer's disease has received attention or funding. However, when a person like Bill Gates devotes time and funds to a cause, the cause itself receives widespread attention from people all over the world. Awareness of the cause increases manifold. It gives direction to many philanthropists as to which causes they should invest in. It simultaneously encourages the devoted scientists, researchers and medical professionals working for the cause. These people not only find light at the end of the tunnel; they feel the entire tunnel has brightened up. In this way Bill Gates's involvement, even more than the investment itself, proves to be a driver and a huge catalyst. For me, his devoting time to the cause is more important than his investment. I also like his idea of venture philanthropy: it could mean that the research at least partially funds itself, and the end product could bring back some returns for the investors or help create a corpus that could fund further research. Kudos, Mr. Gates.

    Now please read the article.....

    When I announced that I was investing in Alzheimer’s research for the first time last fall, I thought I knew what to expect. I knew I would get to engage more deeply with the brilliant scientists and advocates working to stop Alzheimer’s—and I haven’t been disappointed. The things I’ve seen over the last seven months make me more hopeful than ever.


    What I didn’t see coming was the amazing response I got from the Alzheimer’s community at large. Because my family didn’t talk publicly about my dad’s diagnosis before the announcement, I had yet to experience how remarkable the support community is. So many of you have shared your personal experiences with me, both in person and online (including here on TGN). It helps to hear from others who are going through the same thing.


    Alzheimer’s research is a frontier where we can dramatically improve human life—both the lives of people who have the disease and their loved ones. I’m optimistic that we can substantially alter the course of Alzheimer’s if we make progress in several key areas. One of the biggest things we could do right now is develop a reliable, affordable, and accessible diagnostic.


    The process of getting diagnosed with Alzheimer’s today is less than ideal. It starts with a cognitive test. If you don’t perform well, your doctor needs to rule out all other possible causes for memory loss, like stroke or a nutritional deficiency. Then your doctor can order a spinal tap or PET scan to confirm you have Alzheimer’s. Although these tests are fairly accurate, the only way to diagnose the disease definitively is through an autopsy after death.


    There are two big problems with this process. First, it can be expensive and invasive. Most insurance plans in the United States won’t reimburse tests for Alzheimer’s. Patients often pay thousands of dollars out of their own pockets. Meanwhile, spinal taps can be scary and uncomfortable, and PET scans require the patient to stay perfectly still for up to 40 minutes. That’s difficult for anyone to do—but especially someone with Alzheimer’s.


    Second, patients aren’t being tested for the disease until they start showing cognitive decline. The more we understand about Alzheimer’s, the clearer it becomes that the disease begins much earlier than we previously thought. Research suggests Alzheimer’s starts damaging the brain more than a decade before symptoms start showing. That’s probably when we need to start treating people to have the best shot at an effective drug.


    This delay is a huge problem in the quest for a scientific breakthrough. It’s currently so difficult to find enough eligible patients for a clinical trial that it can take longer to enroll participants than to conduct the study. We need a better way of diagnosing Alzheimer’s—like a simple blood test or eye exam—before we’re able to slow the progression of the disease.  


    It’s a bit of a chicken and egg problem. It’s hard to come up with a game changing new drug without a cheaper and less invasive way to diagnose patients earlier. But most people don’t want to find out if they have the disease earlier when there’s no way to treat it. The commercial market for Alzheimer’s diagnostics simply isn’t there. There’s promising research being done, but very few companies are looking at how to turn that research into a usable product.


    That’s why my next investment in Alzheimer’s research is in a new fund called Diagnostics Accelerator. This project of the Alzheimer’s Drug Discovery Foundation (ADDF) aims to accelerate bold new ideas for earlier and better diagnosis of the disease. Today I’m joining Leonard Lauder, ADDF, the Dolby family, the Charles and Helen Schwab Foundation, and other donors in committing more than $30 million to help launch Diagnostics Accelerator.


    Diagnostics Accelerator is a venture philanthropy vehicle, which means it’s different from most funds. Investments from governments or charitable organizations are fantastic at generating new ideas and cutting-edge research—but they’re not always great at creating usable products, since no one stands to make a profit at the end of the day. Venture capital, on the other end of the spectrum, is more likely to develop a test that will actually reach patients, but its financial model favors projects that will earn big returns for investors.


    Venture philanthropy splits the difference. It incentivizes a bold, risk-taking approach to research with an end goal of a real product for real patients. If any of the projects backed by Diagnostics Accelerator succeed, our share of the financial windfall goes right back into the fund.


    My hope is that this investment builds a bridge from academic research to a reliable, affordable, and accessible diagnostic. I expect to see lots of new players come to the table, who have innovative new ideas but might not have previously had the resources to explore them. If you think you’re one of these bold thinkers, we want to hear your great ideas. I encourage you to apply for funding on the new Diagnostics Accelerator website here.


    Imagine a world where diagnosing Alzheimer’s disease is as simple as getting your blood tested during your annual physical. Research suggests that future isn’t that far off, and Diagnostics Accelerator moves us one step closer.






    Every one of India’s 1.3 billion people uses an average 11kg of plastic each year. After being used, much of this plastic finds its way to the Arabian Sea and Indian Ocean, where it can maim and kill fish, birds and other marine wildlife.

    Fisherman in India’s southern state of Kerala are taking on the battle to cut the level of plastic waste in the oceans.

    When the trawlers drag their nets through the water, they end up scooping out huge amounts of plastic along with the fish. Until recently the fishermen would simply throw the plastic junk back into the water.

    But last summer Kerala’s fisheries minister J. Mercykutty Amma started a scheme to change this. Under her direction, the state government launched a campaign called Suchitwa Sagaram, or Clean Sea, which trains fishermen to collect the plastic and bring it back to shore.

    In Suchitwa Sagaram’s first 10 months, fisherman have removed 25 tonnes of plastic from the Arabian Sean, including 10 tonnes of plastic bags and bottles, according to a UN report on the scheme.

    From waste to roads

    Once all the plastic waste caught by the Keralan fishermen reaches the shore, it is collected by people from the local fishing community - all but two of whom are women - and fed into a plastic shredding machine.

    Like so many of India’s plastic recycling schemes, this shredded plastic is converted into material that is used for road surfacing.

    There are more than 34,000km of plastic roads in India, mostly in rural areas. More than half of the roads in the southern state of Tamil Nadu are surfaced with plastic. This road surface is increasingly popular as it makes the roads more resilient to India’s searing heat. The melting point for plastic roads is around 66°C, compared to 50°C for conventional roads.
    Using recycled plastic is a cheaper alternative to conventional plastic additives for road surfaces. Every kilometre of plastic road uses the equivalent of a million plastic bags, saving around one tonne of asphalt. Each kilometre costs roughly 8% less than a conventional road.

    And plastic roads help create work. As well as the Keralan fishing crews, teams of on-land plastic pickers across India collect the plastic waste. They sell their plastic to the many small plastic shredding businesses that have popped up across the country.

    Plastics ban

    The need for schemes such as Suchitwa Sagaram is emphasised by research that shows 90% of the plastic waste in the world’s oceans is carried there by just 10 rivers - two of which are in India.

    According to a study by the Helmholtz Centre for Environmental Research, India’s Indus and Ganges rivers carry the second and sixth highest amounts of plastic debris to the ocean. The Indian Ocean, meanwhile, is choked with the second highest amount of plastic out of all of the world’s oceans. 



    Like Kerala’s fisheries minister, Indian politicians appear to be taking action in the face of this mounting crisis.

    This month India’s prime minister Narendra Modi pledged to eliminate all single-use plastic in the country by 2022, starting with an immediate ban in urban Delhi.
    The move came just three months after India’s western state of Maharashtra issued a ban on virtually all types of plastic bags, disposable cutlery, cups and dishes, as well as plastic containers and packaging.

    Residents face fines ranging from 5,000 rupees (US$73) for a first-time offence to 25,000 rupees ($367) and jail time for repeat offenders, while the state’s Environment Department is also encouraging people to recycle bottles and milk bags through a buy-back scheme.
    While’s India’s plastic problem is substantial due to the size of its population and its rate of economic growth, schemes such as those in Maharashtra, Delhi and Kerala set an example to western nations.



    In the US, for example, each person on average generates up to 10 times the amount of plastic waste generated by their Indian counterpart.

    If western nations followed India’s lead of combining political pressure with entrepreneurial ventures, perhaps the world would stand a chance of avoiding the predicted catastrophe of there being more plastic than fish in the sea by 2050.











    The idea that science skills are innate and great discoveries are made only by “lone geniuses” is losing traction in STEM. 

    Before Lauren Aguilar began her freshman year of college, she had dreams of becoming a neuroscientist. She remembers sitting in a lecture hall for her very first course, Chemistry 101. The professor had required the students to read the first chapter of the textbook before arriving. As someone with a passion for STEM who had excelled in high school, Aguilar had been confident the course was going to go well.

    But then she, a Latina woman, looked around the room. She didn’t see many people who looked like her, either women or men or women of color. “The seed of doubt was planted right then,” she says. “If there aren’t people like me here, then maybe this field isn’t for people like me.”

    The professor began the class with a demand: Anyone who didn’t understand everything in the first chapter perfectly should immediately drop the class.

    “I said, well, I didn’t understand everything perfectly, so this isn’t for me,” she says. “And right then and there I dropped that course and dropped that major. That one experience absolutely changed the course of my career.” 
    This out-of-place feeling is not uncommon in STEM and contributes to the lack of diversity in STEM fields. The NSF’s 2018 STEM Inclusion Study showed that women and racial and ethnic minorities, as well as those who identify as LGBTQ and those with disability status, report more feelings of marginalization and experiences of exclusion in STEM fields compared to white men. 
    The experience didn’t derail Aguilar’s dreams of a career in STEM. Instead, it propelled her into another field: social psychology. She wanted to try to understand what leads some people to feel like they belong in certain fields where others don’t, and how that leads to things like career engagement, learning outcomes, teamwork and innovation. Aguilar is now a diversity and inclusion consultant, helping organizations, many of them STEM related, create cultures of inclusion and belonging.

    Breaking the mindset

    According to Micha Kilburn, director of Outreach and Education at the National Science Foundation’s Joint Institute for Nuclear Astrophysics Center for the Evolution of the Elements, people have been studying STEM education for as long as we’ve been doing science. But it wasn’t until recent decades that these studies became more formal. Since then, the field of STEM education studies has been on the rise, with studies done both in academia and in industry, many dealing with diversity, inclusion and intervention. 
    As part of her postdoctoral research at Stanford University, Aguilar collaborated with her advisor, Greg Walton, an associate professor in the department of psychology, and Nobel Laureate Carl Wieman, a professor in the department of physics and in the Graduate School of Education, to bring insights about STEM education to the field of physics and give educators tools to increase diversity in the field. In 2014, they published a paper called “Psychological insights for improved physics teaching” in Physics Today.
    One important insight drawn in the paper, Aguilar says, is the idea of a “growth mindset,” which originated with Stanford psychology professor Carol Dweck in her book Mindset.
    “Growth mindset is a set of beliefs that talent, intelligence and skill can be grown and exercised like a muscle, rather than being fixed or innate, like eye color,” she says. “If you have a fixed mindset, the most important goal is to prove your intelligence at all costs. When you run up against dead ends or are struggling and putting a lot of effort into something, it threatens your view of your intelligence and makes you fear that other people might find you out. 
    “For people who have a growth mindset, effort is an exciting opportunity to learn and grow. It means you’re building that talent.”
    In her research, Dweck found that these two mindsets lead to different learning processes and outcomes, causing people to engage in learning in very different ways. 
    “Dweck has shown how different types of praise can produce different mindsets in children,” Wieman says. “A strong fixed mindset in a learner, teacher or parent is very much a self-fulfilling prophecy if nothing is done to intervene. The belief that you cannot succeed, and prominent authority figures telling you that you cannot succeed, is very effective at ensuring most people will not be successful at a challenging task. Even relatively small interventions can shift students of all ages from a fixed to a more growth mindset, and their performance improves accordingly.”


    Genius culture
    According to Aguilar, studies have shown that fixed mindsets are much more prevalent in STEM fields than in liberal arts.
    “Something that’s problematic for STEM is this idea of a lone genius scientist,” she says. “It’s a stereotype about how work gets done that really leads people who don’t fit that stereotype to feel like they don’t belong.”
    In more mathematical sciences such as physics, Wieman says, the idea that the skills required to succeed are innate is particularly persistent. 
    “These beliefs are most strongly linked to math in our society,” Wieman says. “At some point it became fashionable to be ‘stupid’ in math and science. Rather than saying you or your child isn’t working hard enough and that’s why they’re doing poorly in math, you can say ‘he just doesn’t have a brain that is good for math.’”
    Allison Olshefke, a recent physics graduate from the University of Notre Dame, believes that the idea that physics skills are innate has a lot to do with the history of the field. 
    “I think there’s just this historical idea that the people who have made it really big in physics and have lasted through the ages were just inherently brilliant,” Olshefke says. “So that became what was valued as what was needed to make those kinds of contributions. 
    “And that just reinforces itself. The people who show promise earlier on in physics without having to work as hard for whatever reason are going to be encouraged more from the beginning, and that encouragement is going to keep them going. And then we learn from that experience to encourage those same types of people in the next generation.”
    But despite the pervasiveness of the idea that STEM skills are innate, discoveries in science are more often than not a product of hard work and collaboration, as evidenced by the recent discoveries of gravitational waves and the Higgs boson by experiments made up of thousands of scientists each. And, Olshefke adds, it’s not as if people are born with the ability to do calculus.
    “The idea that math is a language you need to learn to speak goes along with the growth mindset,” Olshefke says. “If you’re learning a new language, it’s going to look and sound completely unintelligible to you when you begin, but then as you work and practice, it’s going to get easier to understand.”
    In an article called ‘The cult of genius,’ Julianne Dalcanton of the University of Washington says that in physics, there’s no more damning phrase than saying someone is a “hard worker.” In general, Kilburn says, our society is much more likely to view white and Asian men as brilliant, and women and other underrepresented minorities as hardworking.
    “This idea that you have to be born a genius or born with talent hits the fields that are more mathematically inclined, in particular physics,” Kilburn says. “Physics, in particular particle theory, is at the far edge of the mindset that innate brilliance is the most important quality required to succeed. There have been published studies that show the more the field values brilliance or innate talent over dedication, the fewer women and underrepresented minorities that they have.”

    Hidden biases and combatting stereotypes

    Olshefke, who will soon begin a graduate program at Notre Dame to become a high school math teacher, spent a lot of her undergraduate career doing physics education research. Olshefke met Kilburn at a luncheon and found that the questions she was asking about gender diversity in physics and STEM resonated with her own experiences as a woman pursuing physics.
    Olshefke became involved with a study Kilburn was doing in which they evaluated letters of recommendation written by high school teachers. They had seen in previous research that in academic letters of recommendation, there are language differences based on the gender of the applicant. 
    “We wanted to find out if these implicit biases extended into high school letters of recommendation as well, since these letters of recommendation are written at a crucial time when students are applying to colleges and programs,” Olshefke says. “We wanted to make sure that everybody is getting recommended in a way that’s going to create an equal playing field for admittance into programs for STEM.”
    They looked at letters of recommendation high school teachers had written for Notre Dame’s high school programs from 2013 to 2017. They looked through more than 1700 applications, pulling out words from categories that had been pointed out in previous research to try to identify differences between letters written for men and women. 
    “We ended up really only focusing on two of the categories: grindstone words and ability words,” Olshefke says. “Grindstone words describe students as working hard, putting in a lot of effort, while ability words describe natural talent and innate skill. This idea that women are described as working hard more often and men were more likely to be described as innately talented was reflected in the letters that we read. 
    “Yet when we looked at the quantitative portion of the recommendation where teachers rated students in different categories, women and men were rated identically throughout all of those. So we saw this disconnect between how teachers are quantitatively rating their students and how they're qualitatively describing their students.”
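
    The flavor of that word-category tally can be sketched in a few lines of Python. The word lists below are short, invented stand-ins for the grindstone and ability categories drawn from prior research, and the example letters are hypothetical.

```python
# Illustrative sketch: count grindstone vs. ability words in recommendation letters.
import re

GRINDSTONE = {"hardworking", "diligent", "dedicated", "effort", "persistent"}  # assumed list
ABILITY = {"brilliant", "gifted", "talented", "natural", "genius"}             # assumed list

def category_counts(letter_text):
    """Return counts of grindstone and ability words in one letter."""
    words = re.findall(r"[a-z']+", letter_text.lower())
    return {
        "grindstone": sum(w in GRINDSTONE for w in words),
        "ability": sum(w in ABILITY for w in words),
    }

# Hypothetical letters for two applicants.
print(category_counts("She is diligent and hardworking, putting in great effort."))
print(category_counts("He is a gifted, talented student with a natural flair."))
```
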
    A fixed mindset can keep programs from admitting a diverse pool of candidates, and it can also drive candidates away, Aguilar says. When a STEM field or a particular STEM department, research center or firm espouses a fixed mindset, research shows that women and underrepresented minorities feel less trust in that organization. 
    “They’re worried about not belonging,” she says. “They’re worried that they're going to be seen through the lens of a stereotype. Stereotypes are really just fixed perceptions of people.”
    This sentiment resonates strongly with Olshefke, who was one of only three women physics majors in her year.
    “As a woman in STEM,” she says, “you’d be less likely to raise your hand and ask a question during lecture because you didn’t want to reflect badly on women in physics. You’d be more afraid to go to office hours. You’d be worried people would think, ‘Oh, women don't understand things as quickly as men.’ Even though nobody is blatantly excluding you from doing anything, there’s still a little bit more fear because you’re different from everyone else.”
    Olshefke remembers a time in high school when she was passed over for an “outstanding physics student” award because her teacher felt she didn’t ask enough questions in class. 
    “I was the only girl in my class, so I wasn’t comfortable asking questions,” she says. “There was just a lack of understanding of what I was feeling in the class. I think it speaks to the same kind of lack of knowledge about how women and men are experiencing different worlds as they go through physics.”


    Changing the face of STEM

    One way to confront the issue of inequalities in STEM is by having conversations about the experiences of women and underrepresented minorities in physics. 
    “There needs to be a discussion of experiences and what the issues actually are,” Olshefke says. “Having an open classroom and a supportive teacher who’s willing to talk about the issues that their students are going through will make a huge difference. It matches up really well with the growth mindset.”
    When organizations have this growth mindset, Aguilar says, individuals from underrepresented backgrounds feel like they are going to be seen as individuals, not stereotypes, and respected and valued for their own contributions. They feel like they’ll have a chance to learn and grow.
    “Decades of research has shown us that a growth mindset leads us to be more effective learners, teachers and managers, as well as creates a culture of inclusion and diversity in our STEM education centers,” she says. “Our brains develop and grow new neuronal connections every day. So if we believe in neuroplasticity, we need to believe in the growth mindset.”
    Aguilar adds that the research has shown that diversity leads to better decision-making and more innovation. She cites a research study done with juries that compared one jury of all white jurors to another of mixed races. The juries had been asked to listen to a case and make a decision at the end. The researchers found that the more racially diverse juries actually considered more of the facts of the case in their deliberation and reached a more accurate or fair decision. 
    “The reason was that each person felt like they couldn’t assume the perspective of everyone in the room,” she says. “They had to really think about each piece of information from all different angles and not make assumptions about what people would think or believe. It not only brings more ideas to the table, but it helps us challenge our own assumptions, be better thinkers and argue our points more clearly. It’s not just a nice-to-have, diversity is a must have to ensure that we make the best decisions and create the most innovative science.”


    Learning to appreciate physics

    In physics in particular, Kilburn says, having more diversity and inclusion could lead to new frames of thought and revolutions in our understanding of the universe.
    “We think of physics as a very objective science, but for something to be truly objective, you have to ask all the questions and look at it from all perspectives,” she says. “If you’re training everybody through the same system and choosing the same types of people, then you’re going to ask the same types of questions. You might miss out on some of those left-field questions that lead to huge breakthroughs. If we want to be a really objective science, we have to ask questions from all angles, which requires people from all different backgrounds.”
    Kilburn adds that creating a more inclusive culture in STEM will not just increase diversity in the fields but will also enable others to have an appreciation for it as well. 
    “As soon as you tell somebody that you’re a physicist,” she says, “some of the most common responses are ‘I hated that class,’ or ‘I could never do that, you’re so smart.’ All students enter and leave the field with different proficiencies, but they all are capable of learning and appreciating the subject more. 
    “The arts do this: Just because you couldn’t play the flute doesn’t mean you stopped listening to and appreciating music. I think that we don’t focus on physics appreciation as much as we could to combat that socially awkward loner genius stereotype.”
    According to Wieman, everyone, regardless of their career, will be able to make better decisions if they have some understanding of STEM and how to use it. 
    “Our way of life is so based on technology that one is regularly confronted by issues at work and home where STEM can help a person make better decisions,” he says. 
    “More importantly, mankind is faced with critical decisions about things like energy sources and use of resources that will impact our world and species far into the future. These issues are fundamentally technical at their heart, so a person cannot make wise decisions on these issues without a grasp of STEM.  If we want to preserve democracy and our world, we must have all students learn STEM better, which research shows is quite possible if we improve the way we teach.”



    Shocking: Water turns red, as nearby nickel processing plant says the cause is decades of contamination from Soviet times, which the company is working to overcome.




    The ‘river of blood’ has appeared again, two years after the plant was fined by an eco-watchdog.

    Nornickel, owner of Nadezhdinsky factory, also known as Nadezhda Metallurgical Plant, says it has taken major action to clean up the environment around Norilsk and in particular around the controversial Daldykan River.

    Nikolay Utkin, director of the Polar Division of Norilsk Nickel, told The Siberian Times: 'The reason why the water in the Daldykan River got the reddish-brown colour is the active melting of snow, a powerful flood, which washes off the stuff that had been gathering here for decades.

    'Unfortunately, during the Soviet era, the environmental issues were resolved on a residual basis. 
    'At present, Norilsk Nickel is making serious efforts to solve this problem.'
    Critics have suggested a dammed slurry lake appears to be leaking again into the Daldykan River, so much so that the sight is visible from space satellites, but this is denied by the company. 
    In the lake are ‘tailings’, the residue of ore after processing, it has been reported. 
    The red tinge of the Arctic river, in the north of Krasnoyarsk region, is also seen on a video that has appeared on local news sites.

    The press service of the plant said there was 'no emergency' but initially did not comment further, said local reports. 

    The Nornickel empire, better known as Norilsk Nickel, has $16.5 billion in assets.





    In 2016 the ecological watchdog Rosprirodnadzor announced that an 'administrative fine' had been imposed over the ‘river of blood’.

    The exact amount of the fine was not divulged but it could not be higher than 40,000 roubles - or $650. 

    The fine was criticised at the time as too small, but there was also an understanding that the problem of pollution leaking into the river would be solved. 



    Sergey Dyachenko, Chief Operating Officer of Nornickel, was on record saying: ‘We hope that it will not happen in future.’

    Now Mr Utkin has revealed: 'In 2016, the slurry pipeline from Nadezhdinsky Metallurgical Plant to the tailings was completely replaced, which helped to eliminate the main cause of the (problem).

    'Daily monitoring of this facility allows us to say that there are no leaks. 

    'For two years of intensive use the pipeline has proved its reliability.

    'The results of environmental monitoring of the territory, both visual and in the lab, confirm a significant improvement compared to the situation during the flood in the past year.

    'In 2017, as part of the implementation of the three-year plan for the containment and cleaning of the spills, Norilsk Nickel began to clean up the area adjacent to the pipeline. 



    'In total, 84,000 tons of spills, a considerable amount of scrap metal and various garbage were removed. 

    'In all, more than 2 million square metres (227.89 hectares) was cleared. 

    'The cost of the cleaning in 2017 amounted to about 150 million roubles ($2.4 million).

    'In the years 2018-2019  more than 220 million roubles ($3.5 million) will be allocated to the cleaning of the long-term pollution. 

    'We expect that, after carrying out all the planned works, the colouring of the Daldykan River in the period of heavy rains or high water will be minimised.' 

    Regional deputy leader Anatoly Samkov warned in 2016: ‘Such violations are one of the facts of environmental mismanagement. 

    'The administrative sanctions imposed on the company cannot serve as a serious preventive measure and we will demand the tightening of existing legislation for such cases. 

    'We will keep the situation under control.'

    Previously Alexei Kiselyov, of Greenpeace Russia, blamed iron salts.



    It was impossible to say if there was damage to local fauna without investigating the site, he said.
    'Results of the tests are needed,' he added.

    Locals say the river frequently turned red in Soviet times, but there were no eco-campaigners then to point to environmental damage. 

    'It's spring, and that means the Daldykan turns red,' said one longtime resident. 
    Others questioned the company's claims that the water was clean.







    All humans begin life as a single cell that divides repeatedly to form two, then four, then eight cells, all the way up to the 26 billion or so that make up a newborn. Tracing how and when those 26 billion cells arise from one zygote is the grand challenge of developmental biology, a field that so far has only been able to capture and analyze snapshots of the development process.

    Now, a new method developed by scientists at the Wyss Institute and Harvard Medical School (HMS) brings that task into the realm of possibility using evolving genetic barcodes that record the process of cell division in developing mice, enabling the lineage of every cell in a mouse’s body to be traced back to its single-celled origin.

    The research is published today in Science as a First Release article.

    “Current lineage-tracking methods can only show snapshots in time, because you have to physically stop the development process to see how the cells look at each stage, almost like looking at individual frames of a motion picture,” said senior author George Church, who is a core faculty member at the Wyss Institute, professor of genetics at HMS, and professor of health sciences and technology at Harvard and MIT. “This barcode recording method allows us to reconstruct the complete history of every mature cell’s development, which is like playing the full motion picture backwards in real time.”

    The genetic barcodes are created using a special type of DNA sequence that encodes a modified RNA molecule called a homing guide RNA (hgRNA), which was described in a previous paper. The hgRNA molecules are engineered such that when the enzyme Cas9 (of CRISPR-Cas9 fame) is present, the hgRNA will guide the Cas9 to its own hgRNA sequence in the genome, which Cas9 then cuts. When the cell repairs that cut, it can introduce genetic mutations in the hgRNA sequence, which accumulate over time to create a unique barcode.
    The researchers implemented the hgRNA-Cas9 system in mice by creating a “founder mouse” that had 60 different hgRNA sequences scattered throughout its genome. They then crossed the founder mouse with mice that expressed the Cas9 protein, producing zygotes whose hgRNA sequences started being cut and mutated shortly after fertilization.

    “Starting with the zygote and continuing through all of its progeny, every time a cell divides there’s a chance that its daughter cells’ hgRNAs will mutate,” explained first author Reza Kalhor, a postdoctoral research fellow at the Wyss Institute and HMS. “In each generation, all the cells acquire their own unique mutations in addition to the ones they inherit from their mother cell, so we can trace how closely related different cells are by comparing which mutations they have.”

    Each hgRNA can produce hundreds of mutant alleles; collectively, they can generate a unique barcode that contains the full developmental lineage of each of the approximately 10 billion cells in an adult mouse.
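
    A toy simulation makes the recording idea concrete: daughter cells inherit all of their mother's barcode edits and occasionally gain new ones, so cells that share more edits diverged more recently. This is an illustrative sketch only, not the authors' code; the per-division mutation rate and the naming scheme are assumptions.

```python
# Toy simulation of heritable, accumulating barcode edits across cell divisions.
import random

random.seed(0)
MUTATION_RATE = 0.3   # assumed chance of a new edit per daughter cell per division
_edit_counter = 0

def divide(barcode):
    """Return two daughter barcodes, each inheriting the mother's edits."""
    global _edit_counter
    daughters = []
    for _ in range(2):
        child = set(barcode)
        if random.random() < MUTATION_RATE:
            _edit_counter += 1
            child.add(f"edit_{_edit_counter}")   # a new, unique mutant allele
        daughters.append(child)
    return daughters

# Grow a small lineage from a single "zygote" barcode for three rounds of division.
generation = [set()]
for _ in range(3):
    generation = [d for cell in generation for d in divide(cell)]

# Cells that share more edits sit closer together in the lineage tree.
for i, barcode in enumerate(generation):
    print(f"cell_{i}:", sorted(barcode))
```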

    The ability to continuously record cells’ development also allowed the researchers to resolve a longstanding question regarding the embryonic brain: Does it distinguish its front from its back end first, or its left from its right side? By comparing the hgRNA mutation barcodes present in cells taken from different parts of two mice’s brains, they found that neurons from the left side of each brain region were more closely related to neurons from the right side of the same region than to neurons from the left side of neighboring regions. This result suggested that front-back brain patterning emerges before left-right patterning in central nervous system development.

    “This method allows us to take the final developmental stage of a model organism and from there reconstruct a full lineage tree all the way back to its single-cell stage. It’s an ambitious goal that will certainly take many labs several years to realize, but this paper represents an important step in getting there,” said Church.

    The researchers are now focusing on improving their readout techniques so that they can analyze the barcodes of individual cells and reconstruct the lineage tree that has been recorded.

    “Being able to record cells continuously over time is a huge milestone in developmental biology that promises to exponentially increase our understanding of the process by which a single cell grows to form an adult animal and, if applied to disease models, it could provide entirely new insights into how diseases, such as cancer, emerge,” said Donald Ingber, director of the Wyss Institute. Ingber is also the Judah Folkman Professor of Vascular Biology at HMS and the vascular biology program at Boston Children’s Hospital, and professor of bioengineering at Harvard’s John A. Paulson School of Engineering and Applied Sciences.

    Additional authors of the paper include Kian Kalhor from Sharif University of Technology in Tehran, Iran; Leo Mejia from HMS; Kathleen Leeper and Amanda Graveline from the Wyss Institute; and Prashant Mali, associate professor at the University of California, San Diego.

    This research was supported by the National Institutes of Health and the Intelligence Advanced Research Projects Activity.

    View at the original source




    Here’s how to assemble your personal dream team, with tips from business expert Anthony Tjan.


    Everyone can use a mentor. Scratch that — as it turns out, we could all use five mentors. “The best mentors can help us define and express our inner calling,” says Anthony Tjan, CEO of Boston venture capital firm Cue Ball Group and author of Good People. “But rarely can one person give you everything you need to grow.”


    In this short list, Tjan has identified the five kinds of people you should have in your corner. You probably already know them — and it’s possible for one person to cover two or more categories — so use this list as both a guide and a nudge to deepen your bond with them.


    One reminder from Tjan: Mentorship is a two-way street — a relationship between humans — and not a transaction. So don’t just march up to people and ask them to advise you. Take the time to develop genuine connections with those you admire, and assist them whenever you can.

    Mentor #1: The master of craft


    “If you know you want to be the best in your field — whether it’s the greatest editor, football quarterback, entrepreneur — ask, Who are the most iconic figures in that area?” says Tjan. This person can function as your personal Jedi master, someone who’s accumulated their wisdom through years of experience and who can provide insight into your industry and help fine-tune your skills. Turn to this person when you need advice about launching a new initiative or brainstorming where you should work next. “They should help you identify, realize and hone your strengths towards the closest state of perfection as possible,” he says.

    Mentor #2: The champion of your cause


    This mentor is someone who will talk you up to others, and it’s important to have one of these in your current workplace, says Tjan: “These are people who are advocates and who have your back.” But they’re more than just boosters — often, they can be connectors too, introducing you to useful people in your industry.

    Mentor #3: The copilot


    Another name for this type: Your best work bud. The copilot is the colleague who can talk you through projects, advise you in navigating the personalities at your company, and listen to you vent over coffee. This kind of mentoring relationship is best when it’s close to equally reciprocal. As Tjan puts it, “you are peers committed to supporting each other, collaborating with each other, and holding each other accountable. And when you have a copilot, both the quality of your work and your engagement level improve.”

    Mentor #4: The anchor


    This person doesn’t have to work in your industry — in fact, it could be a friend or family member. While your champion supports you to achieve specific career goals, your anchor is a confidante and a sounding board. “We’re all going to hit speed bumps and go through uncertainty in life,” says Tjan. “So we need someone who can give us a psychological lift and help us see light through the cracks during challenging times.” Because the anchor is keeping your overall best interests in mind, they can be particularly insightful when it comes to setting priorities, achieving work-life balance, and not losing sight of your values.

    Mentor #5: The reverse mentor


    “When we say the word ‘mentor,’ we often conjure up the image of an older person or teacher,” says Tjan. “But I think the counterpoint is as important.” Pay attention to learning from the people you’re mentoring, even though they may have fewer years in the workplace than you. Speaking from his own experience, Tjan says, “Talking to my mentees gives me the opportunity to collect feedback on my leadership style, engage with the younger generation, and keep my perspectives fresh and relevant.”










    Researchers at the University of Alabama at Birmingham have shed light on an epigenetic reprogramming mechanism that underlies the development of ischemic cardiomyopathy.

    The researchers hope the finding, recently published in Laboratory Investigation (a Nature-family journal), will pave the way for personalized care for people with the condition.

    Ischemic cardiomyopathy, caused by narrowed coronary arteries that restrict blood flow to the heart, is the most common form of congestive heart failure, yet it can only be managed with symptomatic treatment because the molecular mechanisms underlying the condition are not well understood.
    Now, Adam Wende and colleagues have shown that the epigenetic changes seen in ischemic cardiomyopathy probably reprogram the heart's metabolism and change the organ’s cellular remodeling.
    The team compared heart tissue samples taken from the left ventricles of five ischemic cardiomyopathy patients with tissue taken from six non-ischemic cardiomyopathy patients.
    One well-known epigenetic change is the addition to or removal of methyl groups from cytosine, one of the four main bases that make up DNA. Hypermethylation is associated with reduced gene expression and hypomethylation with increased gene expression.
    The team found that the heart tissue of ischemic cardiomyopathy patients carried an epigenetic signature that differed from that of non-ischemic cardiomyopathy patients. This signature was found to represent a well-established metabolic change, in which the heart switches from oxygen-dependent energy production to an anaerobic mechanism.
    "Altogether, we believe that epigenetic changes encode a so-called 'metabolic plasticity' in failing hearts, the reversal of which may repair the ischemic and failing heart," says Wende.
    Wende and colleagues found that hypermethylation was associated with a decreased expression of the genes involved in oxidative metabolism. An upstream regulator of metabolic gene expression, KLF15, was found to be suppressed by an epigenetic regulator called EZH2. The team also found that genes involved in anaerobic glycolytic metabolism were hypomethylated.
    This involvement of EZH2 could potentially provide a molecular target for further research into precision-based heart disease treatments.





    When Nichelle Obar learned she was pregnant with her second child last year, she never expected that her pregnancy, or her baby, would make history.

    But when the 40-year-old food-and-beverage coordinator from Hawaii and her fiancé Christopher Constantino went to their 18-week ultrasound, they learned something was wrong. The baby’s heart was larger than it should have been, and there was evidence that fluid was starting to build up around the organ as well. Both were signs that the fetus was working extra hard to pump blood to its fast-growing body and that its heart was starting to fail.
    Obar’s doctor knew what could be causing it. Obar and Constantino are both carriers of a genetic blood disorder called alpha thalassemia, which can lead to dangerously low levels of red blood cells. Red blood cells carry hemoglobin, which binds to oxygen and transports it from the lungs to feed other cells–so fewer red blood cells means low levels of oxygen in cells throughout the body. Neither parent is affected by the condition, but depending on how their genes combined, their children could be.
    When Obar was pregnant with their first child, Gabriel, the couple was told that if he had the disease, his prognosis would be grim. “The information we got was that most babies don’t survive, and if they do survive to birth, they might not live for too long,” Obar says. Gabriel was lucky. The DNA he inherited from his mom and dad did not endow his cells with enough of the mutation to make him sick. 
    But soon after that 18-week ultrasound, their second baby, a girl, was officially diagnosed with alpha thalassemia. “We were pretty devastated,” Obar says. They did not have many options: their daughter would need blood transfusions in utero just to improve her chances of being born, and even if she survived to birth, she might need regular transfusions for the rest of her life, relying on a healthy donor’s blood to make up for the low oxygen in her own.
    Their genetic counselor did have one other suggestion, but it was a long shot. She had just learned about a study at the University of California, San Francisco (UCSF), testing a daring new way to potentially treat alpha thalassemia: a stem-cell transplant given to the baby in utero.
    In utero stem-cell transplants had been tried before for the blood disorder but with limited success. Blood stem cells, which develop into all of the different types of blood cells, are extracted from a donor’s bone marrow, processed in a lab and injected directly into the umbilical vein connecting the fetus to the mother’s placenta. Ideally, the donor’s healthy stem cells then start dividing and take over for the fetus’ defective blood cells. But removing bone marrow can be risky in pregnant women, so past trials involving alpha thalassemia used stem cells from fathers, which were often rejected. This new trial raised an ethical question: Was it worth the risk to the mother in order to possibly save the fetus? There was also a chance the transplant could harm Obar’s daughter more than it helped. But on the basis of new studies suggesting that a developing fetus would tolerate a mother’s transplanted cells better than a father’s, Dr. Tippi Mackenzie, a professor of surgery at UCSF and the leader of the study, believed it was worth a shot.
    Obar had concerns, but if the cells worked as they were expected to, it could give her daughter a chance at life, hopefully even a normal life free of her disease. She and Constantino decided to try it. Their daughter would be the first fetus in the world to receive stem cells from her mother in a carefully monitored clinical trial. 
    While blood stem cells from bone marrow have long been a cornerstone of treating blood cancers like leukemia and lymphoma, Mackenzie’s trial extracting the cells from a pregnant woman to treat a developing fetus in utero is just one of several innovative uses of stem cells to treat a growing list of diseases with cells instead of drugs. And promising studies are inching more of these stem-cell-based treatments closer to finally being tested in people. 
    With stem cells like those found in bone marrow, scientists are taking advantage of what the body does naturally: generate itself anew. Many of the adult body’s organs and tissues, including fat cells and blood, are equipped with their own stash of stem cells whose sole job is to regenerate cells and tissues when older ones are damaged or die off and which can be harvested for research and growth outside the body.
    Some organs are not endowed with these large stem-cell reservoirs, however, most notably the brain and heart muscle. So more than two decades ago, scientists found another source of these flexible cells–in embryos that were donated for research from in vitro fertilization clinics. They learned how to grow these cells in the lab into any cells in the body. That opened the possibility that conditions like heart disease, diabetes or even psychiatric disorders might eventually be treated by replacing damaged tissues or organs with healthy ones, which could provide cures and treatments that didn’t require drugs or surgery.
    But using cells obtained from human embryos raised serious ethical questions; because extracting the embryonic stem cells required terminating what some felt was a living human being, for years federal law prevented scientists from using government funds to conduct research on these cells.
    Beginning in 2006, scientists found a detour around this ethical roadblock. A Japanese team led by Shinya Yamanaka from Kyoto University showed it’s possible to take a skin cell from any person, erase its life history as a skin cell and return it to the clean slate it had in the embryo–turning it essentially into an embryonic stem cell without the morally complicated provenance. Called induced pluripotent stem (iPS) cells, these malleable cells can be coaxed in a lab dish, with the right cocktail of factors, into becoming heart muscle, brain nerves or insulin-pumping pancreatic cells. 
    In the quest to try these treatments on patients, there have been false starts. In 2009, the FDA approved the first embryonic-stem-cell clinical trial, which involved transplanting nerve cells made from stem cells into paralyzed people to restore the function of spinal nerves. In initial tests with mice, however, the transplanted cells started to form concerning clumps, which were not tumors but raised enough alarms about the safety of the therapy that the FDA put the study on hold; after resuming the trial, the company conducting the research eventually decided to stop it.
    Now, with more years of study and experience, scientists are preparing to test whether stem cells that transform into heart muscle could replace dead tissue after a heart attack, for example, or whether pancreatic cells that can’t produce enough insulin might be replaced with new cells that can do the job in people with Type 1 diabetes. Researchers even hope to one day treat brain disorders like Parkinson’s with new neurons made from stem cells that can replace the damaged motor nerves in the brain that lead to uncontrollable tremors.
    “With stem cells we can now get to the root cause of a disease and start looking for cures rather than [treatment] patches,” says Dr. Deepak Srivastava, director of the Roddenberry Stem Cell Center at the Gladstone Institutes and a professor at UCSF.
    Not only can stem cells lead to new treatments for diseases where they can replace ailing cells, but they can also provide a critical new way to study conditions that have remained black boxes because scientists simply didn’t have the luxury of studying live cells. Now labs across the country are incubating so-called mini-brains, made up of tens of thousands of brain cells grown from iPS cells, to serve as models for studying psychiatric disorders from autism to schizophrenia. Such knowledge could lead to new treatments in a field where therapies haven’t been as widely successful as doctors hoped.
    Putting the entire universe of stem-cell research together, from iPS cells to the new use of blood stem cells that Obar’s daughter received from her mother, Mackenzie says, “it’s an unbelievably exciting time to be in medicine, with all of these things exploding around us.”

    Wednesday is feeding day for Dr. Job de Jong’s 300 mini-brains. It takes de Jong, a postdoctoral fellow in the division of molecular therapeutics at Columbia University, a couple of hours to painstakingly suck out the few microliters of waste each ball of brain tissue has generated over the past week with a pipette, being careful not to disturb the cells themselves, and replace the fluid with a pinkish-orange liquid diet of growth factors, nutrients, glucose and protein-building amino acids.
    The cells, barely visible at the bottom of tiny wells in the neuropsychiatry lab’s version of an ice-cube tray, are somewhere between a poppy seed and a peppercorn in size. Made from iPS cells, they could provide the first window into understanding what goes wrong when psychiatric disorders strike.
    Treatment for psychiatric illnesses still lags behind advances in other diseases, mainly because it’s been nearly impossible to access the living brain for testing. As a substitute, neuroscientists relied on mouse brains, or slides of human brain tissue obtained after people with mental illnesses had died, as their primary source of data. Now Dr. Sander Markx, director of the precision-medicine initiative at Columbia who oversees de Jong’s work on the mini-brains, is hoping it could lead to a pioneering study in using stem cells to find and test new treatments for psychiatric disorders for the first time.
    So far, the mini-brains contain the same 20,000 genes that any human cell’s DNA contains and produce all of the relevant proteins that any brain cell would. (Because they lack all the structures of a whole brain, however, Markx and de Jong prefer to call them “organoids.”) The balls of brain tissue he is nurturing came from iPS cells generated from people in the Amish community. Some are from healthy people, others from those affected by a rare genetic brain disorder that involves autism-spectrum symptoms, intellectual disability and epileptic seizures. The stem cells were developed into the brain organoids to study how that genetic aberration affects normal brain development. “Now we have this opportunity to study the processes of how [brain cells] grow and develop and observe them in the lab,” de Jong says. He and his team are investigating how closely the organoids replicate actual disease processes in people and are hoping to eventually use the mini-brain cells to screen for promising drugs that may undo the effects of the mutation.
    Scientists are also making headway in regenerating tissues and parts of organs to simply replace ones affected by disease. People with cancer whose windpipes or urethras have been destroyed by tumors, for example, can grow new ones from their own cells, reducing the risk of rejection from a transplant.
    One early trial to treat macular degeneration, completed in 2014, is already showing promising results in patients. The trial involved growing embryonic stem cells obtained from IVF embryos into retinal pigment epithelial cells, the same cells that start to degrade in people with the disease, eventually robbing them of their sight. The cells were then introduced into the eyes of patients to replace their failing retinas. After nearly two years, more than half of the small number of people who were legally blind at the start of the study have reported some improvement in their vision.
    Treatments are also beginning to take advantage of iPS technology to make adult stem cells act as if they’re embryonic–sidestepping the ethical concerns that cling to the real kind.
    The process is especially critical for healing the human heart. Adult heart muscle no longer divides, or it divides so infrequently that when heart tissue is damaged–as in a heart attack–it doesn’t regenerate. Instead it turns into scar tissue, hampering the heart’s ability to pump blood. But Srivastava of the Gladstone Institutes has found that in a developing fetus, heart-muscle cells are actively dividing in order to form the heart, and he isolated four genes that are turned on during that period and then switched off at birth to stop heart cells from continuing to divide. Reactivating those genes in healthy adult heart cells made them divide again. And turning on embryonic genes even in scarred heart tissue converted those cells into new muscle as well. “We figured out what nature’s toolbox is for making the heart in the embryo,” he says, “and we redeployed the same cues in the adult to reprogram support cells to becoming new heart muscle.”
    Srivastava says the strategy may be useful not only for producing new heart muscle but for growing other types of cells too. In late 2016 he co-founded Tenaya Therapeutics to refine the technique, and the company is now preparing a treatment to test in patients.
    Such stem-cell-based biotech companies are popping up throughout the country to address different types of diseases. At Semma Therapeutics, based in Cambridge, Mass., Douglas Melton, a co-director of the Harvard Stem Cell Institute, is pursuing ways to generate a population of insulin-pumping pancreatic cells from people affected by Type 1 diabetes, like his two children. His latest studies showed that the cells, made from iPS cells, can detect and respond to changing levels of sugar and effectively dial up and down how much insulin they produce. But with Type 1 diabetes, replacing these cells with new ones from stem cells doesn’t solve the entire problem, since the immune system seems to be attacking the pancreatic cells. So he and his colleagues at Semma developed a way to protect the newly formed insulin-making pancreatic cells from destruction by encasing them in a membrane that can slip past the immune system. Melton hopes to test that delivery system, and his insulin-making cells made from stem cells, in the next two years. “Insulin was discovered in 1920, and I like the idea that at the 100-year mark we may be done injecting insulin,” he says. 
    As with any emerging technology, the opportunities that stem cells represent have also been shadowed by the potential for exploitation. A report published in the New England Journal of Medicine in 2017 described a study in which retinal cells created from stem cells extracted from patients’ own fat cells were transplanted to treat macular degeneration; it was shut down after three people in the trial were left with severe vision loss following the treatment. A review of the trial revealed that the volunteers paid the company running the study for the experimental treatment, which is unusual for clinical trials. The review also exposed irregularities in how the people were recruited and informed about the study, and raised questions about exactly what types of cells the people received.
    “Whatever we take forward to test clinically, we’d have to make sure the therapy we are using is safe,” says Srivastava. 
    Obar’s fears about being the first pregnant woman to use her own stem cells in a study to treat her baby’s alpha thalassemia in utero were quickly assuaged when she watched the blood transfusions take place. “I watched on a video the stem-cell machine and saw the white dots that were the stem cells swirling in the needle that was going into me,” she says. Her daughter received five transfusions via an injection into the umbilical vein through Obar’s abdomen. “I was just blown away by how it looked,” she says. “It was pretty cool.”
    For Obar, the possibility that the stem cells could become a permanent fix for her daughter’s condition was worth the risks of being the pioneer. And the procedure does seem to be working. Before the birth, Obar’s doctors warned her that her daughter might look blue when she took her first breaths and that she might seem weaker than other newborns. Not only did her daughter survive the pregnancy, but she also let out a lusty cry when she was born that immediately put Obar’s mind at ease. Now 7 months old, the baby, whom they named Elianna, is eating well and working on rolling over. There’s still a chance she may show some developmental delays and cognitive effects from her condition in the future, but Obar and Constantino are hoping for the best.
    Mackenzie gives Elianna another blood transfusion once a month, just to be safe, and plans to continue monitoring her carefully for a year to look for signs that Obar’s blood cells are starting to populate her daughter. Depending on how well Elianna does, Mackenzie plans to enroll more expectant mothers whose babies are affected by the blood disorder in the study.
    The scientific impact of Mackenzie’s history-making stem-cell trial may be yet unknown, but the impact on the family is right there in the baby’s name. “I wanted a name to signify the fighter she is and what she went through,” says Obar. Throughout her pregnancy, none seemed quite right until she met the nurse who helped with her daughter’s first in utero blood transfusion. The nurse’s name was Elianna, which she learned means “God has answered.” “It’s perfect,” Obar says. 







    Here’s something to digest: Scientists in Cincinnati have grown miniature versions of an esophagus, the organ responsible for guiding your food to your stomach. And in a first, they did it entirely using human stem cells.

    Called organoids, these tiny balls of lab-grown tissue resemble a real human esophagus, the researchers report today in the journal Cell Stem Cell. Previously, scientists succeeded in growing all sorts of organoids—stomachs, kidneys, brains, and even an esophagus made using mature patient tissue as the starting material. (Here’s how one team used a spinach leaf to create a mini beating heart.)

    These tiny organs-in-a-dish help scientists study how organs develop normally, and they’re used to figure out how these body parts go wrong, giving rise to cancer and other disorders.

    “Three-dimensional laboratory models of human esophagus are badly needed, especially since the mouse anatomy is fundamentally different to a human’s,” says Rebecca Fitzgerald, an esophageal cancer researcher at the University of Cambridge who wasn’t involved in the study.

    And since organoids act as a kind of stand-in for the real thing, they can also be used to test drugs to better predict how patients might respond to different treatments. (For instance, artificial wombs may help with premature births.)

    “Because they grow in a petri dish, we can poke and prod them all we want,” says James Wells, senior author on the new study and chief scientific officer of the Cincinnati Children's Center for Stem Cell and Organoid Medicine.

    Follow the Recipe

    Wells and his colleagues started with induced pluripotent stem cells, a kind of “master” cell that has the ability to become any other cell in the body. To make them turn into specialized esophagus cells, investigators added a mixture of chemicals and proteins to the stem cells.

    “These act as cues or signals that help to guide those pluripotent stem cells into specifically forming esophageal tissues,” Wells says. “It’s like following a recipe.”

    One key ingredient in this recipe was the gene Sox2 and its associated protein, which have been linked to esophageal conditions. The team found that this gene plays a central role in helping the esophagus develop in a human embryo. It took about two months to grow the tiny blobs—each about a millimeter wide—in the lab. (Other researchers have used human stem cells to grow sheep-human hybrids to help with organ regeneration.)

    Wells and his Cincinnati team are already growing a few organoids to help diagnose patients who have medical conditions that affect the esophagus, like congenital birth defects. It’s part of the hospital’s bigger effort to create personalized mini-organs from pediatric patients with gastrointestinal disorders.

    “So let’s just say, in the clinic they’ve done everything they can to figure out what’s wrong with the patient using all the standard clinical tests,” Wells explains.

    The patient gets put into a custom-made MRI machine, which renders a 3-D image of the child’s organs. That image is sent to a team of surgeons, who will try to figure out if the organs can be surgically repaired. Meanwhile, doctors take a tiny piece of tissue from the patient and send it off to Wells’ lab, which makes stem cells from the tissue sample and then grows the organoids. Being able to examine these mini-organs up close, outside of a patient, can lead to a diagnosis.

    Opening Up Possibilities

    In the future, Wells hopes to be able to grow organoids that could be transplanted back into patients born with unhealthy or missing esophagus tissue. He says this could also work in adults who have had parts of their esophagus removed due to cancer.

    “In the long term, we want to make tissue to help the surgeons reconstruct the esophagus in cases where there’s too much missing for the surgeon to correct,” Wells says. But that’s likely several years away.

    Using stem cells as a starting material “may be a major plus, since some patients may lack healthy esophageal tissue from which to try to engineer a new esophagus,” says Paul Knoepfler, a stem cell biologist at the University of California, Davis, School of Medicine.

    It’s also possible that esophageal organoids made from stem cells rather than patient tissue may grow bigger or produce more of the cell types that occur naturally in the esophagus, he says. One thing that was missing from the esophagus-in-a-dish, for instance: the open space where food and liquids would go, called the lumen.

    View at the original source






    Summary: Researchers report a gut-brain neural circuit establishes the vagus nerve as an essential component of the brain system that regulates reward and motivation. 


    A novel gut-to-brain neural circuit establishes the vagus nerve as an essential component of the brain system that regulates reward and motivation, according to research conducted at the Icahn School of Medicine at Mount Sinai and published September 20 in the journal Cell. The study provides a concrete link between visceral organs and brain function, especially in regards to reward, and may help to inform novel targets for vagal stimulation therapy, particularly for eating and emotional disorders.

    Previous research established the gut as a major regulator of motivational and emotional states but until now, the relevant gut-brain neuronal circuitry remained elusive. The vagus nerve, the longest of the cranial nerves, contains motor and sensory fibers and passes through the neck and thorax to the abdomen. Traditionally, scientists believed that the nerve exclusively mediated suppressive functions such as fullness and nausea; in contrast, circulating hormones, rather than vagal transmission, were thought to convey reward signals from the gut to the brain.

    “Our study reveals, for the first time, the existence of a neuronal population of ‘reward neurons’ amid the sensory cells of the right branch of the vagus nerve,” says Ivan de Araujo, DPhil, Senior Faculty in the Department of Neuroscience at the Icahn School of Medicine at Mount Sinai and senior author of the paper. “We focused on challenging the traditional view that the vagus nerve is unrelated to motivation and pleasure and we found that stimulation of the nerve, specifically its upper gut branch, is sufficient to strongly excite reward neurons lying deep inside the brain.”

    The branches of the vagus nerve are intricately intermingled, making it extremely difficult to manipulate each organ separately. To address this challenge, the research team employed a combination of virally delivered molecular tools that allowed them to exclusively target the vagal sensory neurons connected to the stomach and upper intestine.

    Specifically, researchers combined different viruses carrying molecular tools in a way that allowed them to optically activate vagal neurons connected to the gut while vagal neurons leading to other organs remained mute. The approach, a state-of-the-art technique known as “optogenetics,” allows investigators to use light to manipulate the activity of a prespecified set of neurons. 

    The study revealed that the newly identified reward neurons of the right vagus nerve operate under the same constraints attributed to reward neurons of the central nervous system, meaning they link peripheral sensory cells to the previously mapped populations of reward neurons in the brain. Strikingly, neurons of the left vagus were associated with satiety, but not with reward. The research team’s anatomical studies also revealed, for the first time, that the right and left vagal branches ascend asymmetrically into the central nervous system.

    “We were surprised to learn that only the right vagal branch eventually contacts the dopamine-containing reward neurons in the brainstem,” explained Wenfei Han, MD, PhD, Assistant Professor of Neuroscience at the Icahn School of Medicine at Mount Sinai and lead author of the study. Dopamine is a neurotransmitter known to be essential for reward and motivation. 

    The uncovering of right gastrointestinal vagal neurons as conveyors of reward signals to the brain opens opportunities for novel, more specific stimulation targets that may increase the efficacy of vagal nerve stimulation therapy, a treatment that involves delivering electrical impulses to the vagus nerve, for patients suffering from emotional and eating disorders. 








    Leaders who understand how brains work can make themselves and their teams more nimble, innovative, and resilient.






    Kevin Chin wants his executives to limber up their brains.

    Chin's investment company, Arowana, based in Sydney, Australia, is expanding into London, Los Angeles, and Asia, and "it is imperative to have a senior leadership team that is mentally agile and resilient," says Chin. Last year, the entrepreneur began working with Tara Swart, a neuroscientist, executive coach, and lecturer at MIT's Sloan School of Management. Now, he is extending that coaching to his top decision-makers so they, too, can get in touch with their amygdala.

    Interest in applying neuroscience to business has been mounting for decades. One reason, according to Swart, is that leaders prefer the idea of optimizing an organ--which is tangible--to the idea of optimizing behavior--which is not. "If I say, 'You need to be more emotionally intelligent,' I have had people respond, 'I don't understand what I'm supposed to do,'" she says. "If I tell them, 'You can build a pathway in your brain that will make it easier for you,' then many are more willing to embark on that process."

    Optimized thinking requires a healthy brain, and so part of Swart's advice falls into the familiar sleep-eat-hydrate-and-exercise domain. Disturbed sleep is particularly damaging. Your IQ can take a hit of 5 percent or more after a bad night. (Swart began working with Chin to combat the debilitating effects of jet lag on his sleep and, consequently, his thinking.)

    A well-fed, rested, and oxygenated brain is necessary for mental resilience and peak performance amid stress and uncertainty. "When all other things are equal, mental resilience is the factor that really distinguishes the CEO," says Swart. To improve resilience and performance, Swart recommends leaders work on the following:

    1. Neuroplasticity

    "Everything you have experienced in your life has molded and shaped your brain to favor certain behaviors and habits," says Swart. But those behaviors and habits may not be optimal. By focusing attention on and repeatedly practicing new, desirable behaviors, leaders can redirect their brains' chemical, hormonal, and physical resources to create new pathways. The old ones, meanwhile, wither from lack of use.

    Learning--particularly attention-heavy subjects like a language or a musical instrument--is the best way to enhance plasticity. "The fact that you are forced to attend to things that your brain hasn't experienced before has its own benefit apart from what you learn," says Swart. "The brain becomes more flexible, which [supports] things like being able to regulate your emotions, solve complex problems, and think more creatively."

    2. Brain agility

    To be nimble, you must think nimbly. Brain agility is the ability to switch seamlessly among different ways of thinking: from the logical to the intuitive to the creative. Agility may be particularly important for entrepreneurs. "The fact that the brain is likely to think in diverse ways or absorb diverse ideas means that you are more likely to spot trends, pivot, be ahead of the curve," says Swart.
    Multitaskers who try to use several modes of thinking at once generally do less well at all of them. Swart recommends working on problems consecutively and looking at them from different angles. Leaders can also leverage different thinking styles within their teams.

    3. Mindset mastery

    People with fixed mindsets believe traits like intelligence and talent are settled. People with growth mindsets see themselves as works in progress who develop their intelligence and talent through hard work. A fixed mindset leads to stagnation; a growth mindset to innovation and progress.
    Leaders with fixed mindsets should use neuroplasticity to try to move themselves toward growth, according to Swart. For entrepreneurs, that may not be a stretch. "It is about your appetite for risk and attitude toward failure, so it makes sense that entrepreneurs are more comfortable with this," she says.

    4. Simplicity

    A hyperactive world places impossible demands on limited brains. Stress rises. Decision-making suffers. Swart advises that leaders practice mindfulness--focusing on their bodies, breathing, and thoughts in the moment--as a way to reduce stress hormones and multiply folds in the part of the brain associated with executive function. She is also an advocate of reducing noncritical decisions. "Figure out what you are going to wear the night before or wear the same thing every day," she says.
    Leaders who know how to improve their own brain function can then apply those lessons to their companies. For example, by creating cross-functional work programs, they help employees forge new neural pathways and develop brain flexibility as they master unfamiliar knowledge and skills.
    Leaders can also use their understanding of the brain to drive fear and stress out of the workplace and to develop trust. Stress spikes cortisol in the brain, which negatively affects thinking and the ability to control emotions. At sustained levels, people go into survival mode.

    By contrast, "if you are in a really exciting environment where you have got lots of the hormone oxytocin flowing around your organization, you are more likely to make decisions that are not based on scarcity and survival but on abundance," says Swart. Innovation and risk-taking flourish.

    View at the original source





    Changing the clocks an hour ahead for daylight saving time doesn't just cost us sleep -- it might also be costing the American economy as much as $434 million, according to a new index.

    "The hour of sleep we lose each spring as part of daylight saving time has a broader collateral impact that this study has quantified in dollars and cents," Dan Schecter, who is the creator of SleepBetter.org, said in a statement. "While we may appreciate the extra hour of daylight that comes with moving our clocks ahead, this study provides a prudent reminder that it's a good idea to try to make up for that missing hour of sleep elsewhere."

    The new index, developed by Chmura Economics & Analytics, shows the financial toll of an hour's sleep lost, based on past research on heart attack incidence (published in the New England Journal of Medicine), workplace injury in mining and construction (published in the Journal of Applied Psychology) and cyberloafing (published in the Journal of Applied Psychology), all with regard to daylight saving time. The index covers 360 metro areas (none in Hawaii or Arizona, since those states do not observe daylight saving).

    Researchers noted that the actual financial toll may even be higher, because the index did not include impacts on car accidents or injuries in other fields (like transportation or manufacturing). A past study in the New England Journal of Medicine showed an association between loss of sleep from DST and an increased car accident risk.

    Based on the criteria used to develop this index, Morgantown, West Virginia, experienced the worst losses from daylight saving -- losing $445,685. That breaks down to each of the town's 129,709 residents losing $3.40; the national average found in the study was $1.70 per person.
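
    To make the arithmetic behind that per-resident figure explicit, here is a minimal back-of-the-envelope sketch in Python. The dollar total and population are simply the Morgantown figures quoted above; the published index presumably uses its own inputs and rounding, so treat this as illustrative only.

    # Back-of-the-envelope check of the per-resident DST loss (illustrative only).
    total_loss_dollars = 445_685   # estimated Morgantown-area loss quoted above
    population = 129_709           # Morgantown-area residents quoted above

    per_resident = total_loss_dollars / population
    print(f"Per-resident loss: ${per_resident:.2f}")
    # Prints roughly $3.44 with these inputs; the article cites about $3.40,
    # presumably reflecting slightly different rounding or population figures.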

    This year, daylight saving time begins on Sunday, March 10 at 2 a.m. That means people who live in areas affected by DST should move their clocks forward by an hour. While the purpose of daylight saving is supposedly to save energy, research has been mixed on whether it actually has any benefit in that area. National Geographic has a good overview of daylight saving time's effect on energy use.

    View at the original source 









    In India’s patriarchal society, many more women are voting. Will their newfound clout reshape the country’s politics? 

    Are Indian women voting at higher rates than before?


    Women in India are casting their ballots more frequently, and in greater numbers. In recent years, women’s turnout has actually been higher than men’s in two-thirds of India’s state elections. This is a remarkable turn of events in a deeply patriarchal, conservative society.

    Why is this so surprising?


    Men in India have always turned out to vote in larger numbers than women, as far back as the data goes. As recently as 2004, men held an 8.4 percentage-point turnout advantage over women in national elections. But by 2014, that gap had shrunk to just 1.8 percentage points.



    Why is this happening?

    That remains something of a puzzle. There is likely a combination of push and pull factors at work: more women want to vote, and institutions are doing more to encourage them to go to the polls.
    Indian women are steadily becoming more literate, more educated, and wealthier. This could be making them more politically aware. They have also been taking part in collective organizing in unprecedented numbers, usually through small local groups in which women encourage one another to save money and pool their resources to pay for emergency needs. Some evidence suggests that when women take part in these economic networks, they are more likely to get involved in politics. These groups were not set up to achieve a political goal, but they may be having political consequences.
    State institutions have been trying to make voting easier for women as well. For instance, India’s Election Commission has been trying to encourage more women to vote by improving the safety of polling booths to reduce voter intimidation and by setting up separate queues for women on election day.

    Why does this shift matter particularly in India?

    For decades, voting in India has been a male-dominated enterprise. And because many more men have always turned out to vote, it is perhaps unsurprising that mainstream parties have never seen women as a vital constituency worth wooing.
    It seems plausible that some concerns that disproportionately impact women—such as public safety or healthcare—have often taken a backseat, because women have historically tended to be less politically engaged than men.

    Does this shift mean that more women are voting in India than men overall?

    No, it doesn’t. More women are voting than ever before, but their overall size as a voting bloc still lags behind men.
    This is because even though a higher percentage of female voters go to the polls, there is a significant gender imbalance in India’s general population. According to the 2011 census, the country has approximately 943 women for every 1,000 men. This places India near the bottom at 186 out of 194 countries, according to the World Bank.
    The sex ratio among India’s registered voters is even worse. There are only 908 women for every 1,000 men on the country’s voter rolls.
    This means that, despite progress, Indian women suffer a double blow at the polls: above and beyond an entrenched preference for sons (which results in illegal sex-selective abortion and fewer educational opportunities for girls than boys), women are less likely than men to be registered to vote.
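
    To see how the two ratios quoted above combine into that double blow, here is a minimal Python sketch. It simply divides the voter-roll sex ratio by the population sex ratio to estimate women’s registration rate relative to men’s; the inputs are the figures cited in this article, so treat the result as illustrative only.

    # Illustrative: how much of the voter-roll gap goes beyond the population imbalance?
    women_per_1000_men_population = 943    # 2011 census figure quoted above
    women_per_1000_men_voter_rolls = 908   # voter-roll figure quoted above

    # Ratio of women's registration rate to men's, given the population sex ratio.
    relative_registration = women_per_1000_men_voter_rolls / women_per_1000_men_population
    print(f"Women register at about {relative_registration:.1%} the rate of men")
    # Prints roughly 96.3%, i.e. women are about 4% less likely than men to appear
    # on the rolls, over and above the skewed sex ratio in the general population.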

    Is the fact that more women are voting already making a difference in terms of policy?

    It appears to be. As Indian women become more politically mobilized, this seems to be changing how Indian candidates campaign and how they govern once they are in office.
    For example, in the state of Bihar, Chief Minister Nitish Kumar brought in a draconian new anti-alcohol law after he was reelected in 2015. Kumar said that this ban was motivated by a promise he had made on the campaign trail to women, who had told him that rampant alcoholism was devastating their families and communities.
    As India gears up for the 2019 general elections, women’s status has become a focal point of campaign rhetoric. Prime Minister Narendra Modi of the Bharatiya Janata Party (BJP) has made what are billed as female-friendly policies a hallmark of his platform.
    Modi regularly says that his administration’s efforts to improve sanitation, improve welfare subsidies (for things like the education of girls and the provision of clean cooking fuel), and guarantee universal banking are transformative for India’s women. This is because the burdens of inadequate service provision disproportionately fall on women. Women are more likely to shoulder daily household tasks such as gathering water for everyday use, or caring for sick children. Opposition parties have adopted this framing as well, to counter Modi’s claims.

    How many more women are running as candidates compared to before?

    The number of female candidates has gone up, but there is a long way to go.
    In 1962, the first general election for which there is data on gender, a paltry 3.7 percent of candidates were women. This figure stayed roughly the same until 1991. In the 1990s, the proportion of women running for office began to rise, thanks to greater pressure on political parties to choose female candidates.
    In 2014, just over 8 percent of candidates in parliamentary races were women. This is, at once, a big improvement and yet a tiny proportion in absolute terms.
    According to data collected by Francesca Jensenius on Indian state elections, things are no better at the state level. Between 2011 and 2015, just 7.3 percent of candidates for state office were women.

    What kinds of political office are Indian women running for?

    Research has found that Indian women are more likely to run in two types of constituencies. First, female candidates are more common in constituencies that have relatively more men than women. This is startling, because one might expect women to run where they are better represented in the population. But this does not appear to be the case.
    Second, women are more likely to run in constituencies reserved for historically disadvantaged groups. India has one of the most advanced systems of affirmative action found anywhere in the world. Roughly one-quarter of legislative seats at the state and national levels are reserved for two protected groups: Scheduled Castes, who are at the bottom of the Hindu caste hierarchy, and Scheduled Tribes, who make up India’s native, indigenous population.



    Has the #MeToo movement tangibly shaken up Indian politics, or has it been more of a trending topic on social media?

    The #MeToo movement has taken a long time to catch fire in India. But in the past month, the #MeToo hashtag exploded on social media, as Indian women began talking about how they have suffered verbal and physical harassment in and out of the workplace.



    These accusations have affected the worlds of film, media, business, and even politics. For example, former minister of state for external affairs M.J. Akbar was accused of wrongdoing by more than twenty-one women and was forced to resign over the allegations. Although Akbar still retains his parliamentary seat and has vowed to fight the charges, his resignation from his position as minister was a major victory for the movement. Many more skeletons may tumble out of politicians’ closets in the weeks and months to come.

    Will having more female voters, candidates, and politicians really change deeply held Indian cultural attitudes toward women’s role in society?

    There is some evidence that having more women in office does change gender perceptions.
    For example, one study of affirmative action for women in Indian local politics has found that, while having female politicians does not necessarily mean voters are less likely to prefer male leaders, it does help reduce stereotypes about gender roles in public and private life.
    Another study has found that having female politicians can be extremely beneficial for the aspirations and educational attainment of girls living under their jurisdiction. Other new research has found that female representatives are linked to higher growth, as well as less corruption and political opportunism.

    Are Indian women more likely to vote for female candidates?

    No. There is little evidence to suggest that women in India are more likely to vote for female candidates. Parties led by prominent women also do not fare better than other parties among female voters, though this may be changing.
    One study has found that providing additional security at polling booths brings out more women to vote—but the vote share for female candidates actually goes down as a result. One possible explanation is that many Indian women, like men, often have ingrained biases against female leadership.

     As Indian women vote more frequently, are they also joining the workforce in greater numbers?

     No, they are not. This is a central paradox of India’s political and economic future. Despite enjoying unprecedented social and political empowerment, Indian women are dropping out of the labor force at a rapid clip.
    There is a well-known U-shaped statistical relationship between a given country’s per capita income and female participation in the labor force. In very poor and in very rich countries, women tend to participate in the workforce at higher rates.



    But in the middle of the income distribution, women’s participation dips, as households reach a certain threshold of per capita income that encourages (or forces) women to stay at home.
    Relative to countries at similar income levels, India’s female labor participation rate is a distinct outlier.
    What’s more, the rate of Indian women working outside the home has gone from bad to worse. According to the International Labor Organization, India ranks 121 out of 131 countries in female labor force participation. Just over a quarter of women in the country are in the labor force today, compared to more than a third in 2005.











    • The UN ‘Oscar for the Best Policy’ was awarded to the northeastern Indian state of Sikkim.
    • This makes Sikkim the first fully organic state in the world, with sustainable agricultural policies that take all socioeconomic aspects into account.
    • Pawan Kumar Chamling, the state’s Chief Minister, called for ‘building an organic world together’.
    It’s not news that the north-eastern state of Sikkim is a leader in sustainable policies. But the state’s efforts were finally recognised by the United Nations (UN), which gave it the award for having the world’s best food policies, or what’s called the ‘Oscar for the Best Policy’. 

    The world has its first fully organic state - and it’s in India.
    In actuality, Sikkim was already fully organic when 2015 rolled in, having started its journey toward sustainability back in 2003. 

    According to Maria Helena Semedo, Food and Agriculture Organisation’s (FAO) Deputy Director, Sikkim’s policies have benefited over 66,000 family farmers. Rather than focusing on food security in isolation, the state’s food policies take socio-economic aspects like tourism, consumption, markets and development into its fold to form a comprehensive and inclusive approach toward agriculture. 

    Sikkim’s organic world 

    Unlike most regions, which turn to sustainability and organic methods only once their natural capital is in jeopardy, Sikkim’s organic policy came into being because its treacherous terrain makes conventional farming methods impractical. 

    Going organic was an opportunity to utilise land that was lying unused. 

    Now Sikkim is home to bio-villages that employ effective microorganisms (EM) technology for compost and bio-pesticides. The agricultural fields use vermiculture hatcheries and compost-cum-urine pits for manure production. 

    Even the seeds they use are organic, rather than the hybrids that are used in other parts of the country. With the ‘Seed Village Scheme’, Sikkim ensures that it has locally adapted high-quality seeds. And, in order to ensure that those seeds yield the maximum output, the state conducts regular soil health assessments. 

    The transformation didn’t happen overnight. And the most important aspect of its success is probably the programme implemented to raise awareness of organic farming practices among farmers. 

    That said, the transition was far from simple. When Sikkim started on its journey to becoming an organic state in 2003, chemical lobbyists and opposition parties didn’t make it easy. Chamling asserted that it was through strong political commitment and hard work that Sikkim got to where it is now. 

    In fact, Sikkim’s story is a model of sustainability that is currently being emulated in other northeastern states of the country and in Kerala, which recently faced extensive floods as a result of unsustainable development amplified by climate change. 

    Sikkim beat out 51 nominations from 25 countries from all over the world at the event organised by the FAO and the World Future Council (WFC). The state’s Chief Minister, Pawan Kumar Chamling, even stated, “Let us build an organic world together,” when accepting the award. 

    View at the original source







    From a cognitive perspective, aging is typically associated with decline. As we age, it may get harder to remember names and dates, and it may take us longer to come up with the right answer to a question.
    But the news isn’t all bad when it comes to cognitive aging, according to a set of three articles in the July 2014 issue of Perspectives on Psychological Science.
    Plumbing the depths of the available scientific literature, the authors of the three articles show how several factors — including motivation and crystallized knowledge — can play important roles in supporting and maintaining cognitive function in the decades past middle age.

    Motivation Matters
    Lab data offer evidence of age-related declines in cognitive function, but many older adults appear to function quite well in their everyday lives. Psychological scientist Thomas Hess of North Carolina State University sets forth a motivational framework of “selective engagement” to explain this apparent contradiction.
    If the cognitive cost of engaging in difficult tasks increases as we age, older adults may be less motivated to expend limited cognitive resources on difficult tasks or on tasks that are not personally relevant to them. This selectivity, Hess argues, may allow older adults to improve performance on the tasks they do choose to engage in, thereby helping to account for inconsistencies between lab-based and real-world data.
    Prior Knowledge Brings Both Costs and Benefits
    Episodic memory – memory for the events of our day-to-day lives – seems to decline with age, while memory for general knowledge does not. Researchers Sharda Umanath and Elizabeth Marsh of Duke University review evidence suggesting that older adults use prior knowledge to fill in gaps caused by failures of episodic memory, in ways that can both hurt and help overall cognitive performance. While reliance on prior knowledge can make it difficult to inhibit past information when learning new information, it can also make older adults more resistant to learning new erroneous information.
    According to Umanath and Marsh, future research should focus on better understanding this compensatory mechanism and whether it can be harnessed in developing cognitive interventions and tools.
    Older Adults Aren’t Necessarily Besieged By Fraud
    Popular writers and academics alike often argue that older adults, due to certain cognitive differences, are especially susceptible to consumer fraud. Psychological scientists Michael Ross, Igor Grossmann, and Emily Schryer of the University of Waterloo in Canada review the available data to examine whether incidences of consumer fraud are actually higher among older adults. While there isn’t much research that directly answers this question, the research that does exist suggests that older adults may be less frequent victims than other age groups.
    Ross, Grossmann, and Schryer find no evidence that older adults are actually more vulnerable to fraud, and they argue that anti-fraud policies should be aimed at protecting consumers of all ages.