Effects of varying amounts of carbohydrate on metabolism after weight loss 12-05

Losing weight is hard work, but many people who have lost weight may agree that keeping the weight off can be an even greater challenge. A lack of self-control or a few too many dietary indulgences are often cited as reasons for regaining weight. But a new study in the November issue of BMJ questions this conventional view, finding that the type of calories you consume may influence how likely you are to keep that weight off for the long term. [1]
The human body is designed to protect itself when it sheds weight, whether voluntarily or involuntarily, by causing an increased urge to eat and a slowdown in metabolism while more efficiently storing fat. Although it may be exciting to see pounds on the scale drop, the ability to keep losing weight or even maintain any weight loss becomes harder, because cravings to eat rise while the body more readily stores those calories as fat.
The purpose of the BMJ study was to see if different levels of carbohydrate in the diet could prevent these metabolic changes from occurring, so that any weight lost might stay off. The focus on carbohydrates was based on the “carbohydrate-insulin model” of obesity, which states that high insulin levels that result from eating a high glycemic load diet (i.e., highly processed carbohydrates like refined breads, crackers, cookies, and sugars) causes energy from the food to be stored more easily as fat, and may increase hunger and food cravings, lower energy expenditure, and promote weight gain.

The study

Participants were first placed on a weight-reducing diet to lose about 12% of their starting weight (weight loss averaged 25 pounds) to kickstart metabolic changes. The next phase randomly assigned the 164 participants who achieved this weight loss to one of three diets:
  1. high (60%) carbohydrate and low (20%) fat diet
  2. moderate (40%) carbohydrate and (40%) fat diet
  3. low (20%) carbohydrate and high (60%) fat diet
The protein amount was the same in all groups at 20%. Total calories were adjusted up or down to prevent any weight changes in each participant. All meals were provided to the participants during the weight loss phase and throughout the 20-week test phase. The types of foods in each diet group were designed to be as similar as possible, but varying in amounts: the high carbohydrate group ate more whole grains, fruits, legumes, and low fat dairy products. In contrast, the low carbohydrate group ate more fat but eliminated all grains and some fruits and legumes.
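To make the diet compositions concrete, here is a small illustrative sketch (not from the study itself) that converts each group's percent-of-energy targets into grams per day, assuming a hypothetical 2,000 kcal/day intake and the standard Atwater energy factors (4 kcal/g for carbohydrate and protein, 9 kcal/g for fat):

```python
# Standard Atwater energy conversion factors, in kcal per gram.
ATWATER = {"carbohydrate": 4, "protein": 4, "fat": 9}

def grams_per_day(total_kcal, pct_by_macro):
    """Convert percent-of-energy targets into approximate grams per day."""
    return {macro: round(total_kcal * pct / 100 / ATWATER[macro])
            for macro, pct in pct_by_macro.items()}

# The three test diets, all with protein fixed at 20% of energy.
diets = {
    "high carbohydrate":     {"carbohydrate": 60, "protein": 20, "fat": 20},
    "moderate carbohydrate": {"carbohydrate": 40, "protein": 20, "fat": 40},
    "low carbohydrate":      {"carbohydrate": 20, "protein": 20, "fat": 60},
}

for name, pcts in diets.items():
    print(name, grams_per_day(2000, pcts))
```

At 2,000 kcal/day, for example, the high carbohydrate diet works out to roughly 300 g of carbohydrate versus about 100 g on the low carbohydrate diet; in the actual study, total calories were individually adjusted to hold each participant's weight stable.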
Participants followed the diets for 20 weeks and total energy expenditure was measured. During the 20 weeks, the participants in all groups maintained their weight and there was minimal difference in secondary measures including physical activity and resting energy expenditure (factors that could independently increase total energy expenditure).

The findings

  • The low carbohydrate group showed an increase in energy expenditure of 209-278 calories/day compared with the high carbohydrate group. The moderate carbohydrate group showed a smaller increase of about 100 calories/day compared with the high carbohydrate group. This trend was consistent throughout the 20-week period.
  • The increased metabolic effect of the low carbohydrate diet was most significant in people who had high insulin secretion at the start of the study, whose energy expenditure increased by 308-478 calories/day. (People with high insulin secretion tend to be shaped more like “apples” than “pears,” as excess body fat is stored predominantly around the mid-section.) This finding supports recent research suggesting that differences in biology may affect how people respond to weight loss diets over the long term.
A hormone that works to increase appetite, ghrelin, decreased significantly on the low carbohydrate diet, which could help with weight loss maintenance. Another appetite-regulating hormone, leptin, also decreased. Leptin regulates energy balance and works to keep body weight stable. It typically counteracts ghrelin by sending signals to the brain to suppress appetite when the body has enough food. Previously, high leptin levels were thought to lower one’s appetite and cause the body to begin using stored fat for energy. However, some forms of obesity/overweight may lead to “leptin resistance” when the body has high levels of leptin. In this scenario, the brain does not receive an alert that leptin levels are already high, so it continues to send strong hunger signals while conserving body fat stores. In other words, high leptin levels may promote leptin resistance. Its significance in the BMJ study was that the lower carbohydrate diet appeared to improve leptin sensitivity by reducing high levels of leptin.
“This study raises the possibility that a focus on restricting carbohydrates, rather than calories, may work better for long-term weight control,” said Dr. David Ludwig, professor in the Department of Nutrition at the Harvard T.H. Chan School of Public Health, who led the study with Dr. Cara Ebbeling from Boston Children’s Hospital.
Dr. Walter Willett, professor of epidemiology and nutrition at the Harvard Chan School, who was not involved in the study, also noted that  “these findings from a carefully conducted investigation can help explain why low fat/high carbohydrate diets are not successful for most people and have failed to maintain weight loss in formal randomized trials that have lasted for one year or longer.”

Related

In a review featured in Science magazine the same week as the BMJ study, Dr. Ludwig discussed the controversy over specific fat-to-carbohydrate ratios in maintaining a healthy weight and lowering disease risk. [2] He, Dr. Willett, and other experts on the subject agreed that by focusing mainly on diet quality—replacing saturated or trans fats with unsaturated fats and replacing refined carbohydrates with whole grains and nonstarchy vegetables—most people can maintain good health within a broad range of fat-to-carbohydrate ratios. Read more at  Dietary fat is good? Dietary fat is bad? Coming to consensus.



The product management talent dilemma 12-06

Product management is one of the most critical talent pools for any company that is writing software but often does not get the right level of attention. Having a world-class product management function requires a multipronged approach under a holistic talent-management program. We recommend that this be a joint priority of the chief HR officer and the head of product. 
 Despite product managers’ central roles in software organizations, they are often neglected from a talent-management perspective. Four levers can address this industry-wide challenge.

Product management remains one of the most critical roles for any company for which software is a core growth driver. Amid the growing importance of data in decision making, an increased customer and design focus, and the evolution of software-development methodologies, the role of the product manager has evolved to influence every aspect of making a product successful. As a result, CEOs and technology leaders often identify the role of product manager as one of their top talent priorities. Paradoxically, results from the McKinsey Product Management Index reveal that companies are underinvesting in this crucial talent pool.

The McKinsey Product Management Index is a survey of product managers at leading software companies to understand the capabilities and enablers that create top-performing product managers (Exhibit 1). This research surfaced systemic gaps around software-talent management; in fact, fewer than half of the product managers feel prepared to play the roles expected of them or grow into future product leaders.


Drones: A predominant technological innovation in Indian construction 12-13

Technology plays a pivotal role in shaping the Indian construction industry. One new-generation technology gaining interest in the construction industry is the use of drones. Drone technology is poised to serve as an effective medium for building smart cities in a cost-effective, faster and safer manner, with optimum utilisation of skills and effort. Over the next 10 years, the use of drones in construction is expected to register manifold growth and play a leading role in futuristic buildings.

According to a recent industry report, India is one of the fastest-growing markets for UAVs, and by 2021 the Indian UAV market is expected to reach USD 885.7 million. The use of drones in the construction industry has seen 239% year-over-year growth globally, higher than in any other commercial sector. In India too, growth is set to increase manifold.

“In the present scenario, it is very crucial for India to realise the aerial revolution that can be brought about by the proper utilisation of drones which have emerged as a highly viable commercial tool globally. As a matter of fact, the most notable sector of economy being benefitted by drones is construction. There’s no denying on the fact that there is an unstoppable rise of commercial drones; a market set to be worth billions over the coming years,” said AV Antao, Chief Operating Officer, Synergy Property Development Services.
There are numerous safety and legal implications that one should be aware of, as well as the differences between the commercial and personal use of drones. Despite increasing adoption and regulation, there is still tremendous growth in the use of drones in construction. Drones can add significant value to a project throughout its lifecycle.
The areas where drone technology will benefit the construction process:
Improved quality and thermal imaging
The quality of the scans and aerial imagery provided by drone-mounted cameras is undeniably superior. Drones equipped with high-resolution thermal cameras serve as an excellent investigative tool for a host of building-specific applications and energy-efficiency audits, including roof insulation inspection. They can also graphically depict energy inefficiencies and identify wet insulation in the roof or elsewhere by displaying temperature variations within the building.

Drone integration with BIM
Drones have proved themselves to be an asset for a data-driven approach and can effectively undertake numerous tasks to aid the BIM (Building Information Modeling) workflow. Besides giving an aerial perspective for the creation of the initial BIM model, drones provide scalable point cloud scanning and photography at different stages of construction.
Highly cost-effective in topographical surveys
Using a drone for standard survey and inspection activities is significantly more cost-effective, as it removes much of the manual effort and expense of traditional surveys. For a typical topographic survey, for example, using a drone can reduce costs by approximately 50%.
Ensuring completion of projects
Drones eliminate the need to shut down active work sites for maintenance and inspection work on assets like pipelines or flare stacks, which can be carried out safely under UAV supervision. Drones also monitor site activity and provide a comprehensive overview of the site through land surveying. They are therefore highly effective in shortening the construction timeframe, leading to speedier delivery of projects.
Reducing risk and keeping people safe
Drones help minimise risk by taking over hazardous tasks under remote monitoring. UAV technology thus supports the drive towards zero incidents onsite by removing some of the risk from construction activities, such as the need for labourers to work at heights when inspecting assets like bridges. UAVs can conduct safer, faster and more accurate inspections than safety managers who physically walk the site to identify potential hazards.

Compact and intelligent results
Acute 3D software can dramatically enhance productivity by turning a series of digital photos – taken with smart cameras that, when mounted on drones, give a 360-degree overview of the project – into a 3D reality mesh model. The result is a compact, intelligent representation of the asset in its current operating context. This also eases the task for engineers and designers, allowing them to work from an up-to-date 3D model for their enhancement and maintenance plans.
“Drones are tools that will play a fundamental role in ensuring that the construction industry can deliver huge and complex projects with better finishing and on-time results,” concludes Antao.

Brain on Fire: Widespread Neuroinflammation Found in Chronic Fatigue Syndrome 12-17

Neuroinflammation, Fatigue and Pain Lab Stop

We were having a case of déjà vu as we drove around the surprisingly large campus. Getting into the NIH to see Avindra Nath had been a nightmare.  It turned out that the NIH would only allow the big van through one access point and we’d ended up mortifyingly late to our appointment. Now here we were in another big campus with what my partner felt were inadequate directions. 

 I thought Jarred and I had it going, though. He said he would meet us at the parking lot and let us through the gate, but there were lots of parking lots. Plus, because we couldn’t stop, we had to keep driving around the campus and hope we met up with him at the right parking lot at the right time. It did seem a little dicey but I was confident I’d figured it out.

My partner, though, wasn’t having it with the sketchy directions or the reliance on male directional genes.  She could see it happening – we were going to be late again.

“Men,” she said, “how do you ever get anything done?”
As it turned out, Jarred and I were in sync: we both showed up at the gate at about the same time and he led us into his surprisingly large facility. Once again, we forgot to take pictures, but our timing couldn’t have been better; Younger had just wrapped up one of the most exciting studies in memory.
But first a little history…
 Neuroinflammation – The Japanese Way
Researchers have thought for decades that neuroinflammation is probably present in chronic fatigue syndrome (ME/CFS), but it’s only recently that the technology has been able to pick up the lower levels of neuroinflammation believed present in diseases like ME/CFS and fibromyalgia. The Japanese were the first to take a crack at it. 
 They have long believed that inflammation produces central fatigue (fatigue emanating from the brain), which plays a major role in ME/CFS. In 2013, Watanabe proposed that inflammation in the brain was whacking the “facilitation system” which pops up when we are fatigued to boost signals from the motor cortex to keep our muscles moving. He also hypothesized that an inhibition system was turning up the fatigue in ME/CFS.
A 2016 study rounded the circle when it found evidence of reduced dopaminergic activity from a part of the brain (the basal ganglia) which activates the motor cortex. That fit in just fine with Miller’s results, which suggested that problems with the basal ganglia could be producing both the fatigue and the motor activity problems in ME/CFS.
The big breakthrough came in 2014 when the Japanese startled just about everyone with a PET scan study which found widespread neuroinflammation in the brains of ME/CFS patients. The study was small (n=19) but the findings appeared strong. 


The neuroinflammation was widespread but was highest in the areas of the brain (thalamus, amygdala, midbrain, hippocampus) that had shown up in ME/CFS before. Plus, the Japanese were able to link specific regions of inflammation to specific symptoms. Inflammation in the thalamus was associated with cognitive impairment, fatigue and pain; inflammation in the amygdala was associated with cognitive issues; and inflammation of the hippocampus was associated with depression.
Anthony Komaroff called the findings the most exciting in decades. The Japanese began a much larger (n=120) neuroinflammation study. This year they published a large number of papers on ME/CFS in the Japanese Journal, “Shinkei Kenkyu No Shinpo” (Brain and Nerve). One of the papers was specifically on neuroinflammation but the findings have not yet been published in English journals.

Neuroinflammation – The Younger Way

Jarred Younger – who runs the Neuroinflammation, Pain and Fatigue Lab at the University of Alabama at Birmingham – has also long believed that neuroinflammation plays a major role in chronic fatigue syndrome (ME/CFS) and fibromyalgia (FM).
In 2015, he noted what a hot subject neuroinflammation had become.  Seven years ago, he said, there was almost nothing on the microglia at the pain conferences. Now they’re loaded with presentations on microglia.
These immune cells are sensitive to so many factors and can be triggered in so many ways that virtually any stressor, from an infection to toxins to psychological stress, can potentially trigger a state of microglial sensitization in the right individual. With their ability to produce dozens of different inflammatory mediators, Younger believes that the difference between ME/CFS and FM could simply come down to small differences in how the microglia are tweaked.
Both diseases could be triggered by high rates of immune activation which, over time, sensitizes the microglia to such an extent that they start pumping out inflammatory factors at the first sign of a stressor.





New Non-Invasive Technique

Younger had just finished up his ME/CFS brain thermometry study. He used a new, less invasive way of assessing the brain called magnetic resonance spectroscopic thermometry (MRSt). The technique, which aims to create a thermometer for the brain, uses a magnetic resonance imaging (MRI) scanner. While Younger was assessing the temperature of the brain, he was also examining its chemical makeup.
Using temperature as a measure of inflammation, Younger is taking the temperature of the brain.
My partner asked him how he glommed onto the heat mapping idea. It turned out that Younger had been trying for quite some time to find a non-invasive way to assess neuroinflammation. He needed a technique he could safely use again and again in his longitudinal (Good Day/Bad Day) studies.
None of the present techniques, however, fit the bill; they were all heavily invasive. The PET scan approach uses radiation to image the brain. Another approach using magnetized nano particles is supposed to be safe but it still requires putting little bits of metal into peoples’ brains…
After hitting several dead ends, he hypothesized that because inflammation produces temperature increases, he could try and create a heat map of the brain. Looking through the literature, he realized that thermometry was already being used in the brain to assess stroke and cancer patients. It turns out that the brain’s attempts to repair the damage from stroke and cancer results in huge temperature increases. The stroke and cancer researchers, though, were just focused on small areas of the brain.
Because Younger didn’t know exactly where in the brain to search in ME/CFS, that technique wouldn’t work for him. He had to develop a method that would produce a heat map and a chemical signature of the entire brain, and found a Florida researcher who developed a way to do that.
With this technique, it takes just 20 minutes in the machine to get an entire 3-D heat and chemistry map of an ME/CFS patient’s brain. After The Solve ME/CFS Initiative (SMCI) provided funding, he got to work and ultimately scanned the brains of 15 ME/CFS women and 15 age and sex matched healthy controls.

Widespread Neuroinflammation Found in ME/CFS Patients’ Brains

“The markers were truly elevated.” – Jarred Younger
It turned out that Younger’s brain-wide search technique was right on; looking at single areas of the brain in ME/CFS patients would have produced misleading data. There was no single area, or even a group of areas, of the brain that was abnormal in ME/CFS: most of the brain was.
Younger found lactate – a product of anaerobic metabolism – widely distributed across the brains of people with ME/CFS. He opened a chart showing an amazing array of lactate-engorged brain regions and picked out a few with particularly high levels: the insula, hippocampus, thalamus, and putamen. They were virtually the same regions the Japanese had found in their 2014 study. The fact that the temperature increases overlapped with the lactate increases provided further confidence that Younger had identified some key areas.
The cingulate cortex – which Younger called the “seat of suffering” was particularly inflamed.
The anterior cingulate cortex, in particular, which Younger called “the seat of suffering” in the brain, showed up in spades. It’s associated with a lot of nasty symptoms (malaise, fatigue and pain) and it’s shown up in both ME/CFS and fibromyalgia studies in the past. The high choline signal in that region of the brain suggested that inflammation there was producing a pattern of destruction and replacement; i.e. quite a bit of damage – possibly even neuronal damage – was happening there.
Overall, the lactate levels weren’t as high as in other diseases – they were just consistently present. Younger didn’t expect to see really high levels; really high lactate levels would have meant irretrievably damaged neurons – the kind of neuronal damage seen in M.S., Parkinson’s and Alzheimer’s – the kind of neuronal damage that is really hard to reverse. The fact that Younger saw inflammation in ME/CFS but not neuron-destroying inflammation is good news indeed for people with ME/CFS.
It’s possible that some damage such as neuronal reprogramming and synaptic pruning could be occurring, but determining that would take an autopsy. (Some groups are collecting ME/CFS brains, and a couple of autopsies have been done.)
Remarkably, the healthy controls didn’t show evidence of a single analyte such as lactate being elevated or a single area of the brain being heated up. It’s highly unusual to find zero evidence of an abnormality in the healthy controls. Usually the results of studies apply to groups, not individuals; some healthy controls will typically have findings similar to those of the ME/CFS patients and vice-versa, but not here – the two groups were absolutely distinct. Even though this was a small study, such black-and-white results strongly suggest that neuroinflammation is a key element of ME/CFS.

Lactate

Magnetic spectroscopy studies have found increased lactate in the ventricles of the brain in ME/CFS before but not in the brain itself.  Shungu’s spectroscopy studies have, in fact, produced some of the most consistent results in all of ME/CFS research. Three times he’s probed the ventricles and three times he’s found increased lactate.  Shungu, however, is examining an area just outside of the brain. His findings may indicate inflammation is present in the brain or it could be confined to the cerebral spinal fluid. 

Younger’s new approach looked at the entire brain and found signs of inflammation almost everywhere. When asked what could cause that, Younger said that anything that tweaks the microglia (the brain’s immune cells) enough to produce a sustained period of inflammation – a neurodegenerative/neuroinflammatory disorder like MS, say, or a severe brain injury – burns up the oxygen in the system. Once that happens, the cells resort to anaerobic metabolism and lactate builds up, just as it does in the muscles during exercise.
My partner asked another intriguing question. (Thank god her brain was functioning.) What about intervention studies?  What about whacking ME/CFS patients with exercise and seeing what happens to their brains? Younger, it turned out, had already laid the groundwork for that study.

Therapeutic Implications

Documenting that neuroinflammation is present and is affecting functioning in ME/CFS could have dramatic treatment implications. It could lead the scientific and medical communities to focus less on drugs that target the nervous system and more on ways to reduce inflammation. For example, attempts could be made to modify current anti-inflammatories so that they pass through the blood-brain barrier (most do not). Health Rising will focus on some ways that might happen in a future blog.

Fast Mover

Throughout this process, Younger has moved extremely quickly. He completed the thermometry study as quickly as possible, and then, as the dramatic results began to come in, rapidly applied for a nice, fat R01 grant from the NIH. The results were so convincing, in fact, that he didn’t wait for them all to come in and applied for the grant using half the data from the study.
It’s hard to imagine that that grant application won’t get funded.  When it is, Younger will have plenty of money to pursue the neuroinflammation angle further, including challenging ME/CFS patients with exercise – something he’s never done before – and seeing what that does to the inflammation in their brain. It’ll be fascinating to see if it rises, how long the inflammation lasts, how it tracks with post-exertional symptoms, and where it’s most evident.

Cause?


Younger speculated that people with ME/CFS have an immune-triggered metabolic disorder. The widespread neuroinflammation provides a clue, he thinks, to what’s going on. That pattern suggests that immune cells are breaching the blood-brain barrier in multiple areas; like a flood overwhelming a dike, they’re essentially pouring through gaps across the brain. Why that may be happening he’s not sure, but his next step in ME/CFS is to demonstrate that it is happening. How he proposes to do that is the subject of the next blog.


When love and science double date 24-12

 Sure, your heart thumps, but let’s look at what’s happening physically and psychologically....
 

Love’s warm squishiness seems a thing far removed from the cold, hard reality of science. Yet the two do meet, whether in lab tests for surging hormones or in austere chambers where MRI scanners noisily thunk and peer into brains that ignite at glimpses of their soulmates.
When it comes to thinking deeply about love, poets, philosophers, and even high school boys gazing dreamily at girls two rows over have a significant head start on science. But the field is gamely racing to catch up.
One database of scientific publications turns up more than 6,600 pages of results in a search for the word “love.” The National Institutes of Health (NIH) is conducting 18 clinical trials on it (though, like love itself, NIH’s “love” can have layered meanings, including as an acronym for a study of Crohn’s disease). Though not normally considered an intestinal ailment, love is often described as an illness, and the smitten as lovesick. Comedian George Burns once described love as something like a backache: “It doesn’t show up on X-rays, but you know it’s there.”

Richard Schwartz, associate professor of psychiatry at Harvard Medical School (HMS) and a consultant to McLean and Massachusetts General (MGH) hospitals, says it’s never been proven that love makes you physically sick, though it does raise levels of cortisol, a stress hormone that has been shown to suppress immune function.
Love also turns on the neurotransmitter dopamine, which is known to stimulate the brain’s pleasure centers. Couple that with a drop in levels of serotonin — which adds a dash of obsession — and you have the crazy, pleasing, stupefied, urgent love of infatuation.
It’s also true, Schwartz said, that like the moon — a trigger of its own legendary form of madness — love has its phases.
“It’s fairly complex, and we only know a little about it,” Schwartz said. “There are different phases and moods of love. The early phase of love is quite different” from later phases.
During the first love-year, serotonin levels gradually return to normal, and the “stupid” and “obsessive” aspects of the condition moderate. That period is followed by increases in the hormone oxytocin, a neurotransmitter associated with a calmer, more mature form of love. The oxytocin helps cement bonds, raise immune function, and begin to confer the health benefits found in married couples, who tend to live longer, have fewer strokes and heart attacks, be less depressed, and have higher survival rates from major surgery and cancer.
Schwartz has built a career around studying the love, hate, indifference, and other emotions that mark our complex relationships. And, though science is learning more in the lab than ever before, he said he still has learned far more counseling couples. His wife and sometime collaborator, Jacqueline Olds, also an associate professor of psychiatry at HMS and a consultant to McLean and MGH, agrees.

Good genes are nice, but joy is better 12-27

A Harvard study, almost 80 years old, has proved that embracing community helps us live longer and be happier...

Second in an occasional series on how Harvard researchers are tackling the problematic issues of aging.

When scientists began tracking the health of 268 Harvard sophomores in 1938 during the Great Depression, they hoped the longitudinal study would reveal clues to leading healthy and happy lives.

They got more than they wanted. 
 After following the surviving Crimson men for nearly 80 years as part of the Harvard Study of Adult Development, one of the world’s longest studies of adult life, researchers have collected a cornucopia of data on their physical and mental health.
Of the original Harvard cohort recruited as part of the Grant Study, only 19 are still alive, all in their mid-90s. Among the original recruits were eventual President John F. Kennedy and longtime Washington Post editor Ben Bradlee. (Women weren’t in the original study because the College was still all male.)
In addition, scientists eventually expanded their research to include the men’s offspring, who now number 1,300 and are in their 50s and 60s, to find out how early-life experiences affect health and aging over time. Some participants went on to become successful businessmen, doctors, and lawyers, while others ended up with schizophrenia or alcoholism – though not on inevitable tracks.
During the intervening decades, the control groups have expanded. In the 1970s, 456 Boston inner-city residents were enlisted as part of the Glueck Study, and 40 of them are still alive. More than a decade ago, researchers began including wives in the Grant and Glueck studies.

Over the years, researchers have studied the participants’ health trajectories and their broader lives, including their triumphs and failures in careers and marriage, and the findings have produced startling lessons, and not only for the researchers.
“The surprising finding is that our relationships and how happy we are in our relationships has a powerful influence on our health,” said Robert Waldinger, director of the study, a psychiatrist at Massachusetts General Hospital and a professor of psychiatry at Harvard Medical School. “Taking care of your body is important, but tending to your relationships is a form of self-care too. That, I think, is the revelation.”
Close relationships, more than money or fame, are what keep people happy throughout their lives, the study revealed. Those ties protect people from life’s discontents, help to delay mental and physical decline, and are better predictors of long and happy lives than social class, IQ, or even genes. That finding proved true across the board among both the Harvard men and the inner-city participants.
The long-term research has received funding from private foundations, but has been financed largely by grants from the National Institutes of Health, first through the National Institute of Mental Health, and more recently through the National Institute on Aging.
Researchers who have pored through data, including vast medical records and hundreds of in-person interviews and questionnaires, found a strong correlation between men’s flourishing lives and their relationships with family, friends, and community. Several studies found that people’s level of satisfaction with their relationships at age 50 was a better predictor of physical health than their cholesterol levels were.
“When we gathered together everything we knew about them at age 50, it wasn’t their middle-age cholesterol levels that predicted how they were going to grow old,” said Waldinger in a popular TED Talk. “It was how satisfied they were in their relationships. The people who were the most satisfied in their relationships at age 50 were the healthiest at age 80.”
He recorded his TED talk, titled “What Makes a Good Life? Lessons from the Longest Study on Happiness,” in 2015, and it has been viewed 13,000,000 times.
The researchers also found that marital satisfaction has a protective effect on people’s mental health. Part of a study found that people who had happy marriages in their 80s reported that their moods didn’t suffer even on the days when they had more physical pain. Those who had unhappy marriages felt both more emotional and physical pain.
Those who kept warm relationships got to live longer and happier, said Waldinger, and the loners often died earlier. “Loneliness kills,” he said. “It’s as powerful as smoking or alcoholism.”
According to the study, those who lived longer and enjoyed sound health avoided smoking and alcohol in excess. Researchers also found that those with strong social support experienced less mental deterioration as they aged.
In part of a recent study, researchers found that women who felt securely attached to their partners were less depressed and happier in their relationships two and a half years later, and also had better memory function than those with frequent marital conflicts.



“Good relationships don’t just protect our bodies; they protect our brains,” said Waldinger in his TED talk. “And those good relationships, they don’t have to be smooth all the time. Some of our octogenarian couples could bicker with each other day in and day out, but as long as they felt that they could really count on the other when the going got tough, those arguments didn’t take a toll on their memories.”
Since aging starts at birth, people should start taking care of themselves at every stage of life, the researchers say.
“Aging is a continuous process,” Waldinger said. “You can see how people can start to differ in their health trajectory in their 30s, so that by taking good care of yourself early in life you can set yourself on a better course for aging. The best advice I can give is ‘Take care of your body as though you were going to need it for 100 years,’ because you might.”
The study, like its remaining original subjects, has had a long life, spanning four directors, whose tenures reflected their medical interests and views of the time.
Under the first director, Clark Heath, who stayed from 1938 until 1954, the study mirrored the era’s dominant view of genetics and biological determinism. Early researchers believed that physical constitution, intellectual ability, and personality traits determined adult development. They made detailed anthropometric measurements of skulls, brow bridges, and moles, wrote in-depth notes on the functioning of major organs, examined brain activity through electroencephalograms, and even analyzed the men’s handwriting.
Now, researchers draw men’s blood for DNA testing and put them into MRI scanners to examine organs and tissues in their bodies, procedures that would have sounded like science fiction back in 1938. In that sense, the study itself represents a history of the changes that life brings.
Psychiatrist George Vaillant, who joined the team as a researcher in 1966, led the study from 1972 until 2004. Trained as a psychoanalyst, Vaillant emphasized the role of relationships, and came to recognize the crucial role they played in people living long and pleasant lives.



In a book called “Aging Well,” Vaillant wrote that six factors predicted healthy aging for the Harvard men: physical activity, absence of alcohol abuse and smoking, having mature mechanisms to cope with life’s ups and downs, and enjoying both a healthy weight and a stable marriage. For the inner-city men, education was an additional factor. “The more education the inner city men obtained,” wrote Vaillant, “the more likely they were to stop smoking, eat sensibly, and use alcohol in moderation.”
Vaillant’s research highlighted the role of these protective factors in healthy aging. The more factors the subjects had in place, the better the odds they had for longer, happier lives.
“When the study began, nobody cared about empathy or attachment,” said Vaillant. “But the key to healthy aging is relationships, relationships, relationships.”
The study showed that the role of genetics and long-lived ancestors proved less important to longevity than the level of satisfaction with relationships in midlife, now recognized as a good predictor of healthy aging. The research also debunked the idea that people’s personalities “set like plaster” by age 30 and cannot be changed.
“Those who were clearly train wrecks when they were in their 20s or 25s turned out to be wonderful octogenarians,” he said. “On the other hand, alcoholism and major depression could take people who started life as stars and leave them at the end of their lives as train wrecks.”
Waldinger, the study’s fourth director, has expanded the research to the wives and children of the original men. That is the second-generation study, and Waldinger hopes to expand it into the third and fourth generations. “It will probably never be replicated,” he said of the lengthy research, adding that there is yet more to learn.
“We’re trying to see how people manage stress, whether their bodies are in a sort of chronic ‘fight or flight’ mode,” Waldinger said. “We want to find out how it is that a difficult childhood reaches across decades to break down the body in middle age and later.”
Lara Tang ’18, a human and evolutionary biology concentrator who recently joined the team as a research assistant, relishes the opportunity to help find some of those answers. She joined the effort after coming across Waldinger’s TED talk in one of her classes.
“That motivated me to do more research on adult development,” said Tang. “I want to see how childhood experiences affect developments of physical health, mental health, and happiness later in life.”
Asked what lessons he has learned from the study, Waldinger, who is a Zen priest, said he practices meditation daily and invests time and energy in his relationships, more than before.
“It’s easy to get isolated, to get caught up in work and not remembering, ‘Oh, I haven’t seen these friends in a long time,’ ” Waldinger said. “So I try to pay more attention to my relationships than I used to.”

Guiding Students to Apply What They Learn 03 - 24

 My school has been encouraging the use of project-based learning (PBL) for many years, but the math department—which I’m part of—was very slow to adopt it. I bought into myths about PBL—that it takes too long, that it’s hard to assess, etc.

I wasn’t opposed to all innovations: I flipped my classroom. I allowed my students to move at their own pace. I was the first in my school to adopt a competency-based approach in my class.

But while my students were mastering individual mathematical skills, they were missing the big picture. My assessments were well aligned to the practice work and the standards, but they rarely asked students to transfer and apply their knowledge. These assessments gave me clear information about a standard on its own but not about my students’ ability to problem-solve and think critically.

Why I Chose PBL


I needed an authentic assessment that would ask students to use their skills outside the context of my class, and it seemed that PBL would help my students transfer their knowledge. And I thought that PBL could challenge my students in a way that I had never challenged them before by requiring creativity, grit, and real problem-solving.

I knew that some of my students were also taking psychology and that statistics has countless applications in that field, so I approached the psychology teacher. We began by comparing our standards to see where we might collaborate. We found that one of the Common Core math standards, “Recognize the purposes of and differences among sample surveys, experiments, and observational studies; explain how randomization relates to each,” was similar to one of the psychology standards.

Next we broke down the standard to come up with a task. The students would need to design both a survey and an observational study that would answer research questions they would design themselves. They would need to explain how they used randomization to select samples.
We also needed to ensure that the task would be authentic. The psychology teacher had a connection at the local elementary school, so we decided that our students would design their research questions around things that could be observed there.
So we had our authentic task: “What would you like to learn about the state of elementary education today? What can you learn about the patterns and behaviors of elementary school students through observation and data analysis?”
Students spent two weeks planning for a day of observations at the elementary school, and they wrote survey questions. They had to gather analyzable data on variables of their own choosing, and then they had to report objective findings.
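The randomization requirement the students had to meet can be sketched in a few lines of Python; the roster and sample size below are purely hypothetical, not drawn from the actual class project.

```python
import random

def simple_random_sample(population, k, seed=None):
    """Draw a simple random sample of k units without replacement."""
    rng = random.Random(seed)
    return rng.sample(population, k)

# Hypothetical roster of 20 elementary students, identified only by ID.
roster = [f"student_{i:02d}" for i in range(1, 21)]

# Each survey team selects 5 students at random, so every student has an
# equal chance of inclusion -- the core idea behind randomized sampling.
sample = simple_random_sample(roster, 5, seed=42)
print(sample)
```

With a fixed seed the draw is reproducible, which is handy when students need to document exactly how their sample was selected.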

An unexpected connection between insulin receptor and gene expression opens new doors 04-10

 The discovery of insulin in the 1920s marked the breakthrough in the almost 3,500-year-long mystery of diabetes, a disease first described in ancient Egyptian papyruses.

Until its discovery, physicians struggled to explain how symptoms such as sugary urine, constant thirst and frequent urination could lead to ailments ranging from blindness and nerve damage to coma and death.

Over the past century, scientists have detailed the hormone’s central role as a regulator of blood sugar, mapped its cell-signaling pathways and established its involvement in diabetes and a staggering array of other chronic conditions, including neurodegeneration, cardiovascular disease and cancer.

Still, many aspects of insulin signaling remain unclear, particularly its long-term effects on cells, and there are currently no effective cures for the hundreds of millions of people around the world living with diabetes.

Now, researchers from Harvard Medical School have made key new insights into the molecular behavior of insulin. Reporting online in Cell on April 4, they describe an unexpected mechanism by which insulin triggers changes to the expression of thousands of genes throughout the genome.

Their analyses show that the insulin receptor—a protein complex at the cell surface—physically relocates to the cell nucleus after it detects and binds insulin. Once there, it helps initiate the expression of genes involved in insulin-related functions and diseases. This process was impaired in mice with insulin resistance.

The results outline a set of potential therapeutic targets for insulin-related diseases and establish a wide range of future avenues of research on insulin signaling, including potential clues toward the underlying biological mechanisms that differentiate type 1 and type 2 diabetes.

“Our findings open the door for a new field of study on the insulin receptor, a remarkable protein complex expressed in almost all cells and implicated in major chronic diseases that affect hundreds of millions of people,” said senior study author John Flanagan, professor of cell biology at HMS.
“Understanding the fundamental mechanisms of how cells work can help us design new drugs or improve existing ones, and the insulin receptor certainly has potential for tremendous returns on investment,” Flanagan added.
Produced by specialized cells in the pancreas, the hormone insulin serves as the main signal to cells to absorb glucose from the bloodstream and begin the production and metabolism of carbohydrates, fats and proteins. This process is essential for normal cell function, growth and nutrient storage.
Dysfunctions in insulin signaling give rise to a number of serious chronic diseases. In type 1 diabetes, pancreatic cells fail to produce enough insulin, and in type 2 diabetes—the far more common form of the condition—cells become resistant to insulin. Without proper insulin signaling, glucose accumulates in the blood where it damages tissues and organs. Insulin resistance has also been implicated in neurodegenerative diseases such as Alzheimer’s and Parkinson’s, and excessive insulin signaling contributes to a variety of cancers.
Strange bedfellows
Flanagan and colleagues were broadly interested in studying how cell surface receptors communicate with the interior of a cell and performed screens to identify proteins associated with the insulin receptor.
Their experiments suggested that one of the most prominent such proteins is RNA polymerase, an enzyme responsible for transcribing DNA into RNA—the first step in gene expression.
This was unexpected, said Flanagan, because RNA polymerase functions inside the nucleus of a cell, far from the cell surface where the insulin receptor is located. Additional analyses revealed a surprising explanation.
The team found that after the insulin receptor binds insulin, it moves from the cell surface to the nucleus via a yet unidentified mechanism. Once there, it binds to RNA polymerase on chromatin—the protein-DNA complex that cells use to store their genomes.
A genome-wide search revealed around 4,000 genomic regions where the insulin receptor bound with a degree of specificity that essentially rules out random chance. The striking majority of these sites were at promoters—sequences of DNA that initiate the expression of genes.
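The conclusion that such binding could not arise by chance is, at bottom, an enrichment calculation. The sketch below is purely illustrative, with invented numbers rather than the paper's actual statistics: it asks how probable it would be for most of a set of randomly placed genomic sites to land in promoters when promoters cover only a small fraction of the genome.

```python
from math import comb

def binom_tail(n, k, p):
    """Exact P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical numbers for illustration only: suppose promoters cover ~2%
# of the genome, yet 60 of 100 randomly chosen binding sites fall in
# promoters. Under random placement each site hits a promoter with p = 0.02.
p_value = binom_tail(100, 60, 0.02)
print(p_value)  # vanishingly small, i.e., binding is far from random
```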
A high proportion of targeted genes were involved in insulin-related functions, particularly the synthesis and storage of lipids and proteins. Certain subsets of genes appeared to be unique to different tissue types. The analyses also identified numerous disease-related genes, including ones linked with diabetes, cancer and neurodegeneration.
Lipid paradox
Counterintuitively, the researchers found the insulin receptor does not specifically target genes involved in carbohydrate metabolism—one of the primary functions of insulin signaling.
This was an intriguing result for many reasons, Flanagan said, particularly because of the observed differences between the two major forms of diabetes. Both types involve problems with carbohydrate synthesis and storage. However, if left untreated, patients with type 1 diabetes lose weight, while type 2 diabetes is associated with obesity.
“The excessive lipid storage seen in type 2 diabetes compared with type 1 is a bit of a paradox because disrupted insulin signaling should cause issues with both lipid synthesis and storage in either condition,” he said.
“The finding that genes downstream of the pathway we identified are involved in lipid metabolism but not carbohydrate metabolism potentially gives us a window into that differential effect between carbohydrate and lipid,” Flanagan said. “But we won’t know until we perform further experiments.” 
New paths
The researchers made a number of other insights on how the insulin receptor regulates genes.
They identified several additional proteins that play a role in this process. One of particular interest was HCF-1 (host cell factor-1), which is expressed in all cells and is involved in regulating cell cycle and growth. It appears to play a critical role in recruiting the insulin receptor and other proteins to the location of a promoter to initiate gene activation.
The team also studied the effects of insulin resistance on this pathway. Giving mice glucose to trigger a rise in blood insulin led to an increase in insulin receptor-chromatin binding. Insulin-resistant mice, however, showed a 30-fold reduction in receptor-chromatin binding, an observation that suggests a high degree of sensitivity to insulin resistance.
While the insulin receptor has been studied intensely for decades, these findings represent a new pathway for insulin signaling function and shed light on potential mechanisms for the long-term effects of insulin in the body.
Intriguingly, as far back as the 1970s, scientists had clues that the insulin receptor and other members of the same class of cell surface receptors, known as receptor tyrosine kinases, can be found within the cell nucleus. These observations remained poorly understood, and the process behind them was never fully described.
The identification of this pathway opens new avenues of investigation into the insulin receptor and other receptor tyrosine kinases, which function as key “on” or “off” switches for a wide range of important cellular processes.
“We were surprised to find such strong evidence that the entire insulin receptor complex moves to the nucleus, and we were initially very skeptical,” Flanagan said. “We still don’t know how exactly this happens, but we’ve pinned down the details of much of this process on a genome-wide scale.”
“A better understanding will help us improve our knowledge of the biology of insulin signaling in health and in disease, as well as other receptor tyrosine kinases, which are attractive targets for drug therapies due to their involvement in such a broad range of diseases,” Flanagan added.


AI analyzes language to predict schizophrenia 06-21


A machine-learning method has uncovered a hidden clue in people’s language that can predict psychosis episodes.


A machine learning method uncovered a hidden clue in people’s language predictive of the later manifestation of psychosis: the frequent use of words associated with sound. A paper published by the journal npj Schizophrenia released the findings by scientists from Emory University and Harvard University.

Hidden details

The researchers developed a new machine-learning methodology to more precisely quantify the semantic richness of people’s conversational language (a known indicator for psychosis). Their results indicated that automated analysis of the two language variables (more frequent use of words associated with sound and speaking with low semantic density, or vagueness) can predict if an at-risk person will later develop psychosis with an impressive 93 percent accuracy.
Trained clinicians had not noticed how individuals at risk for psychosis use more words associated with sound than the average population, though abnormal auditory perception is a pre-clinical symptom.
Machine learning can spot patterns in people’s use of language that even doctors who have undergone training to diagnose and treat those at risk of psychosis may not notice. “Trying to hear these subtleties in conversations with people is like trying to see microscopic germs with your eyes,” says first study author Neguine Rezaii, a fellow in the Department of Neurology at Harvard Medical School. That being said, it is possible to use machine learning to find subtle patterns hiding in people’s language. “It’s like a microscope for warning signs of psychosis,” she adds. Rezaii started working on the study while she was a resident in the Department of Psychiatry and Behavioral Sciences at Emory University School of Medicine.

Behind the data

Researchers first used machine learning to establish “norms” for conversational language. They fed a computer software program the online conversations of 30,000 users of Reddit, a popular social media platform where people have informal discussions about a wide array of subjects. The software program, known as Word2Vec, uses an algorithm to convert individual words into vectors, assigning each one a location in a semantic space based on its meaning. Words with similar meanings are positioned closer together than words with different meanings.
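Word2Vec's core idea, that words with similar meanings sit close together in vector space, can be illustrated with a minimal cosine-similarity sketch. The three-dimensional vectors below are hand-made toys, not the output of a trained model; a real Word2Vec embedding has hundreds of dimensions.

```python
import math

def cosine(u, v):
    """Cosine similarity: near 1.0 means same direction, near 0.0 unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy 3-d vectors standing in for trained embeddings. In a real model
# (e.g., gensim's Word2Vec), these positions are learned from text.
vectors = {
    "whisper": [0.9, 0.1, 0.0],
    "murmur":  [0.85, 0.15, 0.05],
    "chair":   [0.0, 0.2, 0.95],
}

print(cosine(vectors["whisper"], vectors["murmur"]))  # close to 1
print(cosine(vectors["whisper"], vectors["chair"]))   # close to 0
```

The same geometry lets researchers ask, automatically, whether a speaker's words cluster unusually around a theme such as sound.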
They also developed a computer program to perform “vector unpacking,” or analysis of the semantic density of word usage. Previous work has measured semantic coherence between sentences. Vector unpacking enabled the researchers to quantify how much information was packed into each sentence. After generating a baseline of “normal” data, the researchers applied the same techniques to diagnostic interviews of 40 participants that had been conducted by trained clinicians, as part of the multi-site North American Prodrome Longitudinal Study (NAPLS), funded by the National Institutes of Health. 
The automated analyses of the participant samples were then compared to the normal baseline sample and the longitudinal data on whether the participants converted to psychosis.
“This research is interesting not just for its potential to reveal more about mental illness, but for understanding how the mind works,” concludes senior author Phillip Wolff, a professor of psychology at Emory.

Bill that seeks to lift green card cap amended to protect US 06-21

Eliminating the country quota from the most sought-after Green Cards will end the current discrimination in the US labour market, but would allow countries like India and China to dominate the path to American citizenship, according to the latest Congressional report.
Having a Green Card allows a person to live and work permanently in the United States.
Indian-Americans, most of whom are highly skilled and come to the US mainly on the H-1B work visas, are the worst sufferers of the current immigration system which imposes a seven per cent per country quota on allotment of Green Cards or the Legal Permanent Residency (LPR).


The bipartisan Congressional Research Service (CRS), an independent research wing of Congress, said that if the per-country cap for employment-based immigrants were removed, Indian and Chinese nationals would be expected to dominate the flow of new employment-based LPRs for as many years as needed to clear out the accumulated queue of prospective immigrants from those countries.
This queue would include those with approved employment-based immigrant petitions waiting to file either a visa  .. 


The CRS regularly prepares reports on various issues for the lawmakers to take informed decisions.

A copy of the report 'Permanent Employment-Based Immigration and the Per-country Ceiling' dated December 21 was made available to PTI, ahead of the new Congress beginning January 3, wherein several lawmakers are planning to introduce a legislation to eliminate per-country quota for issuing Green Cards to foreign nationals. 

As of April 2018, a total of 306,601 Indian nationals – mostly IT professionals – were waiting in line for Green Cards, according to the USCIS figures. 

Indians constitute 78 per cent of the 395,025 foreign nationals waiting for Green Cards in just one category of employment-based LPR applications.

Due to the cap, the current wait period for the majority of Indians to get a Green Card is nine and a half years, the CRS said, adding this could increase or decrease depending on the number of new applications every year. India is followed by China, with 67,031 in line for Green Cards.

Lawmakers favouring eliminating the per-country cap contend that such circumstances effectively encourage employers to sponsor prospective employment-based immigrants primarily from India.

Proponents argue that removing the per-country ceiling from employment-based immigrants would "level the playing field" by making immigrants from all countries more equally attractive to employers, the CRS said. 

According to the CRS, eliminating the per-country ceiling would reduce certain queues of prospective immigrants more quickly, and remove the perceived employer incentive to choose nationals from these countries over other countries. 

"Shorter wait times for LPR status might actually incentivise greater numbers of nationals from India, China and the Philippines to seek employment-based LPR status. If that were to occur, the reduction in the number of approved petitions pending might be short-lived.
"A handful of countries could conceivably dominate employment-based immigration, possibly benefitting certain industries that employ foreign workers from those countries, at the expense of foreign workers from other countries and other industries that might employ them," the CRS said. 

Because the Immigration and Nationality Act (INA) grants LPRs the ability to sponsor family members through its family-sponsorship provisions, removing the per-country ceiling would alter, to an unknown extent, the country-of-origin composition of subsequent family-based immigrants acquiring LPR status each year, it said. 


Keeping the country's demographic profile from tilting towards people from one part of the world was one of the prime reasons for the current per-country quota. On the other hand, the quota restricts the flow of the most talented foreign workers.



The INA allocates 140,000 visas annually for all five employment-based LPR categories, roughly 12 per cent of the 1.1 million LPRs admitted in fiscal 2017. It further limits each immigrant-sending country to an annual maximum of seven per cent of all employment-based LPR admissions, known as the per-country ceiling, or "cap". 
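The quota arithmetic implied by these figures is easy to check. The sketch below combines the report's numbers naively; actual waits differ because unused visa numbers can spill over across countries and categories, which is why this naive estimate comes out far larger than the nine-and-a-half-year wait the CRS quotes for the majority of applicants.

```python
# Back-of-the-envelope arithmetic from the figures in the CRS report.
ANNUAL_EB_VISAS = 140_000        # INA annual employment-based allocation
PER_COUNTRY_CEILING_PCT = 7      # 7% per-country cap

per_country_max = ANNUAL_EB_VISAS * PER_COUNTRY_CEILING_PCT // 100
print(per_country_max)           # 9800 visas per country per year at most

# Naive years-to-clear estimate for the reported Indian backlog
# (a simplification: spillover and category rules change real waits).
backlog = 306_601                # Indian nationals in line (USCIS, April 2018)
years_to_clear = backlog / per_country_max
print(round(years_to_clear, 1))  # ~31 years if the cap bound every year
```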

Two popular employment-based pools of foreign nationals, who have been approved as employment-based immigrants but must wait for statutorily limited visa numbers, totalled in excess of 900,000 as of mid-2018. Most originate from India, followed by China and the Philippines, the CRS said.

Some employers maintain that they continue to need skilled foreign workers to remain internationally competitive and to keep their firms in the US, it said. 

Proponents of increasing employment-based immigration levels argue it is vital for economic growth. Opponents cite the lack of compelling evidence of labour shortages and argue that the presence of foreign workers can negatively impact wages and working conditions in the US, the CRS said.

"Some argue that eliminating the per-country ceiling would increase the flow of high-skilled immigrants from countries such as India and China, who are often employed in the US technology sector, without increasing the total annual admission of employment-based LPRs," it added.

3 technologies that could define the next decade of cybersecurity 06-22

In little over a decade, cybercrime has moved from being a specialist and niche crime type to one of the most significant strategic risks facing the world today, according to the World Economic Forum Global Risks Report 2019. Nearly every technologically advanced state and emerging economy in the world has made it a priority to mitigate the impact of financially motivated cybercrime.

The global experience of the past decade has largely been dominated by the emergence of a professional underground economy that provides scale, significant return-on-investment and entry points for criminals to turn a technical specialist crime into a global volume crime. The cybersecurity landscape in the past decade has been shaped by the targeting of financial institutions, notably with malware configured to harvest payment information and target financial platforms. The early cybercrime market that gave rise to the criminal online ecosystem was centred on the trading of harvested stolen credit cards, and some of the most high-profile and sophisticated global attacks focus on the penetration and manipulation of the internal networks of complex global payment systems. 

The Russian-speaking world has not been immune from these trends. Cyberattacks on financial organizations in Russia, Central Asia and Eastern Europe by some of the most sophisticated cybercrime gangs in the world have targeted clients, digital channels and networks. The Russian-speaking underground economy is one of the most active globally, with hundreds of fora and tens of thousands of users. Criminal groups exploit the margins of co-operation to conduct global campaigns, and their threat capacity is always adapting as groups work together in a borderless environment to combat technical defences. 



The past 10 years mark only the start of the global cybersecurity journey. New architectures and cooperation are required as we stand at the brink of a new era of cybercrime, which will be empowered by new and emergent technology. These three technologies might very well define the next 10 years of global cybersecurity: 

1. 5G networks and infrastructure convergence

A new generation of 5G networks will be the single most challenging issue for the cybersecurity landscape. It is not just faster internet; the design of 5G will mean that the world will enter into an era where, by 2025, 75 billion new devices will be connecting to the internet every year, running critical applications and infrastructure at nearly 1,000 times the speed of the current internet. This will provide the architecture for connecting whole new industries, geographies and communities - but at the same time it will hugely alter the threat landscape, as it potentially moves cybercrime from being an invisible, financially driven issue to one where real and serious physical damage will occur at a 5G pace. 

5G will potentially provide any attacker with instant access to vulnerable networks. When this is combined with enterprise and operational technology, a new generation of cyberattacks will emerge, some of which we are already seeing. The recent ransomware attack against the US city of Baltimore, for example, locked 10,000 employees out of their workstations. In the near future, smart city infrastructures will provide interconnected systems at a new scale, from transport systems for driverless cars and automated water and waste systems to emergency workers and services, all interdependent and - potentially - as highly vulnerable as they are highly connected. In 2017, the WannaCry attack that took parts of the UK’s National Health Service down took days to spread globally, but in a 5G era such malware would spread at the speed of light. It is clear that 5G will not only enable great prosperity and help to save people’s lives, it will also have the capacity to thrust cybercrime into the real world at a scale and with consequences yet unknown.
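The speed argument can be made concrete with a toy exponential-spread model. The host count and doubling times below are hypothetical, chosen only to show that saturation time scales linearly with propagation speed.

```python
import math

def hours_to_saturate(population, doubling_hours):
    """Hours for a toy outbreak growing from 1 host to `population` by pure doubling."""
    return math.log2(population) * doubling_hours

N = 200_000  # hypothetical count of vulnerable hosts (WannaCry-scale)
print(hours_to_saturate(N, doubling_hours=1.0))   # ~17.6 hours
print(hours_to_saturate(N, doubling_hours=0.01))  # ~0.18 hours at 100x propagation speed
```

Real worms are limited by scanning, patching, and network topology, but the linear scaling is the point: cut the doubling time by 100x and a days-long outbreak compresses into minutes.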

2. Artificial intelligence

To build cyber defences capable of operating at the scale and pace needed to safeguard our digital prosperity, artificial intelligence (AI) is a critical component in how the world can build global immunity from attacks. Given the need for huge efficiencies in detection, provision of situational awareness and real-time remediation of threats, automation and AI-driven solutions are the future of cybersecurity. Critically, however, the experience of cybercrime to-date shows that any technical developments in AI are quickly seized upon and exploited by the criminal community, posing entirely new challenges to cybersecurity in the global threat landscape. 

The use of AI by criminals will potentially bypass – in an instant – entire generations of technical controls that industries have built up over decades. In the financial services sector we will soon start to see criminals deploy malware with the ability to capture and exploit voice synthesis technology, mimicking human behaviour and biometric data to circumvent authentication of controls for people’s bank accounts, for example. But this is only the beginning. Criminal use of AI will almost certainly generate new attack cycles, highly targeted and deployed for the greatest impact, and in ways that were not thought possible in industries never previously targeted: in areas such as biotech, for the theft and manipulation of stored DNA code; mobility, for the hijacking of unmanned vehicles; and healthcare, where ransomware will be timed and deployed for maximum impact. 

3. Biometrics

To combat these emerging threats, biometrics is being widely introduced in different sectors and with various aims around the world, while at the same time raising significant challenges for the global security community. Biometrics and next-generation authentication require high volumes of data about an individual, their activity and behaviour. Voices, faces and the slightest details of movement and behavioural traits will need to be stored globally, and this will drive cybercriminals to target and exploit a new generation of personal data. Exploitation will no longer be limited to the theft of people’s credit card number, but will target theft of their being – their fingerprints, voice identification and retinal scans. 

Most experts agree that three-factor authentication is the best available option, and that two-factor authentication is a must. ‘Know’ (password), ‘have’ (token) and ‘are’ (biometrics) are the three factors of authentication, and each one makes the process stronger and more secure. For those charged with defending our digital future, however, the need to understand an entire ecosystem of biometric software, technology and storage points makes a rapidly expanding attack surface still harder to defend.
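As a rough illustration of how the three factors combine, here is a minimal Python sketch. Everything in it is a hypothetical simplification — the credential names, the stand-in token check and the exact-match biometric comparison are invented for the example, not any real product's API:

```python
import hashlib
import hmac

# Hypothetical stored credentials for one user (illustration only).
STORED = {
    "password_hash": hashlib.sha256(b"correct horse").hexdigest(),  # 'know'
    "token_secret": b"shared-secret",                               # 'have'
    "biometric_template": "fingerprint-template-001",               # 'are'
}

def check_password(password: str) -> bool:
    # 'Know' factor: constant-time comparison of password hashes.
    return hmac.compare_digest(
        hashlib.sha256(password.encode()).hexdigest(),
        STORED["password_hash"],
    )

def check_token(code: str) -> bool:
    # 'Have' factor: stand-in for a real TOTP/hardware-token check,
    # deriving the expected code from the shared secret.
    expected = hashlib.sha256(STORED["token_secret"]).hexdigest()[:6]
    return hmac.compare_digest(code, expected)

def check_biometric(template: str) -> bool:
    # 'Are' factor: real systems do fuzzy matching on biometric data;
    # an exact string match stands in for that here.
    return template == STORED["biometric_template"]

def authenticate(password: str, code: str, template: str) -> bool:
    # Three-factor authentication: all three independent checks must pass.
    return check_password(password) and check_token(code) and check_biometric(template)
```

In practice the ‘have’ factor would be a time-based one-time code or hardware token and the ‘are’ factor a fuzzy biometric match; the point is only that each factor is an independent check, so an attacker must defeat all three.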

What next?

Over the past decade, criminals have been able to seize on a low-risk, high-reward landscape in which attribution is rare and significant pressure is placed on the traditional levers and responses to crime. In the next 10 years, the cybersecurity landscape could change significantly, driven by a new generation of transformative technology. To understand how to secure our shared digital future we must first understand how the security community believes the cyberthreat will change and how the consequent risk landscape will be transformed. This critical and urgent analysis must be based on evidence and research, and must leverage the expertise of those in academia, the technical community and policymakers around the world. By doing this, the security ecosystem can help build a new generation of cybersecurity defences and partnerships that will enable global prosperity. 






These are the world's best universities by subject 06-23





If you want to pick the best university for your chosen field of study, go to the U.S.
The QS World University Rankings show U.S. universities hold the top spot for virtually all subjects.

Harvard, Stanford and the Massachusetts Institute of Technology dominate in engineering and technology; natural sciences; and social sciences and management.

Arts and humanities and life sciences and medicine are the exceptions, with two UK universities – Oxford and Cambridge – in the top three for those areas.

The rankings group 48 disciplines into the five broad subject categories. They highlight the best universities in each category using research citations and surveys of employers and academics. In total, 1,200 universities in 78 locations are listed.
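As a mechanical illustration of how composite rankings of this kind are built, here is a toy weighted-score sketch. The indicator names and weights below are hypothetical — QS's actual indicators and their weights vary by subject:

```python
# Hypothetical indicator weights (for illustration only; QS varies
# its indicators and weights from subject to subject).
WEIGHTS = {
    "academic_reputation": 0.4,   # survey of academics
    "employer_reputation": 0.3,   # survey of employers
    "citations_per_paper": 0.3,   # research citations
}

def subject_score(indicators: dict) -> float:
    """Combine normalized (0-100) indicator scores into one weighted score."""
    return sum(WEIGHTS[k] * indicators[k] for k in WEIGHTS)

score = subject_score({
    "academic_reputation": 90,
    "employer_reputation": 80,
    "citations_per_paper": 70,
})
print(score)
```

Universities are then sorted by this composite score within each subject; changing the weights changes the ordering, which is why rankings from different publishers rarely agree exactly.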
Outside of the top slots, Asian universities also get a good showing, in particular Nanyang Technological University Singapore and the National University of Singapore. 

Western universities have dominated the top tier of higher education tables for years, but providers in Asia are becoming increasingly visible players among the elite, and funding for education is on the rise.

Globalisation is rapidly changing the education sector. As the global middle class expands, there will be increasing demand for higher education, particularly in China and India.
The battle to host international students – and benefit from the income and expertise they bring – is already on. Governments from the UK to Japan are bringing in measures to attract the best minds.

Using body heat to speed healing 07-26





Bioinspired wound dressing contracts in response to body heat. 

Cuts, scrapes, blisters, burns, splinters, and punctures — there are a number of ways our skin can be broken. Most treatments for skin wounds involve simply covering them with a barrier (usually an adhesive gauze bandage) to keep them moist, limit pain, and reduce exposure to infectious microbes, but they do not actively assist in the healing process.
More sophisticated wound dressings that can monitor aspects of healing such as pH and temperature and deliver therapies to a wound site have been developed in recent years, but they are complex to manufacture, expensive, and difficult to customize, limiting their potential for widespread use.
Now, a new, scalable approach to speeding up wound healing has been developed based on heat-responsive hydrogels that are mechanically active, stretchy, tough, highly adhesive, and antimicrobial: active adhesive dressings (AADs). Created by researchers at the Wyss Institute for Biologically Inspired Engineering at Harvard University, the Harvard John A. Paulson School for Engineering and Applied Sciences (SEAS), and McGill University, AADs can close wounds significantly faster than other methods and prevent bacterial growth without the need for any additional apparatus or stimuli. The research is reported in Science Advances.
“This technology has the potential to be used not only for skin injuries, but also for chronic wounds like diabetic ulcers and pressure sores, for drug delivery, and as components of soft robotics-based therapies,” said corresponding author David Mooney, a founding core faculty member of the Wyss Institute and the Robert P. Pinkas Family Professor of Bioengineering at SEAS.
AADs take their inspiration from developing embryos, whose skin is able to heal itself completely, without forming scar tissue. To achieve this, the embryonic skin cells around a wound produce fibers made of the protein actin that contract to draw the wound edges together, like a drawstring bag being pulled closed. Skin cells lose this ability once a fetus develops past a certain age, and any injuries that occur after that point cause inflammation and scarring during the healing process.
To mimic the contractile forces that pull embryonic skin wounds closed, the researchers extended the design of previously developed tough, adhesive hydrogels by adding a thermoresponsive polymer known as PNIPAm, which both repels water and shrinks at around 90 degrees Fahrenheit. The resulting hybrid hydrogel begins to contract when exposed to body heat, and transmits the force of the contracting PNIPAm component to the underlying tissue via strong bonds between the alginate hydrogel and the tissue. In addition, silver nanoparticles are embedded in the AAD to provide antimicrobial protection.
“The AAD bonded to pig skin with over 10 times the adhesive force of a Band-Aid and prevented bacteria from growing, so this technology is already significantly better than most commonly used wound protection products, even before considering its wound-closing properties,” said Benjamin Freedman, a Graduate School of Arts and Sciences’ postdoctoral fellow in the Mooney lab who is leading the project.
To test how well their AAD closed wounds, the researchers tested it on patches of mouse skin and found that it reduced the size of the wound area by about 45 percent compared to almost no change in area in the untreated samples, and closed wounds faster than treatments including microgels, chitosan, gelatin, and other types of hydrogels. The AAD also did not cause inflammation or immune responses, indicating that it is safe for use in and on living tissues.
Furthermore, the researchers were able to adjust the amount of wound closure performed by the AAD by adding different amounts of acrylamide monomers during the manufacturing process. “This property could be useful when applying the adhesive to wounds on a joint like the elbow, which moves around a lot and would probably benefit from a looser bond, compared to a more static area of the body like the shin,” said co-first author Jianyu Li, a former postdoctoral fellow at the Wyss Institute who is now an assistant professor at McGill University.
The team also created a computer simulation of AAD-assisted wound closure, which predicted that AAD could cause human skin to contract at a rate comparable to that of mouse skin, indicating that it has a higher likelihood of displaying a clinical benefit in human patients.
“We are continuing this research with studies to learn more about how the mechanical cues exerted by AAD impact the biological process of wound healing, and how AAD performs across a range of different temperatures, as body temperature can vary at different locations,” said Freedman. “We hope to pursue additional preclinical studies to demonstrate AAD’s potential as a medical product, and then work toward commercialization.”
Additional authors of the paper include co-first author Serena Blacklow, a former member of the Mooney lab who is now a graduate student at the University of California, San Francisco; Mahdi Zeidi, a graduate student at University of Toronto; and Chao Chen, a former graduate student in SEAS who is now a postdoc at UMass Amherst.
This research was supported by the National Institutes of Health, The Wyss Institute for Biologically Inspired Engineering at Harvard University, the National Sciences and Engineering Research Council of Canada, the Canada Foundation for Innovation, and the Harvard University Materials Research Science and Engineering Center.


Pain is a signal of vulnerability 08-18


Disclaimer: I am not a medical professional and this is not medical advice. This post only states my beliefs as a result of my research on the topic. Full disclaimer .

Millions of people are affected by chronic pain. For some of them, there is an injury or a disease causing it. For others, their mind is causing the pain.
This essay explains the rationale behind this behavior of our mind and explores some ways to cure this kind of chronic pain.

The purpose of pain

Pain serves a purpose. When we twist our ankle, it becomes painful. This is good. It makes us aware that we twisted it, and it ensures that we do not step on it before it has healed.

But pain is not a signal of damage

Many believe that pain is a signal of damage occurring somewhere in our body. This is incorrect: pain is a signal that we are vulnerable, that damage can occur to us in the future. Most times, we are vulnerable because of damage (as in the example of the twisted ankle), hence the confusion between pain as a signal of damage and pain as a signal of vulnerability. I will try to explain why pain is indeed linked to vulnerability, but not (necessarily) to damage.
We do not always feel pain when there is damage in our body, and there is not always damage in our body when we feel pain. However, every time we feel pain there is a potential for future damage (a vulnerability).
Here are some examples of cases in which our body has been damaged and yet, pain is not felt:
  • There are numerous reports of soldiers in World War I and II who lost a limb in an explosion and yet did not feel any pain. Why? Because the injury meant they would not have to fight anymore and would soon be sent home. They felt less vulnerable to death in a hospital bed with a severed limb than in the trenches. The future feels safe(r), so no pain. As I will show later, the pain neurons in the affected limb still fire to signal pain, but the signal is probably suppressed by the brain. Once the soldier is on a boat heading home, pain has no utility towards changing his behavior: he is attended by doctors and on his way home.
  • When doing sports, we sometimes sustain a small injury. Often, we do not feel the pain until after we stop the physical activity. This is likely a consequence of the fact that sports are often not very different from activities such as fighting or fleeing. In such cases, there is a much higher risk of damage to our body if we rest in front of an enemy or predator which might kill us than if we step on that twisted ankle or keep using the muscle we just sprained. As a consequence, using the damaged body part is perceived as less dangerous than not using it, and no pain is felt (at least after the first few seconds). The pain signal is suppressed.
Examples of cases in which we feel pain but there is no actual damage in the body:
  • If your finger touches a hot pot, you will feel pain, even if the touch only lasted for an instant and your skin did not receive any actual damage. In this case, pain was not a signal that your skin got damaged (it didn’t), but a signal that your skin might get damaged if the behavior persists (i.e. your skin is vulnerable).
  • Similarly, when exposed to extreme cold for a short amount of time, your skin will feel painful. It is not damaged yet, but it will be if the exposure continues. Pain is not a signal of current damage, but of future damage.
  • Science writer Erik Vance tells an interesting story. Sitting in a lab, he was administered electrical discharges. When the screen in front of him turned red, he would receive a stronger charge; when it turned green, a lighter one. After following this pattern for a few minutes, the researchers changed the rule: all discharges would be strong ones. Erik, who was not aware of the change, still perceived little pain when the screen was green, even though the actual discharges were strong. His brain overrode reality with expectations (of imminent body damage). Expecto, ergo est.
  • Psychosomatic pain. A condition for the emergence of psychosomatic pain is a perceived condition of generalized vulnerability (and, by association, future physical damage). I will clarify this later in the chapter.
I can draw an example of psychosomatic pain from my personal experience. When I was 23, I noticed a fast-growing black mole on my right foot. I did not feel any pain, but I decided to visit a dermatologist anyway. After the examination, he told me that the mole could degenerate if left unchecked. We scheduled the removal for the following week. During the 7 days between the dermatologist’s visit and the surgical operation, my foot ached. Of course, there was no good reason for my foot to ache: I had no injury, and there is no way that a tiny mole, even if malignant, is painful. Yet there I was, feeling pain. Why? During the first visit, the dermatologist said that the mole could degenerate if not removed. My brain captured that information and inferred that my foot was vulnerable. To ensure that I would stay focused on the task of removing the mole, it made me feel pain. (How my brain physiologically managed to make me feel pain will be explained later.)

Damage does not have to be physical

Physical damage can predict further physical damage. For example, if our ankle is twisted, we are in a vulnerable situation. Not only stepping on our ankle can make the injury worse, but a twisted ankle is also a liability in case we have to escape from a predator or fight with an enemy. Physical damage is a vulnerability.
However, current physical damage is not the only predictor of future physical damage. Also psychological damage, social damage and lack of resources predict future physical damage. When we are healthy but in a condition that might lead to us being unhealthy in the future, we are vulnerable.
Some examples:
  • If I lack the resources I need for living (food, money, sheltering, etc.), I am at a higher risk of physical deterioration.
  • If I did something wrong which hurt other members of my community, I face the risk of being isolated and ultimately ostracized by my community (social damage). Alone, it is much harder to find the resources needed for living and the help or support in case of need: over time, physical deterioration might follow.
  • If I am depressed (psychological damage), I am a less attractive mate and a worse friend. This might lead me to get left alone by my previous friends. As explained in the previous bullet point, if alone, I am more likely to face a shortage of resources and external support, which directly leads to a higher risk of physical damage or deterioration.
Therefore, it makes sense for my body and brain to consider lack of resources, social damage and psychological damage as vulnerabilities, and thus akin to a risk of physical damage. Such risk of physical damage then manifests as pain. However, such vulnerability and pain are generalized: they are linked to our personal situation, but not to any specific part of the body. If our brain has no reason to target a specific part of the body with pain, it manifests the generalized vulnerability as stress. In the next section I will explain this process.

Stress as generalized vulnerability

First, let me explain the purpose of stress. The feeling of being vulnerable is used to trigger reactions and to find a solution to the root cause of the vulnerability (how this takes place in practice will be the topic of the next section). However, such reactions, like all actions and reactions initiated or mediated by our brain, have to undergo motivational gating by the basal ganglia. If such reactions are repressed (they do not manage to overcome the motivational gating), no solution or damage mitigation plan is found, and the vulnerability goes unaddressed. What started as a clear signal of vulnerability is now a generalized signal of vulnerability. Our brain still knows that something is wrong, even if repression through motivational gating does not allow it to know exactly what is wrong. Nevertheless, something has to be done. The signal that something has to be done is stress.
Our brain is mostly an inference machine. The role of each neuron or group of neurons is to recognize a pattern in the inputs it receives. Such inputs are of three types: sensorial data (bottom-up), context (lateral), and expectations (top-down). Let’s see a (much simplified) example of how this works. Let’s say that a neuron’s job is to fire when it recognizes a dog. This means that it will fire if it recognizes sensorial input corresponding to four paws, a body of a certain size, a fur, a head, and a tail. If the only sensorial input is a tail, it will generally not fire. It needs a minimum number of sensorial stimuli matching the pattern of what a dog looks like in order to fire. Such minimum number is called the admissibility threshold. For example, it might fire when 4 of the 5 visual characteristics of a dog listed above are recognized. However, top-down expectations might reduce the number of sensorial data needed for the neuron to fire. If I am at home and I own a dog, I expect to see it in the living room. If through the kitchen door I only see a tail, my neuron will fire: it is highly probable that that tail means that my dog is there. In other words, top-down expectations reduce the admissibility threshold and thus the amount of proof (sensorial stimuli) needed to conclude that what is expected is indeed there.
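The dog example can be sketched as a toy model. The feature set, the baseline threshold of 4, and the way an expectation drops it to 1 are all invented numbers, purely to illustrate the mechanism:

```python
# Toy model of a 'dog neuron'. Features and thresholds are invented
# for illustration; real neural inference is far more complex.
DOG_FEATURES = {"paws", "body", "fur", "head", "tail"}

def neuron_fires(observed: set, expectation: bool = False,
                 baseline_threshold: int = 4) -> bool:
    """Fire when enough dog-like features are recognized.

    A top-down expectation (e.g. 'my dog is usually home') lowers the
    admissibility threshold, so far less sensory evidence is needed.
    """
    threshold = 1 if expectation else baseline_threshold
    matches = len(observed & DOG_FEATURES)
    return matches >= threshold

print(neuron_fires({"tail"}))                    # False: one feature is not enough
print(neuron_fires({"tail"}, expectation=True))  # True: expectation lowers the bar
```

With no expectation, a lone tail through the kitchen door is insufficient evidence; with the expectation that the dog is home, the same tail is enough to conclude "dog".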
Now, enter stress. Stress is a state of generalized vulnerability: our brain knows that we are vulnerable, but such vulnerability is not associated with a specific body part and thus does not generate pain. However, because of the vulnerability, our brain has a higher expectation of feeling pain. This top-down expectation lowers the sensory threshold for experiencing pain. My hypothesis is that, because of this top-down expectation, our brain will be more likely to find admissible sensorial signals that are precursors of pain. Let’s see an example: I do some work in the backyard. Usually, I would need to lift a very high load to damage my back. Let’s say that lifting 50 kg would cause my back to suffer damage (such as a herniated disc). At this point, I will suffer pain: a signal that I need to rest; otherwise, I will suffer worse injuries. Even after healing, my brain is likely to remember that lifting 50 kg might cause back damage. The next time I lift 50 kg, even if my back does not actually get injured, I am likely to feel pain: a signal that I’m vulnerable to damage. Now, let’s imagine that I am in a stressful period of my life. This time, the pain threshold will be lower (due to the stress). One of two phenomena is likely to occur:
  • Since the pain threshold is lower, the sensorial signals sent by my back to my brain when I lift 40 kg are enough to cause pain. I might think that I am injured and have a herniated disc, even if my back does not actually have one.
  • A generalized vulnerability manifests as stress because it does not have any admissible location where to manifest as body pain. Lifting the weight gives my brain an admissible location where to feel the pain: my back. So, I feel pain there.
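The lifting example above can be put into the same toy terms. The 50 kg and 40 kg figures come from the text; the 10 kg stress adjustment is an invented illustration of the threshold-lowering effect:

```python
def feels_pain(load_kg: float, stressed: bool,
               baseline_threshold_kg: float = 50.0,
               stress_reduction_kg: float = 10.0) -> bool:
    # Under stress, the top-down expectation of pain lowers the sensory
    # threshold at which signals from the back are read as pain.
    threshold = baseline_threshold_kg - (stress_reduction_kg if stressed else 0.0)
    return load_kg >= threshold

print(feels_pain(40, stressed=False))  # False: below the remembered 50 kg threshold
print(feels_pain(40, stressed=True))   # True: stress lowered the threshold to 40 kg
```

The same 40 kg load produces pain or no pain depending only on the stress state, which is the essay's point: the sensory input is unchanged, but the inference drawn from it is not.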
(In some cases of chronic pain, doctors recommend serotonin uptake inhibitors. They work because lack of serotonin signals a lack of resources – which is a condition of vulnerability and therefore causes the admissibility threshold for pain to lower.)
You might feel skeptical: can our brain really imagine pain? Why would it do that to itself? Let me show you some medical results that suggest it really is like that.

John Sarno’s patients

In his books “The Mindbody Prescription” and “The Divided Mind”, Dr. John Sarno describes numerous cases of patients with psychogenic pain: pain engineered by the brain. Dr. Sarno would first make an objective analysis to eliminate other diagnoses. Then he would interview the patients and notice that they were very stressed. He would then tell them that their pain was psychogenic: it originated in their brain. They didn’t feel pain because something was wrong in the body, but because their brain was making them feel pain. Their brain, he would explain to them, was doing so in order to distract them from thinking inadmissible thoughts about themselves, which were unconsciously generating guilt, shame, and other emotions that would all create stress (Author’s note: I do not agree on this very point – I will explain my theory later). In some patients, the pain vanished over the next 24 hours; others needed to attend a few group sessions in which these mechanics were explained more in depth. The results are surprising: after having worked with Sarno, about 85% of his patients reported improvements in their condition, with 44% of them reporting little or no pain.
How could the brain of his patients generate pain? Sarno hypothesized that the brain achieved this result by contracting some blood vessel and generating mild ischemia (deprivation of oxygen) in the target tissues. If you doubt the ability of the brain to alter the size of blood vessels, just think about how we blush after doing something embarrassing: our brain increased the diameter of the blood vessels in our cheeks.

Arthroscopic surgeries of the knee

Sham surgeries are fake interventions where the patient gets transported into the operation room and anesthetized. However, the doctors do not actually perform any surgery – they merely make the patient believe they did. The results have been surprisingly positive: for example, sham surgeries for arthroscopic surgery of an osteoarthritic knee proved to be as effective as real ones. This means that at least some cases of knee pain are not caused by actual body damage, but by the perception of a state of vulnerability.

Herniated discs

In his book “The Mindbody Prescription”, Dr. John Sarno reports a study published in the journal Spine. Doctors made lumbar CT scans of a group of people without lower back pain: they found disc abnormalities, stenosis and other aging changes in 50% of the subjects over 40 years old. Such abnormalities were not causing any pain. Contrast this with the common procedure when a patient tells a doctor he has been suffering from back pain. Often, a scan of his back is ordered; if an abnormality is found, such as a herniated disc, responsibility for the pain is attributed to it. Sometimes, surgeries are even recommended. However, if herniated discs are also present in painless patients, how can we be sure that they are the cause of the pain in patients with back pain? There is a chance, writes Dr. Sarno, that at least in some of the patients with chronic back pain, the herniated disc is not the cause of the pain (or it is the cause of its onset, but not of its persistence). In such cases, psychogenic pain would be the cause.

The sources of pain

There are three sources of pain:
  • Nociceptive pain: this is the pain we generally refer to in everyday speech. It is caused by external harmful factors which excite our nociceptors (the neurons responsible for detecting extreme heat, extreme cold, wounds, impacts, and so on). This kind of localized pain causes a bottom-up inference of a localized vulnerability.
  • Psychosomatic pain: this pain takes place when nothing is wrong in our body and, nevertheless, our brain infers from our present situation that we are generally vulnerable (e.g. to social isolation, lack of resources, etc.). This inference generates stress and a top-down expectation of pain, which is eventually manifested in the most admissible body location.
  • Psychogenic pain: technically, this is a concurrence of the two previous cases. However, I listed it as a separate occurrence to highlight the specific process. As with psychosomatic pain, our brain expects a body part to manifest pain. Unlike purely psychosomatic pain, this top-down expectation triggers some physiological processes (such as mild ischemia – a lack of oxygen delivered to muscles[6]) which in turn trigger nociceptive pain. In this case, there is something unusual with our body, but only because signals from our brain cause the unusual condition in our tissues.
Some might be skeptical that our brain indeed can generate changes in our body which in turn provoke pain. But just think about how our cheeks turn red when we are embarrassed: our brain can very easily initiate such physiological processes.

Chronic pain

Many theories on chronic pain propose that its onset is triggered by an episode of extreme stress. I believe that this is partially true. My hypothesis is that stress is not the trigger of chronic pain, but the enabler. There would be little reason for pain to persist if the vulnerability (the stress) is gone. Rather, it makes a lot of sense that pain persists as long as the vulnerability is there, and that it goes away once the vulnerability is gone.

Placebos

Professor Nicholas Humphrey tells a story about placebos and hamsters. “Suppose a hamster is injected with bacteria which make it sick – but in one case the hamster is in an artificial day/night cycle that suggests it’s summer; in the other case, it’s in a cycle that suggests it’s winter. If the hamster is tricked into thinking it is summer, it throws everything it has got against the infection and recovers completely. If it thinks it is winter, then it just mounts a holding operation, as if it is waiting until it knows it’s safe to mount a full-scale response. […] In winter, we are cautious about deploying our immune resources. That’s why a cold lasts much longer in winter than it does in summer. […] Placebos work because they suggest to people that the picture is rosier than it really is. […] Placebos give people fake information that it’s safe to cure them.”
The purpose of pain is to modify our behavior in such a way that we reduce our exposure to the vulnerability causing the pain. This might include finding a solution (e.g. treating a wound), removing ourselves from the potential source of harm (e.g. taking our finger away from a hot pot) and putting ourselves in a situation where it is safe to heal (e.g. staying in bed). Rory Sutherland said: “[Placebos are signals that] now it’s a really good time to invest big in getting better.” Healing is often a costly process. Knowing whether now is the time to address the root source of harm is an important ability. Our body has to commit resources (energy, nutrition, attention, time, antibodies, etc.) which would otherwise be better used elsewhere. Here are some examples of situations where it is not a good idea to heal:
  • A wolf bites my hand. Instead of treating the wound, I better fight the animal or flee.
  • The harvest is late and we are suffering from malnutrition. Instead of lying in bed to reduce energy consumption, we should work in the field to ensure that we will have the maximum number of vegetables once they are ready.
  • I sprained a leg muscle during a long hike in the mountains. The optimal reaction is to keep using the muscle, although with caution, to reach home, where it will be safe to rest and heal.
Our brain evolved the ability to infer whether now is a good time to commit the resources required for healing, or to persist in our default behavior unmodified by pain. Many factors, stress being the most important one, can influence this inference. In particular, stress (which is a signal of generalized vulnerability) might suggest to our brain that now is not the time to heal, and that it is appropriate to keep feeling pain: a signal that we are not safe yet and a reaction is needed.
Placebos work by suggesting to us that we are safe. If we are safe, there is no longer any need to save resources for fleeing or for addressing the external source of harm. Instead, we can commit them to healing. 

Behavioral placebos

In the previous paragraphs, I said that placebos are suggestions that we are safe and can commit resources to heal. They are permissions to heal.
There is a second definition of placebo, which is more interesting from a behavioral point of view: Placebos as permission to change. They are a narrative we can tell ourselves to justify a change in our behavior.
Here is a story to use as an example: Elbert always wanted to dress in a more elegant way; however, he never did so, as it “wouldn’t fit him”. He already had some suits in the wardrobe, and he liked to use them during weddings and other formal events. However, he could not get himself to wear them on any other occasion. He feared that others would ask questions such as “why did you start dressing elegantly?” and “why didn’t you do it before?”. These thoughts prevented him from dressing up for a long time. One day, he received permission to change: his wife gave him a suit as a gift. Finally, he had a coherent narrative to justify wearing one on informal occasions. It would not be his choice: it would be his wife’s.
Comedians not only know a large number of funny jokes; they are also very good at giving us permission to laugh. Similarly, a lady might perceive a serenade differently based on whether it is performed by a cool, handsome guy or by a shy, uncool one. In the former case, it is perceived as a romantic gesture; in the latter, as a creepy one.
Rory Sutherland said: “Trumpets and marching are bravery placebos”. Placebos allow us to be confident. (As a side note: if being confident is a good thing, why aren’t we all always confident? It is because being confident isn’t always a good thing. Often, it is a bad idea to be confident when there are no reasons to be confident. For example, it can lead to being perceived as arrogant or as a bully, and lead to shame and to being ostracized. This is why our brain had to evolve the ability to infer from the situation at hand when to be confident and when not to. Lack of confidence, like all bad feelings and emotions, has an overall beneficial purpose or is the necessary byproduct of something beneficial.)

Conclusion

Physical damage is only one of the causes of pain.
If you suffer from chronic pain with no clear physiological cause or with a physiological cause which doesn’t seem to heal, consider the possibility that your unconscious self might feel so threatened that it believes that pain is an appropriate reaction.
In that case, two approaches might be beneficial: placebos, and taking care of those sources of stress which are making your unconscious self feel vulnerable.

This Common Medication Could Save You From Severe COVID, New Study Says 08-02





Every day that you don't become infected with COVID-19 buys experts time to discover the treatments that could someday save your life. 

In fact, one recent study out of Canada, which has yet to be peer reviewed, found that an anti-inflammatory medication already on the market as a treatment for gout could greatly reduce the rate of hospitalization and death in COVID-19 cases. 

According to the researchers out of Montreal Heart Institute, colchicine is cheap, orally administered, and has few known side effects. Read on to learn more about this promising treatment, and for other essential COVID news, check out If You Take These OTC Meds, You Have to Stop Before Getting the Vaccine.

The researchers assembled a group of COVID patients with mild illness and at least one underlying condition such as heart disease or diabetes. Half of the group received colchicine and the other half received a placebo for 30 days. Ultimately, they found that the risk of "death or hospitalization due to COVID-19 infection in the 30 days following randomization was lower among the patients who were randomly assigned to receive colchicine than among those who received placebo." Additionally, they reported that the colchicine group had fewer cases of pneumonia as well as a reduced need for supplemental oxygen.

The results were even more pronounced when they controlled for subjects who were diagnosed with COVID via PCR test. "When the 93 percent of patients who had a formal diagnosis of COVID-19 are considered, the benefit of colchicine [as defined by reduced rates of hospitalization and death] was more marked (25 percent) and statistically significant," the researchers wrote.

While more research is needed, this could have major implications for those with underlying health conditions. "Given that colchicine is inexpensive, taken by mouth, was generally safe in this study, and does not generally need lab monitoring during use, it shows potential as the first oral drug to treat COVID-19 in the outpatient setting," the researchers said. Not only would this be beneficial to the outcomes of individual patients, it would also free up hospital beds and conserve resources for the most severe COVID cases. 


Wondering what other medications may improve your COVID outcome? Read on for more promising treatments that are currently under review, and for essential tips on avoiding COVID, check out These 3 Things Could Prevent Almost All COVID Cases, Study Finds. 


Xlear Sinus Care


According to a study posted on Dec. 21, which has yet to be peer-reviewed, researchers found that an over-the-counter nasal spray known as Xlear Sinus Care may help neutralize COVID in nasal cavities. Though the spray has so far only been tested in vitro, it was able to greatly reduce the amount of active COVID virus after 25 minutes.

Ingredients of the spray include xylitol—a chemical compound often used as a sweetener—as well as 0.2 percent grapefruit seed extract (GSE) and 0.85 percent saline. "Combination therapy with GSE and xylitol may prevent spread of viral respiratory infections not just for SARS-CoV-2 but also for future H1N1 or other viral epidemics. GSE significantly reduces the viral load while xylitol prevents the virus attachment to the core protein on the cell wall," the study authors wrote. And for more on what can reduce your COVID risk, check out These 3 Vitamins Could Save You From Severe COVID, Study Finds.

Remdesivir


According to the Mayo Clinic, the FDA has approved the antiviral drug Remdesivir to treat COVID-19 in adults and children who are 12 years of age and older. Given intravenously, this drug is used to shorten the length of infection in those who have been hospitalized with COVID-19.

The National Institutes of Health (NIH) explains that the drug has "demonstrated in vitro activity against SARS-CoV-2" and helped to lower the rate of lung damage in COVID-infected rhesus macaque monkeys. And for more on severe coronavirus cases, check out If You've Done This, You're Twice as Likely to Develop Severe COVID.

Monoclonal antibody treatments


Monoclonal antibody treatments such as Regeneron work by mimicking our own immune response to a COVID threat. According to BBC News, "antibodies physically stick to the coronavirus so they can't get inside the body's cells and they make the virus more 'visible' to the rest of the immune system."

While Regeneron was one of the earliest treatments approved to fight existing COVID cases, researchers have more recently discovered that the drug may work preventatively—meaning before individuals are exposed to the new coronavirus. In one new study conducted by UVA Health, the researchers found that when 186 individuals received the antibody cocktail and were subsequently exposed to COVID, none developed symptomatic cases of COVID. While Regeneron is unlikely to provide long-term preventative protection, it could be beneficial to those who face elevated risk of infection for a short period of time—for example, an individual who is taking care of an infected relative. And for more COVID news delivered right to your inbox, sign up for our daily newsletter.

Aspirin


New research shows that those who take a daily dose of aspirin may experience better COVID outcomes than those who do not. An October study published in the journal Anesthesia and Analgesia looked at the medical records of more than 400 coronavirus patients who were hospitalized from March to July as a result of the virus. They found that over 23 percent of those patients were taking a daily low dose of aspirin either shortly before or soon after being admitted to the hospital to manage cardiovascular disease.

Of those hospitalized COVID patients, those who took a daily low dose of aspirin were 43 percent less likely to be put in the intensive care unit (ICU) and 44 percent less likely to be placed on a ventilator. Those same patients were also 47 percent less likely to die from COVID than the hospitalized patients who were not taking aspirin daily. And to avoid contracting COVID in the first place, beware that This Is Where You're Most Likely to Catch COVID Now, New Study Says.


View at the original source 


Please also read  

These 3 Things Could Prevent Almost All COVID Cases, Study Finds






These 3 Things Could Prevent Almost All COVID Cases, Study Finds


These simple measures could significantly slow the spread of the virus, according to experts.


Specific public health measures may be as effective at fighting COVID as a vaccine, a new study reports. The research, published in the Annals of Internal Medicine on Jan. 13, reveals that three specific practices combined can reduce your likelihood of spreading or catching COVID by up to 96 percent. 

Read on to discover what can help slash your COVID risk, and for more ways to stay healthy, check out Doing This to Your Mask Could Keep You Even Safer From COVID, Experts Say.




In a computer-simulated college campus setting, researchers at Case Western Reserve University found that practicing social distancing and implementing mandatory mask use could prevent 87 percent of COVID infections on college campuses. However, combining social distancing and mandatory mask use with regular COVID testing could prevent between 92 and 96 percent of new COVID cases, according to the study's authors—rates similar to the efficacy of the Moderna and Pfizer vaccines currently available in the U.S.
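As a rough back-of-the-envelope check on the percentages above (illustrative only: the 10,000-student campus size is an assumed number, not one from the study), the expected infection counts under each scenario work out as follows:

```python
# Illustrative sketch: apply the article's reported reductions to an
# assumed campus of 10,000 students. The 75% no-measures attack rate and
# the 87% / 96% reductions are the figures quoted above; the campus size
# is made up for the example.

STUDENTS = 10_000
BASELINE_ATTACK_RATE = 0.75  # share of students infected with no measures

scenarios = {
    "no measures": 0.00,
    "masks + distancing": 0.87,
    "masks + distancing + testing": 0.96,
}

baseline_cases = STUDENTS * BASELINE_ATTACK_RATE
for name, reduction in scenarios.items():
    cases = baseline_cases * (1 - reduction)
    print(f"{name}: ~{cases:,.0f} expected infections")
```

With these assumptions, masking and distancing cut roughly 7,500 expected cases down to under 1,000, and adding regular testing brings the figure to around 300.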

"It is clear that two common non-medical strategies are very effective and inexpensive—and allow for some in-person instruction," explained Pooyan Kazemian, the study's co-senior author and an assistant professor of operations at the Weatherhead School of Management at Case Western Reserve. Read on to discover just how effective a variety of other prevention methods would be, and for more insight into where COVID is spreading, check out This Is How Bad the COVID Outbreak Is in Your State. 

Social distancing alone would only reduce COVID by a minimal amount.


While social distancing may be an effective means of reducing infection when combined with masks, if it's implemented alone, it won't do much, the study's authors found. In fact, "minimal social distancing" would only reduce infection rates by 16 percent. And for more on staying safe, check out If You're Not Doing This, Your Mask Won't Protect You, Study Says.



Though it may seem like an effective means of lowering the spread of COVID among students, researchers found that online-only instruction actually meant more students ended up contracting COVID than if they were doing in-person instruction while taking preventative measures. The study's researchers found that switching to entirely online instruction would lower COVID rates on campus by just 63 percent, versus the 87 percent reduction accomplished through masking and distancing. And for the latest COVID news delivered straight to your inbox, sign up for our daily newsletter.

Without any public health measures in place, the majority of students would catch COVID.


In the absence of social distancing and mandatory mask-wearing, COVID would spread like wildfire, the study's authors found. According to their research, close to 75 percent of students and close to 17 percent of faculty would be infected if neither measure was abided by. And for more on the latest with the vaccine, check out If You Take These OTC Meds, You Have to Stop Before Getting the Vaccine.

Mask wearing and practicing social distancing are the lowest cost options for preventing COVID.


Researchers found that the combination of mandatory mask use and social distancing could be done at a moderate cost. According to the study's model, the two measures would result in an associated cost of $170 per averted infection. And for the latest on the COVID variants recently discovered in the U.S., see why Dr. Fauci Says This One New COVID Strain Is "Disturbing." 

Adding testing to the mix would increase the cost dramatically.


While the significant jump in efficacy associated with adding testing to on-campus COVID protocols may be appealing, including regular testing raises the cost of keeping students safe significantly. Depending on how often students are tested, the cost of combining all three measures could range between $2,000 and $17,000 for every case of COVID they prevent, researchers found. And for more insight into COVID testing, find out why The CDC Just Made This COVID Precaution Mandatory. 
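The dollar figures above are "cost per averted infection": total spending on a measure divided by the number of infections it prevents. A minimal sketch of that arithmetic, with made-up spending and case totals chosen only for illustration:

```python
# Hypothetical arithmetic for the cost-per-averted-infection metric.
# The formula (total cost / infections averted) follows the article's
# framing; the specific dollar and case numbers below are assumptions.

def cost_per_averted_infection(total_cost: float,
                               baseline_cases: float,
                               cases_with_measures: float) -> float:
    """Dollars spent per infection prevented by a set of measures."""
    averted = baseline_cases - cases_with_measures
    if averted <= 0:
        raise ValueError("the measures averted no infections")
    return total_cost / averted

# Example with assumed totals: $1.1M spent, 7,500 baseline cases,
# 975 cases remaining once masking and distancing are in place.
print(round(cost_per_averted_infection(1_100_000, 7_500, 975)))  # ~169 dollars
```

Plugging in a program's actual budget and the simulated case counts for each scenario reproduces the study's comparison between the roughly $170 figure and the $2,000-$17,000 range once testing costs are included.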

View at the original source 

Please also read   

This Common Medication Could Save You From Severe COVID, New Study Says 08-02


Hearing Acrobatics 02-10


 

Dynamic, delicate connection between protein filaments enables hearing. 




The sense of hearing is, quite literally, a molecular tightrope act. Turns out, it involves acrobatics as well. 

In a paper published today in Nature Communications, researchers from the Wyss Institute for Biologically Inspired Engineering at Harvard University, Harvard Medical School (HMS), and Boston Children’s Hospital show that a dynamic and delicate connection between two pairs of diminutive protein filaments plays a central role in hearing.

The tension held by these filaments, together called a tip link, is essential for the activation of sensory cells in the inner ear. The team’s analyses reveal that the filaments, which are joined end-to-end, work together like trapeze artists holding hands. Their grasp on each other can be disrupted, by a loud noise, for example. But with a two-handed grip, they can quickly reconnect when one hand slips.

The findings present a new understanding of the molecular underpinnings of hearing, as well as the sense of balance, which arises from similar processes in the inner ear. Disorders of deafness and balance have been linked to mutations in tip links, and the study results could lead to new therapeutic strategies for such disorders, according to the authors.

“This tiny apparatus, made of less than a dozen proteins, is what helps change sound from a mechanical stimulus into an electrical signal that the brain can decipher,” said co-corresponding author David Corey, Ph.D., the Bertarelli Professor of Translational Medical Science at HMS. “Understanding how these proteins work provides insights into the secrets of the sensation of sound.”

The dynamic connection between the filaments may also function as a circuit breaker that protects other cellular components, according to the researchers.

“I think our study gives us a sense of awe for how perfectly engineered this system in the ear is,” said co-corresponding author Wesley Wong, Ph.D., an Associate Professor at the Wyss Institute, HMS, and Boston Children’s. “It maintains a delicate balance between being just strong enough to carry out its function but weak enough to break to potentially preserve the function of other elements that can’t be as easily reformed.” 

Decoding the handshake

For hearing to occur, cells must detect and translate pressure waves in the air into bioelectrical signals. This task falls upon hair cells, the sensory cells of the inner ear. Protruding from these cells are bundles of hair-like structures, which bend back and forth as pressure waves move through the inner ear.

Tip link filaments physically connect each hair to another and are anchored onto specialized ion channels. As the bundle moves, the tension of the tip links changes, opening and closing the channels like a gate to allow electric current to enter the cell. In this way, tip links initiate the bioelectrical signals that the brain ultimately processes as sound.

In previous studies, Corey and colleagues explored the composition of tip links and identified the precise atomic structure of the bond between the two protein filaments. Intriguingly, this bond was evocative of a molecular handshake, according to the authors.

In the current study, Corey, Wong, and the team set out to understand the nature of this handshake. To do so, they applied single-molecule force spectroscopy, a technique that often uses optical tweezers—highly focused laser beams that can hold extremely small objects and move them by distances as short as a billionth of a meter.

The researchers, led by study first authors Eric Mulhall and Andrew Ward, Ph.D., both research fellows in neurobiology in the Blavatnik Institute at HMS, coated microscopic glass beads with strands of either protocadherin-15 or cadherin-23, the two proteins that make up the tip link. Using optical tweezers, they moved beads close to each other until the protein strands stuck together end to end and then measured the forces needed to pull the bonds apart. 

Stronger than the sum

Each tip link is made up of two strands of each protein. The team found that the strength of this double-stranded bond far surpassed the strength of the bond between individual strands of either protein. Under low tension, a double-stranded bond lasted ten times longer than a single-stranded bond before breaking.

This increased strength appears to be due to the dynamic nature of the connection, according to the authors. Rather than acting as a simple static rope, the filaments detach and reattach to each other within tenths of a second. A force may break one pair of strands apart, but the other pair can remain connected long enough for the broken pair to rejoin.

At extremely high forces, however, the double-stranded bond breaks rapidly. This feature may help to prevent catastrophic damage to other components of the hair cell, the authors said.

“If the tip link were super strong, then when exposed to a very loud sound it might rip the whole complex out of the cell membrane, which would be hard to recover from,” said Wong. “The ability to break with loud sounds is analogous to a mechanical circuit breaker,” he added. “This use of multiple weak bonds to form a tunable biological circuit breaker could potentially be very interesting for synthetically engineered systems.”

Surprisingly, the team found that under resting tension, each tip link lasts only around eight seconds before it breaks. Their analyses, coupled with evidence from other studies, suggest that new tip links can form rapidly from other strands of protein nearby. Together, the results support a new paradigm of highly dynamic tip link formation and rupture that both enables and protects hearing.
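The "trapeze artists holding hands" picture can be illustrated with a toy stochastic simulation (this is not the authors' model; the breaking and rebinding rates below are arbitrary assumptions): a link with two strands that can individually break and re-form survives several times longer than a single strand, because the intact partner holds the connection while the broken one reattaches.

```python
import random

# Toy Gillespie-style simulation of a link with n parallel bonds.
# Each intact bond breaks at rate K_OFF; a broken bond reattaches at
# rate K_ON while at least one partner still holds. Both rates are
# assumed values, not measurements from the study.

K_OFF = 1.0   # breaking rate per bond (1/s, assumed)
K_ON = 10.0   # reattachment rate per broken bond (1/s, assumed)

def link_lifetime(n_bonds: int, rng: random.Random) -> float:
    """Time until every bond is broken at once (link fully ruptured)."""
    intact, t = n_bonds, 0.0
    while intact > 0:
        rate_break = intact * K_OFF
        rate_rebind = (n_bonds - intact) * K_ON
        total = rate_break + rate_rebind
        t += rng.expovariate(total)       # waiting time to the next event
        if rng.random() < rate_break / total:
            intact -= 1                   # one strand lets go
        else:
            intact += 1                   # a broken strand reattaches
    return t

rng = random.Random(0)
single = sum(link_lifetime(1, rng) for _ in range(2000)) / 2000
double = sum(link_lifetime(2, rng) for _ in range(2000)) / 2000
print(f"mean lifetime, single-stranded link: {single:.2f} s")
print(f"mean lifetime, double-stranded link: {double:.2f} s")
```

With these rates the exact expected lifetimes are 1 s for the single strand and 6.5 s for the double strand, so the simulation shows a several-fold gap, in the spirit of the tenfold difference the team measured.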

It’s hard to fix something if you don’t really know what’s broken, and we are optimistic that a better understanding can help lead to new solutions.

DAVID COREY

The team also looked at mutations to protocadherin-15 that are linked to Usher syndrome, a rare hereditary disorder of deafness and blindness. Their experiments suggest that some of these mutations can greatly weaken the bond between the tip link filaments. This may be why the disorder leads to deafness, and further mechanistic understanding of this process could lead to new therapeutic approaches, the authors said.

“It’s hard to fix something if you don’t really know what’s broken, and we are optimistic that a better understanding can help lead to new solutions,” Corey said.

In addition, the new findings may help inform study in other areas of the body.

“We have many different mechanical senses besides hearing, such as touch, the sensation of blood pressure, and certain types of pain,” Corey added. “We understand hearing in more molecular detail than any of the others—knowledge that can help us probe the workings of other mechanical senses.”

Additional authors of the study include Wyss Research Scientist Darren Yang, Ph.D. and former HMS graduate student Mounir Koussa, Ph.D.

The work was supported by the National Institutes of Health. 

View at the original source 




The importance of building a Personal Brand 02/10



 While marketers and salespeople know well the importance of personal branding, not everyone will have considered the benefits. In the age of social media, the importance of personal branding is gaining ground. This infographic offers some clear advice and steps for building your own personal brand.

Every one of us is unique. We all have our own individual skills, training, talents and experience, but do we share them effectively with potential employers, business partners and customers? Whether or not you create a personal brand, one exists. The only way you can ensure that yours correctly conveys your truth is to take an active interest in building it. You’ll find plenty of advice on how to build your brand if you just care to look.

Effective personal branding can help to reduce confusion and ensure that you show your target market who you are, which helps build a relationship of trust and confidence in your skills. Personal branding is your way of differentiating your skill set from that of your competitors. Personal branding is about communicating your values, goals, and beliefs.

Your brand must be consistent across social media platforms and should show your best attributes. These days, it is rare for an employer not to visit your sites before settling on a candidate so let those sites speak for you.

Jeff Bezos described the personal brand as “what people say about you when you’re not in the room.” So, put your best foot forward and ensure that you’re seen in the best possible light by actively building your brand.


There are many benefits of a powerful personal brand, from possessing a greater influence and credibility within your community, to having a constant stream of customers and business opportunities.

But just how does one become so powerful? Take a look at this infographic from Feldman Creative which gives you a comprehensive A to Z guide to building your personal brand.

Do you trade as yourself rather than as a company? Are you looking to build your personal brand rather than an organisation's?

Whilst there are many similarities between marketing yourself and a company there are also many things that should be approached differently.

For help building your personal brand take a look at this infographic from Referral Candy which gives you 30 tips direct from the experts. 


View enlarged infographics













LinkedIn Stats and Nostalgia 02-14


If you are an old timer on social media like me, and a social media buff, especially a hyperactive LinkedIn member, you will enjoy this piece of nostalgia from the past (2013).

When you talk of LinkedIn, it is always with respect, or even reverence, that you talk about it. When I joined LinkedIn in 2008, it had just around 33 million members, with India accounting for roughly 10 percent, approximately 3 million members.

That was the time when Jeff Weiner took over as the CEO. The transformation of LinkedIn since then is all too familiar and well known to merit repetition here. The next evolution happened in June 2020, when Jeff Weiner moved from the position of CEO to Executive Chairman.

Ryan Roslansky, who took over in June 2020, has maintained a seamless transition. The seven months since then don't give you any sense of a leadership change at LinkedIn.

LinkedIn's growth from 2008 to the present gives me the feeling of a child growing up in my presence. The rearing process of LinkedIn from 2008 gives me a feeling of joy and possessiveness, the same that I would attach to my children.

I can proudly say that I learned social media posting etiquette and discipline at LinkedIn.

Please go through the interesting infographics.

Thank you.


The infographic below is from a nostalgic collection from 2013.

Founded in 2003, LinkedIn has over 706 million members, 50 million listed companies, and is available in 24 languages, with members in 200 countries. 43 percent of LinkedIn users are female and 57 percent are male. It is the largest professional network.

Top 45 countries by number of LinkedIn members

Rank | Country | Members
1 | United States | 171,000,000+
2 | India | 69,000,000+
3 | China | 51,000,000+
4 | Brazil | 45,000,000+
5 | United Kingdom | 29,000,000+
6 | France | 20,000,000+
7 | Canada | 17,000,000+
8 | Indonesia | 16,000,000+
9 | Mexico | 15,000,000+
10 | Italy | 14,000,000+
11 | Spain | 13,000,000+
12 | Australia | 11,000,000+
13 | Germany | 10,400,000+
14 | Turkey | 9,000,000+
15 | Netherlands | 8,330,000+
16 | The Philippines | 8,000,000+
17 | Colombia | 8,000,000+
18 | Argentina | 7,260,000+
19 | South Africa | 7,000,000+
20 | Chile | 5,000,000+
21 | Malaysia | 4,470,000+
22 | Nigeria | 3,910,000+
23 | UAE | 3,710,000+
24 | Egypt | 3,550,000+
25 | Belgium | 3,500,000+
26 | Sweden | 3,430,000+
27 | Saudi Arabia | 3,360,000+
28 | Poland | 3,170,000+
29 | Portugal | 2,910,000+
30 | Switzerland | 2,630,000+
31 | South Korea | 2,370,000+
32 | Denmark | 2,300,000+
33 | Romania | 2,280,000+
34 | Singapore | 2,260,000+
35 | Japan | 2,140,000+
36 | Taiwan | 2,050,000+
37 | Ireland | 1,740,000+
38 | Kenya | 1,740,000+
39 | New Zealand | 1,700,000+
40 | Israel | 1,670,000+
41 | Norway | 1,660,000+
42 | Hong Kong | 1,530,000+
43 | Czech Republic | 1,330,000+
44 | Austria | 1,160,000+
45 | Finland | 1,030,000+
User data was calculated using LinkedIn search data and official LinkedIn resources on its About page.

Summary of LinkedIn Statistics 2020

  1. Over 706 million total members worldwide
  2. Over 72% of total members are outside of the United States.
  3. 55 job applications submitted every second
  4. 7 seconds between every Linkedin hire
  5. 50 million companies are listed on Linkedin
  6. 189 million+ members in North America
  7. 160 million+ members in Europe
  8. 190 million+ members in Asia
  9. 104 million+ members in Latin America
  10. 61 million+ members in the Middle East and Africa
  11. 20 million open jobs listed
  12. 90k schools listed
  13. Linkedin has over 33 offices and 16,000 employees
  14. Over 11 million+ C-level exec members globally
  15. 5.4 million+ small business owner members globally
  16. 6.6 million+ IT decision-maker members globally

The search trend for LinkedIn has more than doubled over the past 10 years, according to Google Trends.

LinkedIn Financials and Info

1. Microsoft paid $26.2 billion at $196 per share for LinkedIn in June 2016.

2. Linkedin has had a 20% year over year growth in FY20.

3. Linkedin generated over 6.7 billion dollars in revenue in 2019

4. Linkedin has tiered pricing for its plans

  • Premium Career $29.99 minimum per month
  • Premium Business $47.99 minimum per month
  • Sales Navigator $64.99 minimum per month
  • Recruiter Lite $99.95 minimum per month

LinkedIn Demographic Statistics

1. More than 72% of Linkedin users are from outside the United States.

Outside US Linkedin Members
72%

2. Over 46 million students and recent graduates are on Linkedin

3. According to Statista, 43% of LinkedIn users are female and 57% are male.

4. A breakdown of internet users in the United States who use LinkedIn, based on age, from Statista:

  • 15-25 years old: 16%
  • 26-35 years old: 27%
  • 36-45 years old: 34%
  • 46-55 years old: 37%
  • 56+ years old: 29%

5. 29% of males online and 24% of females online use Linkedin according to the Pew research center.

6. 51% of college graduates use Linkedin which is an increase from previous years.

7. Recent data suggests 11% of people 65 or older use Linkedin

LinkedIn Content Marketing Statistics and Info

1. There are currently over 50 million companies listed on Linkedin making it a great source for generating leads.

2. Over 92% of B2B marketers use LinkedIn to distribute content making it a top source for lead generation.

3. LinkedIn content gets 15x more impressions than job postings, and 57% of those impressions come from mobile, making it a digital marketing powerhouse.

4. LinkedIn sees 9 billion content impressions and over 280 billion feed views annually across its user base. This makes it key for creating brand awareness for most companies.

5. Content shared across LinkedIn was up nearly 50% year-over-year in June 2020. Companies can also share content directly from their company page on Linkedin.

6. LinkedIn had a 4x year-over-year increase in learning hours watched in June 2020, with an 89% increase in live streams since March 2020, making it a great source for lead generation.

7. 46% of social media traffic to your company site comes from LinkedIn.

8. 80% of B2B marketing leads from social media come through LinkedIn.

9. LinkedIn has over 61 million senior-level opinion leaders and 40 million members in decision-making positions, making it a top spot for generating leads.

10. More than 1 million people are publishing articles on Linkedin as influencers creating quality content.

11. Article titles between 40 and 49 characters long perform best, according to OkDork, which analyzed 3,000 blog posts.

12. Having 8 images in your post is correlated with the most likes, views, shares, and comments.

13. How-to articles and list posts tend to perform the best, according to OkDork; they also advise avoiding question posts.

14. Long-form content of 2,000+ words performs the best on LinkedIn.

15. Content for articles should be written at an 11th grade level with a neutral tone.

16. Thursday is the best day of the week to post your article for the maximum number of views.

17. LinkedIn has stated that page updates which include links may see up to a 45% higher follower engagement compared to updates without links.

18. Members of Linkedin are 20x more likely to share a video on LinkedIn than any other post type

19. A Hubspot study reveals LinkedIn generates close to 3 times more conversions than Facebook or Twitter at 277%.

LinkedIn Usage Statistics

1. 99.62% of LinkedIn traffic is organic, according to SimilarWeb.

2. More than 90% of recruiters use Linkedin weekly.

3. Linkedin internal research shows having a profile picture on your personal Linkedin page increases views 14x over profiles with no picture.

4. Over 2 Million small businesses use Linkedin to hire employees

5. Linkedin Traffic by source

  • 71.29% is direct traffic
  • 22.80% is from search
  • 2.51% is from email
  • 2.11% is from referral sites
  • 1.18% is from social
  • 0.10% is from display advertising
Source: SimilarWeb

6. According to Alexa, LinkedIn users spend an average of 10:35 minutes on the site each day with 8.46 Daily Pageviews per visit, and it ranks #58 in global internet traffic and engagement.

7. Data from Ahrefs shows Linkedin has 19,868,318,606 backlinks and 11,458,407 referring domains with a domain rating of 98. When creating a marketing strategy Linkedin should be near the top of the list.

8. Moz database shows that Linkedin ranks for over 15.7 million keywords in Google. Making it a top destination for marketers creating business-related quality content.

LinkedIn Connection Statistics

According to LinkedIn’s social media manager guide, employees on average have 10 times as many connections as their company has followers on LinkedIn.

Data from Linkedin shows the following most connected countries, industries, and job functions by an average number of connections.

The top 5 most connected countries are:

  • UAE with 211 average connections
  • The Netherlands with 188 average connections
  • Singapore with 152 average connections
  • The United Kingdom with 144 average connections
  • Denmark with 143 average connections

Top 5 most connected industries are:

  • Staffing and recruiting with 702 average connections
  • Venture capital and private equity with 423 average connections
  • Human resources with 380 average connections
  • Management consulting with 304 average connections
  • Online media with 303 average connections

The top 5 most connected job functions are:

  • Human resources with 415 average connections
  • Product management with 324 average connections
  • Business development with 283 average connections
  • Marketing with 272 average connections
  • Consulting with 244 average connections

LinkedIn Popularity Compared to Other Social Networks

Recent data shows the following percentages of adults who use each social media platform. While LinkedIn is the top site in the business world, it still lags behind other social media platforms in overall adoption.

  • YouTube at 73%
  • Facebook at 69%
  • Instagram at 37%
  • Pinterest at 28%
  • LinkedIn at 27%
  • Twitter at 22%

Additional data shows demographic usage across LinkedIn, Facebook, and Instagram. LinkedIn is used slightly more by men worldwide than by women, though the gap is small.

Members aged 30-49 are the largest demographic on LinkedIn.

| % of U.S. adults who use each platform | Facebook | Instagram | LinkedIn |
|----------------------------------------|----------|-----------|----------|
| Total                                  | 69%      | 37%       | 27%      |
| Men                                    | 63%      | 31%       | 29%      |
| Women                                  | 75%      | 43%       | 24%      |
| Ages 18-29                             | 79%      | 67%       | 28%      |
| 30-49                                  | 79%      | 47%       | 37%      |
| 50-64                                  | 68%      | 23%       | 24%      |
| 65+                                    | 46%      | 8%        | 11%      |
| White                                  | 70%      | 33%       | 28%      |
| Black                                  | 70%      | 40%       | 24%      |
| Hispanic                               | 69%      | 51%       | 16%      |
| High school or less                    | 61%      | 33%       | 9%       |
| Some college                           | 75%      | 37%       | 26%      |
| College graduate                       | 74%      | 43%       | 51%      |
| Urban                                  | 73%      | 46%       | 33%      |
| Suburban                               | 69%      | 35%       | 30%      |
| Rural                                  | 66%      | 21%       | 10%      |

LinkedIn vs Xing

Xing has more than 18 million members, much smaller than LinkedIn's user base of 706 million. However, Xing's impact on the European market is more prominent: Xing has over 18 million users in the DACH (Germany, Austria, and Switzerland) region, while LinkedIn currently has only 15 million there.

LinkedIn continues to see year-over-year user growth, and with no strong competitors in sight, it will likely continue that trend for years to come.

Reasons You Need More Chocolate In Your Life 02-15


Do you prefer dark chocolate or milk chocolate? If you answered dark, then be happy to know that dark chocolate has numerous health benefits! Dark chocolate contains many nutrients that support the functions of your body and brain.

The higher the percentage of cocoa, the more nutritious your chocolate is, and the more bitter it tastes. That bitterness comes from the cocoa compounds packed into the bar. When you eat dark chocolate, you're providing your body with antioxidants, fiber, iron, and other nutrients that help your body function. Dark chocolate can also assist with skin cell growth, meaning its components can help your skin heal and glow.

Dark chocolate can also help with brain function. It may improve memory, focus, and learning, and it can relieve stress and lift your mood. On top of that, dark chocolate has been shown to improve blood flow to the brain.

So next time you want to treat yourself, pick up a bar of dark chocolate to have a snack, and also feel good about it!  

An infographic accompanies this article; view the enlarged version at the original source.
