Articles on this Page
- 08/13/13--18:57: Valuing Influentials Means More than Just Counting Connections 08-14
- 08/13/13--19:14: How Making Robots Captivates Kids' Imaginations 08-14
- 08/13/13--19:25: Snakes on the brain 08-14
- 08/13/13--19:34: PROJECT-BASED LEARNING From Coverage to "Uncoverage!" 08-14
- 08/15/13--04:00: Organs-on-chips evaluate therapies for lethal radiation exposure 08-15
- 08/15/13--04:10: Are VCs losing interest in social? This study says yes 08-15
- 08/15/13--06:32: The idea maze 08-15
- 08/15/13--06:48: The bubble around t...
- 08/15/13--07:01: THE SHOCKING STATS ABOUT WHO'S REALLY STARTING COMPANIES IN AMERICA 08-15
- 08/15/13--10:31: Sustainability? Don’t Go It Alone 08-15
- 08/15/13--22:35: Indian Transnationals Expected to Increase Their Global Footprint 08-16
- 08/15/13--23:08: Consumer demand for gold up 53% in Q2 2013 led by strong growth in China and India 08-16
- Consumer demand in China continued to show strong growth, totalling 276t in the second quarter, a rise of 87% compared to the same quarter last year, as investors used the lower gold price to buy in advance of expected future price rises. Jewellery demand in the quarter was 153t, up 54% on the same quarter last year, while bar and coin investment was 123t, up 157% on Q2 2012.
- Consumers in India also showed continued strong appetite for gold, with recent government measures to curb demand having had little impact on the quarter’s figures. Consumer demand was 310t, up 71% on last year. Bar and coin investment rose 116%, while jewellery demand rose by 51%.
- Bar and coin investment globally totalled 508t, a record figure, and a rise of 78% on the same quarter last year.
- Central banks remained committed to gold. Although demand of 71t in Q2 2013 was below the record quarterly figure of 165t purchased the previous year, central banks have now been purchasers of gold for ten consecutive quarters.
- There was a net outflow of 402t from ETFs in the quarter. This was more than counterbalanced by inflows into other forms of investment, such as the record 508t in bars and coins.
- Second quarter gold demand of 856t (US$39bn) was down 12% compared with Q2 2012.
- Demand for jewellery was 576t (US$26.2bn) in the quarter, up 37% on last year. This was the highest figure since Q3 2008, and the highest second quarter figure since Q2 2007.
- The net outflow from ETFs was 402t (-US$18.3bn). However, that was more than offset by bar and coin investment, which saw inflows of 508t (US$23.1bn). Total investment demand, including OTC investment, totalled 257t (US$11.7bn).
- Net central bank purchases totalled 71t (US$3.2bn), 57% down on what was a record-breaking quarter a year ago. Central banks have now been net purchasers of gold for ten consecutive quarters.
- Demand in the technology sector was stable once again, totalling 104t, a rise of 1% on last year.
- Mine production in the quarter was 4% higher than a year ago, at 732t. Recycling fell 21%, leading to a total supply that was 6% lower than a year ago.
- 08/16/13--00:16: ‘Hybrid’ Organizations a Difficult Bet for Entrepreneurs 08-16
- 08/16/13--00:33: Start with yes: Survivorship the LIVESTRONG way 08-16
- 08/16/13--06:00: The New CTO: Chief Transformation Officer 08-16
- 08/16/13--06:10: Procedural Versus Strategic Approaches to Social Media 08-16
- 08/16/13--06:18: How to be an irresistible leader 08-16
- 08/16/13--19:56: Career in Development Communications (Non-Profit Sector) 08-17
- 08/17/13--08:14: Twitter sets new tweets per second record, explains why 143k simultaneous updates didn't make it stutter 08-17
- We were running one of the world’s largest Ruby on Rails installations, and we had pushed it pretty far –– at the time, about 200 engineers were contributing to it and it had gotten Twitter through some explosive growth, both in terms of new users as well as the sheer amount of traffic that it was handling.
- This system was also monolithic: everything we did, from managing raw database and memcache connections through to rendering the site and presenting the public APIs, lived in one codebase. Not only was it increasingly difficult for an engineer to be an expert in how it was put together, but it was also organizationally challenging for us to manage and parallelize our engineering team.
- We had reached the limit of throughput on our storage systems –– we were relying on a MySQL storage system that was temporally sharded and had a single master. That system was having trouble ingesting tweets at the rate that they were showing up, and we were operationally having to create new databases at an ever increasing rate. We were experiencing read and write hot spots throughout our databases.
- We were “throwing machines at the problem” instead of engineering thorough solutions –– our front-end Ruby machines were not handling the number of transactions per second that we thought was reasonable, given their horsepower. From previous experiences, we knew that those machines could do a lot more.
- Finally, from a software standpoint, we found ourselves pushed into an “optimization corner” where we had started to trade off readability and flexibility of the codebase for performance and efficiency.
- We wanted big infrastructure wins in performance, efficiency, and reliability –– we wanted to improve the median latency that users experience on Twitter as well as bring in the outliers to give Twitter's users a uniform experience. We wanted to reduce the number of machines needed to run Twitter by 10x.
- We also wanted to isolate failures across our infrastructure to prevent large outages –– this is especially important as the number of machines we use goes up, because it means that the chance of some machine failing at any given time is higher. Failures are also inevitable, so we wanted to have them happen in a much more controllable manner.
- We wanted cleaner boundaries with “related” logic being in one place –– we felt the downsides of running our particular monolithic codebase, so we wanted to experiment with a loosely coupled services oriented model. Our goal was to encourage the best practices of encapsulation and modularity, but this time at the systems level rather than at the class, module, or package level.
- Most importantly, we wanted to launch features faster. We wanted to be able to run small and empowered engineering teams that could make local decisions and ship user-facing changes, independent of other teams.
- 08/17/13--20:26: Great Leaders Who Make the Mix Work 08-18
Snakes on the brain
Using an unusual example, HGSE's Steven Seidel shows how blending the arts with joyful learning breeds successful teaching
Organs-on-chips evaluate therapies for lethal radiation exposure
Wyss Institute’s goal is to improve America’s ability to respond to nuclear radiation incidents
A team at the Wyss Institute for Biologically Inspired Engineering at Harvard University has received a $5.6 million grant from the United States Food and Drug Administration (FDA) to use its organs-on-chips technology to test human physiological responses to radiation and evaluate drugs designed to counter those effects. The effort will also be supported by a team in the vascular biology program at Children’s Hospital Boston.
THE SHOCKING STATS ABOUT WHO'S REALLY STARTING COMPANIES IN AMERICA
Sustainability? Don’t Go It Alone
At the recent Sustainable Brands conference, one message was clear: individual corporate sustainability efforts aren’t enough to halt climate change. The solution: collaborative partnerships — even between competitors.
Even several very large companies cannot, on their own, get us there. In fact, historically, no big environmental problem — from air and water pollution to acid rain or ozone depletion — has ever been solved by businesses volunteering to do the right thing.
Indian Transnationals Expected to Increase Their Global Footprint
Consumer demand for gold up 53% in Q2 2013 led by strong growth in China and India
‘Hybrid’ Organizations a Difficult Bet for Entrepreneurs
Hybrid organizations combine the social logic of a nonprofit with the commercial logic of a for-profit business, but are very difficult to finance. So why would anyone want to form one? Julie Battilana and Matthew Lee investigate.
“IT’S MUCH HARDER TO GET STARTED AND BE SUCCESSFUL IF YOU DON'T FIT INTO A WELL-DEFINED FORM THAT PEOPLE UNDERSTAND.” —MATTHEW LEE
Start with yes: Survivorship the LIVESTRONG way
At the beginning of a cruise ship getaway with his family, Doug Ulman wanted to be anonymous. Ulman, the president and CEO of the LIVESTRONG Foundation, told everyone with him, “For this week, if anyone asks what I do, I’m a lawyer. I just want a week of vacation.”
The New CTO: Chief Transformation Officer
In a recent article, I suggested that the role of the CIO needs to shift from Chief Information Officer to Chief Innovation Officer, due to the massive, rapid, multiple technology-driven transformations that are occurring today. And, just as the CIO's role needs to change, so too does the CTO's—from Chief Technology Officer to Chief Transformation Officer. This fundamental shift is necessary to elevate the position's contribution and relevance.
Procedural Versus Strategic Approaches to Social Media
By: Kesang Chungyalpa
Twitter sets new tweets per second record, explains why 143k simultaneous updates didn't make it stutter
New Tweets per second record, and how!
New Tweets per second (TPS) record: 143,199 TPS. Typical day: more than 500 million Tweets sent; average 5,700 TPS.
Starting to re-architect
The JVM vs the Ruby VM
First, we evaluated our front-end serving tier across three dimensions: CPU, RAM, and network. Our Ruby-based machinery was being pushed to the limit on the CPU and RAM dimensions –– but we weren’t serving that many requests per machine nor were we coming close to saturating our network bandwidth. Our Rails servers, at the time, had to be effectively single threaded and handle only one request at a time.
In Twitter’s Ruby systems, concurrency is managed at the process level: a single network request is queued up for a process to handle. That process is completely consumed until the network request is fulfilled. Adding to the complexity, architecturally, we were taking Twitter in the direction of having one service compose the responses of other services. Given that the Ruby process is single-threaded, Twitter’s “response time” would be additive and extremely sensitive to the variances in the back-end systems’ latencies. There were a few Ruby options that gave us concurrency; however, there wasn’t one standard way to do it across all the different VM options. The JVM had constructs and primitives that supported concurrency and would let us build a real concurrent programming platform.
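With a single-threaded process, the latencies of composed back-end calls add up; on a concurrent runtime, the total tracks the slowest call instead. A minimal Java sketch of that difference (the three service calls and their 50 ms latencies are invented for illustration; Twitter's actual stack used Scala and Finagle futures, not this API):

```java
import java.util.concurrent.CompletableFuture;

public class FanOut {
    // Hypothetical back-end calls, each taking ~50 ms.
    static String fetchUser()     { sleep(50); return "user"; }
    static String fetchTimeline() { sleep(50); return "timeline"; }
    static String fetchSocial()   { sleep(50); return "social"; }

    static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { throw new RuntimeException(e); }
    }

    public static void main(String[] args) {
        // Single-threaded composition: ~150 ms total, and any back-end
        // slowdown adds directly to the overall response time.
        long t0 = System.nanoTime();
        String sequential = fetchUser() + "," + fetchTimeline() + "," + fetchSocial();
        System.out.printf("sequential: %s (%d ms)%n",
                sequential, (System.nanoTime() - t0) / 1_000_000);

        // Concurrent composition: the three calls overlap, so the total
        // tracks the slowest individual call rather than the sum.
        long t1 = System.nanoTime();
        CompletableFuture<String> u = CompletableFuture.supplyAsync(FanOut::fetchUser);
        CompletableFuture<String> t = CompletableFuture.supplyAsync(FanOut::fetchTimeline);
        CompletableFuture<String> s = CompletableFuture.supplyAsync(FanOut::fetchSocial);
        System.out.printf("concurrent: %s,%s,%s (%d ms)%n",
                u.join(), t.join(), s.join(), (System.nanoTime() - t1) / 1_000_000);
    }
}
```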
The largest architectural change we made was to move from our monolithic Ruby application to one that is more services oriented. We focused first on creating Tweet, timeline, and user services –– our “core nouns”. This move afforded us cleaner abstraction boundaries and team-level ownership and independence. In our monolithic world, we either needed experts who understood the entire codebase or clear owners at the module or class level. Sadly, the codebase was getting too large to have global experts and, in practice, having clear owners at the module or class level wasn’t working.
Even if we broke apart our monolithic application into services, a huge bottleneck that remained was storage. Twitter, at the time, was storing tweets in a single master MySQL database. We had taken the strategy of storing data temporally –– each row in the database was a single tweet, we stored the tweets in order in the database, and when the database filled up we spun up another one and reconfigured the software to start populating the next database.
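The temporal scheme described above can be sketched as a toy model (nothing here is Twitter's actual code; the shard capacity and in-memory shards stand in for real MySQL instances):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class TemporalShards {
    static final int SHARD_CAPACITY = 3;  // tiny, for illustration

    // Newest shard at the head; each shard is an append-only run of tweet ids.
    private final Deque<Deque<Long>> shards = new ArrayDeque<>();

    // Writes always hit the newest shard; when it fills up we "spin up
    // another database" -- which makes the newest shard a write hot spot.
    void store(long tweetId) {
        if (shards.isEmpty() || shards.peekFirst().size() >= SHARD_CAPACITY) {
            shards.addFirst(new ArrayDeque<>());
        }
        shards.peekFirst().addLast(tweetId);
    }

    // Reads walk shards newest-first, so recent shards also take most reads.
    boolean contains(long tweetId) {
        for (Deque<Long> shard : shards) {
            if (shard.contains(tweetId)) return true;
        }
        return false;
    }

    int shardCount() { return shards.size(); }
}
```

Storing seven tweets with a capacity of three yields three shards; each new shard corresponds to the operational step the text describes as reconfiguring the software to start populating the next database.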
Observability and statistics
We’ve traded our fragile monolithic application for a more robust and encapsulated, but also complex, services oriented application. We had to invest in tools to make managing this beast possible. Given the speed with which we were creating new services, we needed to make it incredibly easy to gather data on how well each service was doing. By default, we wanted to make data-driven decisions, so we needed to make it trivial and frictionless to get that data.
A stray code comment here appears to be the remnant of an instrumentation example; a plausible reconstruction (the `stats.timeFuture` call is an assumption based on Twitter's Scala stats conventions, not confirmed by this text):

    stats.timeFuture("request_latency_ms") {
      // dispatch to do work
    }
Finally, as we were putting this all together, we hit two seemingly unrelated snags: launches had to be coordinated across a series of different services, and we didn’t have a place to stage services that ran at “Twitter scale”. We could no longer rely on deployment as the vehicle to get new user-facing code out there, and coordination was going to be required across the application.
Twitter is more performant, efficient and reliable than ever before. We’ve sped up the site incredibly across the 50th (p50) through 99th (p99) percentile distributions and the number of machines involved in serving the site itself has been decreased anywhere from 5x-12x. Over the last six months, Twitter has flirted with four 9s of availability.
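For reference, the p50 and p99 figures above are latency percentiles: p50 is the latency a median request sees, and p99 is the latency that only the slowest 1% of requests exceed. A small sketch of the nearest-rank computation (illustrative only; not Twitter's tooling):

```java
import java.util.Arrays;

public class Percentiles {
    // Nearest-rank percentile: the smallest sample value such that at
    // least q percent of all samples are <= it.
    static double percentile(double[] samples, double q) {
        double[] sorted = samples.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(q / 100.0 * sorted.length);
        return sorted[Math.max(0, rank - 1)];
    }

    public static void main(String[] args) {
        // Hypothetical request latencies: 1 ms, 2 ms, ..., 100 ms.
        double[] latencies = new double[100];
        for (int i = 0; i < 100; i++) latencies[i] = i + 1;
        System.out.println("p50: " + percentile(latencies, 50) + " ms");  // 50.0 ms
        System.out.println("p99: " + percentile(latencies, 99) + " ms");  // 99.0 ms
    }
}
```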