13. November 2015
Source: The Guardian
Analysts warn that automation is now affecting mental labour as well as physical. So what tasks are vulnerable?
Fear of mass unemployment has been proved wrong as automation makes the economy stronger
The fear that robots will destroy jobs and leave a great mass of people languishing in unemployment is almost as old as automation itself. And yet, from the Luddites onwards, those fears have eventually been proved wrong, and the economy has ended up stronger than before.
But more and more analysts worry that this may be about to change. And on Thursday the Bank of England’s chief economist warned that this wave of automation is threatening skilled roles. The jobs of the middle classes, with their expensive university educations, are now at risk. As a result, a huge number of jobs that were previously thought safe from machine-led disruption are firmly in the firing line. Read the rest of this entry »
11. October 2015
Source: Fortune, January 2015, Ram Charan
Get ready for the most sweeping business change since the Industrial Revolution.
The single greatest instrument of change in today’s business world, and the one that is creating major uncertainties for an ever-growing universe of companies, is the advancement of mathematical algorithms and their related sophisticated software. Never before has so much artificial mental power been available to so many—power to deconstruct and predict patterns and changes in everything from consumer behavior to the maintenance requirements and operating lifetimes of industrial machinery. In combination with other technological factors—including broadband mobility, sensors, and vastly increased data-crunching capacity—algorithms are dramatically changing both the structure of the global economy and the nature of business.
Though still in its infancy, the use of algorithms has already become an engine of creative destruction in the business world, fracturing time-tested business models and implementing dazzling new ones. The effects are most visible so far in retailing, creating new and highly interactive relationships between businesses and their customers, and making it possible for giant corporations to deal with customers as individuals. At Macy’s, for instance, algorithmic technology is helping fuse the online and the in-store experience, enabling a shopper to compare clothes online, try something on at the store, order it online, and return it in person. Algorithms help determine whether to pull inventory from a fulfillment center or a nearby store, while location-based technologies let companies target offers to specific consumers while they are shopping in stores. Read the rest of this entry »
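The article doesn't reveal Macy's actual routing logic, but the decision it describes (pull inventory from a fulfillment center or from a nearby store) can be sketched as a simple speed-and-cost trade-off. All names, days, and costs below are illustrative, not drawn from any real retailer's system:

```python
from dataclasses import dataclass

@dataclass
class Location:
    name: str
    has_stock: bool
    shipping_days: int
    cost: float

def pick_fulfillment_source(candidates):
    """Return the in-stock location that ships fastest, breaking ties on cost.

    Returns None if no candidate has the item in stock.
    """
    in_stock = [c for c in candidates if c.has_stock]
    if not in_stock:
        return None
    return min(in_stock, key=lambda c: (c.shipping_days, c.cost))

warehouse = Location("fulfillment center", True, 3, 4.00)
store = Location("nearby store", True, 1, 6.50)
best = pick_fulfillment_source([warehouse, store])
```

Here the nearby store wins because it can get the item to the customer sooner; a production system would weigh many more signals, such as labor cost, forecast demand, and return rates.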
21. September 2015
Source: The Wall Street Journal
Computers govern how long the microwave heats food or the dryer spins clothes.
Can they learn to form ideas and theories about the world around them as well?
In a particularly memorable episode of CBS’s “The Big Bang Theory,” physicist Sheldon Cooper and neurobiologist Amy Farrah Fowler get into an argument, a game of intellectual one-upmanship that threatens their relationship. Sheldon claims that “a grand unified theory, insofar as it explains everything, will ipso facto explain neurobiology.” Amy counters: “Yes, but if I’m successful, I will be able to map and reproduce your thought process in deriving a grand unified theory and therefore subsume your conclusions under my paradigm.”
The first contention is a familiar one—the second more surprising. But could it be true? Pedro Domingos, a computer scientist at the University of Washington, believes that a version of Amy’s notion is indeed true. All knowledge could be reproduced—and new knowledge produced—by “subsuming” human thought processes. And he thinks computer scientists are well on their way to doing it. Read the rest of this entry »
31. August 2015
Source: Fast Company
GIVEN THE VAST AMOUNTS OF DATA GOOGLE HAS ON US THROUGH OUR SEARCHES, IT’S A WONDER THEY HAVEN’T DONE THIS SOONER.
It’s been the subject of a feature film, a main theme of a best-selling book, a source of endless speculation and analysis (yielding 21 million results on the search “how google hires”), and a holy grail-like quest for some two million hopefuls per year.
It’s the hiring process at Google.
The search giant has been known to deploy quirky recruitment tactics, from banners and billboards emblazoned with a mathematical riddle aimed at enticing engineers to brainteasers about golf balls and school buses. The latter tactics, admitted Google’s head of people operations, Laszlo Bock, were “a complete waste of time,” while the former didn’t net the company any new hires. Read the rest of this entry »
24. January 2015
New technology tools are making adoption by the front line much easier, and that’s accelerating the organizational adaptation needed to produce results.
The world has become excited about big data and advanced analytics not just because the data are big but also because the potential for impact is big. Our colleagues at the McKinsey Global Institute (MGI) caught many people’s attention several years ago when they estimated that retailers exploiting data analytics at scale across their organizations could increase their operating margins by more than 60 percent and that the US healthcare sector could reduce costs by 8 percent through data-analytics efficiency and quality improvements.
Unfortunately, achieving the level of impact MGI foresaw has proved difficult. True, there are successful examples of companies such as Amazon and Google, where data analytics is a foundation of the enterprise. But for most legacy companies, data-analytics success has been limited to a few tests or to narrow slices of the business. Very few have achieved what we would call “big impact through big data,” or impact at scale. For example, we recently assembled a group of analytics leaders from major companies that are quite committed to realizing the potential of big data and advanced analytics. When we asked them what degree of revenue or cost improvement they had achieved through the use of these techniques, three-quarters said it was less than 1 percent. Read the rest of this entry »
11. October 2014
Source: Project Syndicate
Nathan Eagle is the CEO of Jana, a World Economic Forum Technology Pioneer.
BOSTON – Nearly everyone has a digital footprint – the trail of so-called “passive data” that is produced when you engage in any online interaction, such as with branded content on social media, or perform any digital transaction, like purchasing something with a credit card. A few seconds ago, you may have generated passive data by clicking on a link to read this article.
Passive data, as the name suggests, are not generated consciously; they are by-products of our everyday technological existence. As a result, this information – and its intrinsic monetary value – often goes unnoticed by Internet users.
But the potential of passive data is not lost on companies. They recognize that such information, like a raw material, can be mined and used in many different ways. For example, by analyzing users’ browser history, firms can predict what kinds of advertisements they might respond to or what kinds of products they are likely to purchase. Even health-care organizations are getting in on the action, using a community’s purchasing patterns to predict, say, an influenza outbreak.
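As a rough illustration of the kind of inference described above, here is a toy scorer that guesses a user's interests from keywords in visited URLs. The categories, keywords, and threshold are invented for this example; real ad-targeting systems use statistical models over far richer behavioral data:

```python
# Hypothetical interest categories, each tied to URL keywords (made up
# for illustration). A category "fires" when enough visited URLs match.
CATEGORIES = {
    "sports": ["espn", "fitness", "running"],
    "travel": ["flight", "hotel", "airline"],
}

def likely_interests(history, threshold=2):
    """Return categories whose keywords appear in at least `threshold` URLs."""
    scores = {cat: sum(any(kw in url for kw in kws) for url in history)
              for cat, kws in CATEGORIES.items()}
    return [cat for cat, score in scores.items() if score >= threshold]

history = ["espn.com/scores", "runningshoes.example/sale", "news.example"]
interests = likely_interests(history)
```

Two of the three URLs match sports keywords, so only "sports" clears the threshold; this is the same shape of inference, vastly simplified, that lets a firm decide which advertisements a user might respond to.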
Indeed, an entire industry of businesses – which operate rather euphemistically as “data-management platforms” – now captures individual users’ passive data and extracts hundreds of billions of dollars from it. According to the Data-Driven Marketing Institute, the data-mining industry generated $156 billion in revenue in 2012 – roughly $60 for each of the world’s 2.5 billion Internet users. Read the rest of this entry »
9. October 2014
Source: Technology Review
If you’ve ever struggled to make sense of an information firehose, perhaps a 3-D printed model could help.
One of the characteristics of our increasingly information-driven lives is the huge amounts of data being generated about everything from sporting activities and Twitter comments to genetic patterns and disease predictions. These information firehoses are generally known as “big data,” and with them comes the grand challenge of making sense of the material they produce.
That’s no small task. The Twitter stream alone produces some 500 million tweets a day. This has to be filtered, analyzed for interesting trends, and then displayed in a way that humans can make sense of quickly.
It is this last task of data display that Zachary Weber and Vijay Gadepally have taken on at MIT’s Lincoln Laboratory in Lexington, Massachusetts. They say that combining big data with 3-D printing can dramatically improve the way people consume and understand data on a massive scale.
They make their argument using the example of a 3-D printed model of the MIT campus, which they created by measuring the buildings with a laser ranging device. They then used this data to build a 3-D model of the campus, which they printed in translucent plastic using standard 3-D printing techniques.
One advantage of the translucent plastic is that it can be illuminated from beneath with different colors. Indeed, the team used a projector connected to a laptop computer to beam an image onto the model from below. The image above shows the campus colored according to the height of the buildings.
But that’s only the beginning of what they say is possible. To demonstrate, Weber and Gadepally filtered a portion of the Twitter stream to pick out tweets that were geolocated at the MIT campus. They can then use their model to show what kind of content is being generated in different locations on the campus and allow users to slice and dice the data using an interactive screen. “Other demonstrations may include animating Twitter traffic volume as a function of time and space to provide insight into campus patterns of life,” they say.
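The geographic filtering step Weber and Gadepally describe can be approximated with a simple bounding-box test. The coordinates below only roughly bracket the MIT campus and are purely illustrative; real tweet objects carry geolocation in a richer structure:

```python
# Rough, illustrative bounding box around the MIT campus:
# (min_lat, max_lat) and (min_lon, max_lon).
MIT_BOX = {"lat": (42.354, 42.364), "lon": (-71.110, -71.085)}

def on_campus(tweet):
    """True if the tweet's coordinates fall inside the campus bounding box."""
    lat, lon = tweet["lat"], tweet["lon"]
    return (MIT_BOX["lat"][0] <= lat <= MIT_BOX["lat"][1]
            and MIT_BOX["lon"][0] <= lon <= MIT_BOX["lon"][1])

tweets = [
    {"text": "problem set done", "lat": 42.3601, "lon": -71.0942},  # on campus
    {"text": "hello from NYC", "lat": 40.7128, "lon": -74.0060},    # elsewhere
]
campus_tweets = [t for t in tweets if on_campus(t)]
```

With tweets filtered this way, each one can then be mapped to a building in the physical model and its content projected at that location.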
Read the rest of this entry »
1. July 2014
It began as a nagging technical problem that needed solving. Now, it’s driving a market that’s expected to be worth $50.2 billion by 2020.
There are countless open source projects with crazy names in the software world today, but the vast majority of them never make it onto enterprises’ collective radar. Hadoop is an exception of pachydermic proportions.
Named after a child’s toy elephant, Hadoop is now powering big data applications at companies such as Yahoo and Facebook; more than half of the Fortune 50 use it, providers say.
The software’s “refreshingly unique approach to data management is transforming how companies store, process, analyze and share big data,” according to Forrester analyst Mike Gualtieri. “Forrester believes that Hadoop will become must-have infrastructure for large enterprises.”
Globally, the Hadoop market was valued at $1.5 billion in 2012; by 2020, it is expected to reach $50.2 billion.
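A quick back-of-the-envelope check of those two figures, assuming the $50.2 billion number is an eight-year projection from the 2012 baseline:

```python
# Implied compound annual growth rate from $1.5B (2012) to $50.2B (2020).
start_value, end_value = 1.5, 50.2   # billions of dollars
years = 2020 - 2012
cagr = (end_value / start_value) ** (1 / years) - 1  # roughly 0.55
```

That works out to a compound annual growth rate of roughly 55 percent, which underlines just how aggressive the projection is.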
It’s not often a grassroots open source project becomes a de facto standard in industry. So how did it happen?
‘A market that was in desperate need’
“Hadoop was a happy coincidence of a fundamentally differentiated technology, a permissively licensed open source codebase and a market that was in desperate need of a solution for exploding volumes of data,” said RedMonk cofounder and principal analyst Stephen O’Grady. “Its success in that respect is no surprise.” Read the rest of this entry »
5. June 2014
May 13, 2014
It’s no secret that big data offers enormous potential for businesses. Every C-suite on the planet understands the promise. Less understood—and much less put into practice—are the steps that companies must take in order to realize that potential. For all their justifiable enthusiasm about big data, too many businesses risk leaving its vast potential on the table—or, worse, ceding it to competitors.
Big data has brought game-changing shifts to the way data is acquired, analyzed, stored, and used. Solutions can be more flexible, more scalable, and more cost-effective than ever before. Instead of building one-off systems designed to address specific problems for specific business units, companies can create a common platform leveraged in different ways by different parts of the business. And all kinds of data—structured and unstructured, internal and external—can be incorporated.
Yet big data also requires a great deal of change. Businesses will have to rethink how they access and safeguard information, how they interact with consumers holding vital data, and how they leverage new skills and technologies. They’ll have to embrace new partnerships, new organization structures, and even new mind-sets. For many companies, the challenge of big data will seem as outsized as the payoff. But it doesn’t have to be.
In engagements with clients of The Boston Consulting Group, we’ve found it helpful to break down big data into three core components: data usage, the data engine, and the data ecosystem. For each of these areas, two key capabilities have proved essential. (See Exhibit 1.) By developing the resulting six capabilities, today’s businesses can put in place a solid framework for enabling—and succeeding with—big data:
- Data Usage: Identifying Opportunities and Building Trust. Companies must create a culture that encourages experimentation and supports a data-driven ideation process. They need to focus on trust, too—not just building it with consumers but wielding it as a competitive weapon. Businesses that use data in transparent and responsible ways will ultimately have more access to more information than businesses that don’t.
- The Data Engine: Laying the Technical Foundation and Shaping the Organization. Technical platforms that are fast, scalable, and flexible enough to handle different types of applications are critical. So, too, are the skill sets required to build and manage them. In general, these new platforms will prove remarkably cost-effective, using commodity hardware and leveraging cloud-based and open-source technologies. But their all-purpose nature means that they will often be located outside individual business units. It’s crucial, therefore, to link them back to those businesses and their goals, priorities, and expertise. Companies will also need to put the insights they gain from big data to use—embedding them in operational processes, in or near real time.
- The Data Ecosystem: Participating in a Big-Data Ecosystem and Making Relationships Work. Big data is creating opportunities that are often outside a company’s traditional business or markets. Partnerships will be increasingly necessary to obtain required data, expertise, capabilities, or customers. Businesses must be able to identify the right relationships—and successfully maintain them.
In a world where information moves fast, businesses that are quick to see, and pursue, the new ways to work with data are the ones that will get ahead and stay ahead. The following six capabilities will help get them there.
Read the rest of this entry »
21. April 2014
Source: The Economist
“BOLLOCKS”, says a Cambridge professor. “Hubris,” write researchers at Harvard. “Big data is bullshit,” proclaims the chief number-cruncher of Obama’s reelection campaign. A few years ago almost no one had heard of “big data”. Today it’s hard to avoid—and as a result, the digerati love to condemn it. Wired, Time, Harvard Business Review and other publications are falling over themselves to dance on its grave. “Big data: are we making a big mistake?,” asks the Financial Times. “Eight (No, Nine!) Problems with Big Data,” says the New York Times. What explains the big-data backlash?
Big data refers to the idea that society can do things with a large body of data that weren’t possible when working with smaller amounts. The term was originally applied a decade ago to massive datasets from astrophysics, genomics and internet search engines, and to machine-learning systems (for voice-recognition and translation, for example) that only work well when given lots of data to chew on. Now it refers to the application of data-analysis and statistics in new areas, from retailing to human resources. The backlash began in mid-March, prompted by an article in Science by David Lazer and others at Harvard and Northeastern University. It showed that a big-data poster child—Google Flu Trends, a 2009 project which identified flu outbreaks from search queries alone—had overestimated the number of cases for four years running, compared with reported data from the Centers for Disease Control (CDC). This led to a wider attack on the idea of big data. Read the rest of this entry »