Hiring in the Age of Big Data

28. October 2013

Date: 27-10-2013
Source: BusinessWeek

Wasabi Waiter looks a lot like hundreds of other simple online games. Players acting as sushi servers track the moods of their customers, deliver them dishes that correspond to those emotions, and clear plates while tending to incoming patrons. Unlike most games, though, Wasabi Waiter analyzes every millisecond of player behavior, measuring conscientiousness, emotion recognition, and other attributes that academic studies show correlate with job performance. The game, designed by startup Knack.it, then scores each player’s likelihood of becoming an outstanding employee.

Knack is one of a handful of startups adapting big data metrics to hiring. The companies are pitching online games and questionnaires to corporate recruiters frustrated by the disconnect between a good interview and an ideal employee. Based on records of how star workers responded to the same tests, these services predict whether a candidate will be suited for a particular job. Clients use the tool to help winnow piles of applications. “People are our biggest resource, and right now a lot of them are mismatched,” says Erik Brynjolfsson, an adviser to Knack and director of the Center for Digital Business at the Massachusetts Institute of Technology Sloan School of Management.
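The matching idea described here can be sketched in a few lines: score a candidate’s test results against the average profile of known star employees. This is an illustrative toy, not Knack’s actual method; the trait names, the 0-to-1 scale, and the scoring rule are all invented for the example.

```python
# Toy sketch of test-based screening: compare a candidate's trait scores
# against the mean profile of known high performers. Trait names and
# numbers are invented; real services use far richer models.

def mean_profile(star_employees):
    """Average each trait across the test results of star employees."""
    traits = star_employees[0].keys()
    n = len(star_employees)
    return {t: sum(e[t] for e in star_employees) / n for t in traits}

def match_score(candidate, profile):
    """Similarity in [0, 1]; 1 means identical to the star profile."""
    # Traits are assumed to lie on a 0..1 scale.
    dist = sum(abs(candidate[t] - profile[t]) for t in profile) / len(profile)
    return 1.0 - dist

stars = [
    {"conscientiousness": 0.9, "emotion_recognition": 0.8},
    {"conscientiousness": 0.8, "emotion_recognition": 0.9},
]
profile = mean_profile(stars)
candidate = {"conscientiousness": 0.85, "emotion_recognition": 0.7}
print(round(match_score(candidate, profile), 3))  # prints 0.925
```

Winnowing a pile of applications then amounts to ranking candidates by this score and reviewing the top of the list first.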

More Businesses Want Workers With Math or Science Degrees

21. October 2013

Date: 21-10-2013
Source: The Wall Street Journal

Globalfoundries is seeking more applicants with skills in the STEM fields, and it has embarked on a crash program to train future workers. [Photo: a chip wafer at the company’s plant in Malta, N.Y. Christian Science Monitor/Getty]

MALTA, N.Y.—New York state got an influx of high-tech jobs five years ago when its offer of more than $1 billion of incentives, including cash and tax breaks, persuaded Globalfoundries Inc. to set up a semiconductor plant near Saratoga Lake in this town 25 miles north of Albany.

There has been one hitch: Because it is hard to find enough people with the right technical skills around here, about half of the 2,200 jobs at the plant were filled by people brought in from outside New York, and 11% are foreigners.

In terms of basic math and science skills, “we’re really floundering here in the U.S.,” Mike Russo, Globalfoundries’ director of government relations, said in an interview.

The company has embarked on a crash program with nearby school districts, the State University of New York and other partners to train future workers.

Globalfoundries says average annual base salaries at the plant range from $30,000 for production operators to $90,000 for engineers.

The shortage of highly skilled factory workers in Malta comes amid growing worries about a nationwide failure to produce enough strong graduates in science, technology, engineering and math, the so-called STEM fields.

The New Science of Who Sits Where at Work

9. October 2013

Date: 09-10-2013
Source: The Wall Street Journal

Companies Try to Boost Productivity by Micromanaging Seating Arrangements

Office workers are being treated to a new game: musical chairs.

By shifting employees from desk to desk every few months, scattering those who do the same types of jobs and rethinking which departments to place side by side, companies say they can increase productivity and collaboration.

Proponents say such experiments not only come with a low price tag but can also help a company’s bottom line, even if they leave a few disgruntled workers in their wake.

In recent years, many companies have moved toward open floor plans and unassigned seating, ushering managers out of their offices and clustering workers at communal tables. But some companies—especially small startups and technology businesses—are taking the trend a step further, micromanaging who sits next to whom in an attempt to get more from their employees.

“If I change the [organizational] chart and you stay in the same seat, it doesn’t have very much of an effect,” says Ben Waber, chief executive of Sociometric Solutions, a Boston company that uses sensors to analyze communication patterns in the workplace. “If I keep the org chart the same but change where you sit, it is going to massively change everything.”

Mr. Waber says a worker’s immediate neighbors account for 40% to 60% of every interaction that worker has during the workday, from face-to-face chats to email messages. There is only a 5% to 10% chance employees are interacting with someone two rows away, according to his data, which is culled from companies in the retail, pharmaceutical and finance industries, among others.

The Big Data Conundrum: How to Define It?

4. October 2013

Date: 04-10-2013
Source: Technology Review

Big Data is revolutionizing 21st-century business without anybody knowing what it actually means. Now computer scientists have come up with a definition they hope everyone can agree on.

One of the biggest new ideas in computing is “big data.” There is unanimous agreement that big data is revolutionizing commerce in the 21st century. When it comes to business, big data offers unprecedented insight, improved decision-making, and untapped sources of profit.

And yet ask a chief technology officer to define big data and he or she will stare at the floor. Chances are, you will get as many definitions as the number of people you ask. And that’s a problem for anyone attempting to buy, sell, or use big data services—what exactly is on offer?

Today, Jonathan Stuart Ward and Adam Barker at the University of St Andrews in Scotland take the issue in hand. These guys survey the various definitions offered by the world’s biggest and most influential high-tech organizations. They then attempt to distill from all this noise a definition that everyone can agree on.

Ward and Barker cast their net far and wide, but the results are mixed. Formal definitions are hard to come by; many organizations prefer to give anecdotal examples.

In particular, the notion of “big” is tricky to pin down, not least because a data set that seems large today will almost certainly seem small in the not-too-distant future. Where one organization gives hard figures for what constitutes “big,” another gives a relative definition, implying that big data will always be more than conventional techniques can handle.

Some organizations point out that large data sets are not always complex and small data sets are not always simple. Their point is that the complexity of a data set, not just its size, is an important factor in deciding whether it is “big.”

Here is a summary of the kind of descriptions Ward and Barker discovered from various influential organizations:

1. Gartner. In 2001, a Meta (now Gartner) report noted the increasing size of data, the increasing rate at which it is produced, and the increasing range of formats and representations employed. This report predated the term “big data” but proposed a three-fold definition encompassing the “three Vs”: Volume, Velocity and Variety. This idea has since become popular and sometimes includes a fourth V: Veracity, to cover questions of trust and uncertainty.

2. Oracle. Big data is the derivation of value from traditional relational database-driven business decision making, augmented with new sources of unstructured data.

3. Intel. Big data opportunities emerge in organizations generating a median of 300 terabytes of data a week. The most common forms of data analyzed in this way are business transactions stored in relational databases, followed by documents, e-mail, sensor data, blogs, and social media.

4. Microsoft. “Big data is the term increasingly used to describe the process of applying serious computing power—the latest in machine learning and artificial intelligence—to seriously massive and often highly complex sets of information.”

5. The Method for an Integrated Knowledge Environment open-source project. The MIKE project argues that big data is not a function of the size of a data set but its complexity. Consequently, it is the high degree of permutations and interactions within a data set that defines big data.

6. The National Institute of Standards and Technology. NIST argues that big data is data which “exceed(s) the capacity or capability of current or conventional methods and systems.” In other words, the notion of “big” is relative to the current standard of computation.

A mixed bag if ever there was one.

In addition to the search for definitions, Ward and Barker attempted to better understand the way people use the phrase big data by searching Google Trends to see what words are most commonly associated with it. They say these are: data analytics, Hadoop, NoSQL, Google, IBM, and Oracle.

These guys bravely finish their survey with a definition of their own, in which they attempt to bring together these disparate ideas. Here’s their definition:

“Big data is a term describing the storage and analysis of large and/or complex data sets using a series of techniques including, but not limited to: NoSQL, MapReduce and machine learning.”
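As a concrete anchor for one of the techniques that definition names, here is a minimal MapReduce-style word count in plain Python: a map phase emits (word, 1) pairs, a shuffle groups them by key, and a reduce phase sums each group. This is a single-process sketch of the programming model, not a distributed implementation.

```python
from itertools import groupby
from operator import itemgetter

# Minimal single-process sketch of the MapReduce model: map, shuffle, reduce.

def map_phase(documents):
    """Emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def shuffle(pairs):
    """Group pairs by key, as the framework's shuffle stage would."""
    return groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0))

def reduce_phase(grouped):
    """Sum the counts for each word."""
    return {word: sum(count for _, count in group) for word, group in grouped}

docs = ["big data big ideas", "data beats intuition"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts)  # prints {'beats': 1, 'big': 2, 'data': 2, 'ideas': 1, 'intuition': 1}
```

In a real framework such as Hadoop, the same map and reduce functions would run in parallel across many machines, with the shuffle handled by the infrastructure.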

A game attempt at a worthy goal: a definition that everyone can agree on is certainly overdue.