What can you learn about entrepreneurship after seven years at technology behemoth Microsoft? Quite a lot, says serial entrepreneur and former Microsoft senior executive Naveen Jain.
Jain, who was a project and group manager at Microsoft from 1989 to 1996, has founded three companies — Intelius, Moon Express and InfoSpace. He and his teams aren’t just creating fluffy apps or silly games. Intelius, founded in 2003, provides information services to consumers and businesses, including background checks and identity theft protection. InfoSpace, launched in 1996, develops meta-search engines that aggregate results from Google, Yahoo and Bing, and display them in one place. And perhaps most intriguingly, Jain founded Moon Express last year with the goal of mining the moon for elements that are rare on Earth.
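A meta-search engine’s core trick, fanning one query out to several engines and merging the ranked results, can be sketched in a few lines of Python. Everything below is illustrative; the engine functions are stand-ins, not real search APIs or InfoSpace’s actual code.

```python
def search_google(query):
    # Stand-in result list of (url, rank) pairs; not a real API call.
    return [("https://example.com/a", 1), ("https://example.com/b", 2)]

def search_yahoo(query):
    return [("https://example.com/b", 1), ("https://example.com/c", 2)]

def search_bing(query):
    return [("https://example.com/a", 1), ("https://example.com/c", 2)]

def metasearch(query, engines):
    """Score each URL by how highly (and how often) it ranks across engines."""
    scores = {}
    for engine in engines:
        for url, rank in engine(query):
            # A first-place result adds 1.0; second place adds 0.5; and so on.
            scores[url] = scores.get(url, 0.0) + 1.0 / rank
    # Best combined score first.
    return sorted(scores, key=scores.get, reverse=True)

results = metasearch("moon mining", [search_google, search_yahoo, search_bing])
```

A URL that ranks well on two engines beats one that ranks well on only one, which is the whole point of aggregating.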
Before founding his companies, Jain, now 52, managed development of some of Microsoft’s flagship products, including MS-DOS and early versions of Windows. He learned many of the lessons he has applied to his startups at Microsoft, where he worked closely with and observed founder Bill Gates in action.
Jain shared with us the top three lessons he learned from Gates about entrepreneurship and explained how they helped him after he founded his own companies.
1. Execute Flawlessly
At Microsoft, Jain says Gates’ early focus was to “out-execute” the company’s competitors. That strategy taught Jain that entrepreneurs should focus on bringing a product to market that’s better than all the other offerings on the block.
“Being a successful entrepreneur is not about breakthrough innovation,” Jain says. “It’s about flawless execution.”
Before Microsoft Word, there was WordPerfect. Before Excel, there was Lotus 1-2-3, developed by Lotus Software, now part of IBM. Before MS-DOS, there was CP/M. In the end, Microsoft beat out all of those products in market share.
Related: Mark Cuban’s 12 Rules for Startups
With Intelius, Jain set out to mirror Microsoft’s achievement. When the company entered the information commerce market nine years ago, it faced about 100 competitors, he says. Today, Jain says, “every company that used to be in business at that time no longer exists.”
From the start, some of Intelius’ six cofounders focused on creating a superior product while the others concentrated on distribution and exclusive advertising relationships to bring users to the service. “It wasn’t that we had some great idea,” Jain says, “but we executed the existing idea well to become the market leader with $150 million in [annual] revenue.”
2. Hire People Who Are Unlike You
While at Microsoft, Jain noticed that Gates surrounded himself with people of diverse backgrounds. “In the early days, the reason for Microsoft’s success was a great vision by Bill Gates,” Jain says, “but at the same time, he had probably one of the best operations people: Jon Shirley.” Shirley served as president from 1983 to 1990.
Jain also considers Steve Ballmer, Microsoft’s current CEO, very different from Gates. “Bill Gates is the technical genius while Ballmer is the marketing guru,” Jain says. “They complemented each other’s style well.”
Intelius followed Gates’ lesson with six entrepreneurs of different backgrounds: three engineers, an operations expert, a product-development chief and Jain as CEO. “We have a tendency to like people who are like us,” Jain says. “But when you are running a company, you have to find people who are unlike you because you want people who are complementary to you.”
Related: What I Learned About Entrepreneurship from Richard Branson
3. Be Agile, But Persistent
Jain considers Gates remarkably adept at deciding which product lines to continue and which to terminate. That judgment, in turn, helps foster a steady but agile business culture.
For example, Microsoft has persisted with such flagship products as Windows, Word and Excel, even though not all of them were instant hits. “I worked on Windows 1.0, 2.0 and Windows/386, and it wasn’t until Windows 3.0 that Microsoft Windows actually caught on,” Jain says. Imagine what a mistake it would have been if Microsoft had stopped producing Windows at version 2.0.
But Microsoft is equally comfortable shutting down product lines that underperform, Jain says. For example, Microsoft launched a user interface called Microsoft Bob in 1995 that initially looked promising. It aimed to humanize computing with a game-like experience, but it flopped.
Related: Lessons Learned from Scary Business Mistakes
The key, Jain advises, is knowing when to keep improving a product that hasn’t quite hit the mark and when to call it quits and devote resources to something more promising. For example, Intelius launched a product that aimed to provide data to private investigators. It attracted a significant number of customers, but the company decided the market was too small at $10 million to $20 million and canned the product.
“It doesn’t matter where you start from,” Jain says. “You constantly believe, persist, modify and change who you are until you find the right marketplace.”
Tuesday, January 31, 2012
Monday, January 30, 2012
UX needs to be off the grid
If you’ve used the mobile social network Path recently, it’s likely that you enjoyed the experience. Path has a sophisticated design, yet it’s easy to use. It sports an attractive red color scheme and the navigation is smooth as silk. It’s a social app and finding friends is easy thanks to Path’s suggestions and its connection to Facebook.
In short, Path has a great user experience. That alone isn’t the deciding factor in whether a tech product takes off; ultimately it comes down to how many people use it, which is particularly important for a social app like Path. Indeed, that’s where Path may yet fail, but the point is that Path has given itself a chance by creating a great user experience. In this post, we outline 5 signs that the tech product or app you’re using has a great UX - and therefore has a shot at being the Next Big Thing.
1. Elegant UI
A great user experience isn’t just about the user interface, but it helps a lot. While I’m not a regular Path user, today I opened it up and browsed for a bit. To like an item on Path, you click a little smiley icon in the top right. If you really, really like an item, you can make it a heart icon. There are three other options: a winky face, a surprised face and a sad face. So Path has cleverly created 5 different types of ‘like’ using subtle but obvious icons. This is something that Facebook hasn’t yet cracked; it only has one style of ‘like’ and many people have argued for a ‘dislike’ option, at the very least.
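As a rough sketch of the design choice, here is one way to model several reaction types instead of a single binary ‘like’. This is an illustration in Python, not Path’s actual implementation; all the names are hypothetical.

```python
from enum import Enum

class Reaction(Enum):
    SMILE = "smile"        # the default "like"
    HEART = "heart"        # really, really like
    WINK = "wink"
    SURPRISE = "surprise"
    SAD = "sad"

# One reaction per (user, item); reacting again replaces the old one.
reactions = {}

def react(user_id, item_id, reaction):
    reactions[(user_id, item_id)] = reaction

react("alice", "photo-1", Reaction.SMILE)
react("alice", "photo-1", Reaction.HEART)  # upgrades the smile to a heart
```

Keeping one slot per user and item is what makes “upgrading” a smiley to a heart feel natural, rather than stacking multiple reactions.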
2. Addictive
A nice design is one thing, but you also need to see value in it. It must either solve a problem for you, or be a pleasurable distraction. Time and time again. In other words, it must be addictive. One of the current trendy services on the Web is Pinterest, an online pinboard that has become an addiction for many. In a text-heavy social Web, Pinterest has nailed the concept of a completely visual user experience. It solves a problem, because it gives you a place to store images around topics - such as the very popular wedding dresses section. It brings you back every day, if you get hooked.
3. Fast Start
The Kindle Fire as a product is not as aesthetically pleasing as the iPad 2. The Fire is rectangular and small, looking a bit like the iPad’s runty little brother. But what the Kindle Fire does far better than the iPad is get the user started - and hooked - straight out of the box. With the iPad, you need to connect to iTunes to get things started, which can often be a time-consuming and awkward experience for newbies. [Update: a commenter noted that iTunes isn’t necessary with iOS5, although you will still need to manually set up your account.] But the Kindle Fire comes pre-loaded with your Amazon profile, which enables most users to start downloading content as soon as they switch the device on for the first time.
Note that the rest of the Kindle Fire’s user experience is not always pleasurable. But the start-up is one part that is.
4. Seamless
With so many Internet-connected devices and screens nowadays, it’s important to have a consistent experience. One recent example of this for me is the online music app Rdio. It only just became available in my country, but I was immediately impressed by the consistent user interface between Rdio’s iPhone app and the desktop app on my computer. Rdio takes that seamlessness a step further though, in allowing you to download whole albums onto your mobile device so that you can listen to them offline. It would’ve been easy for Rdio to get that functionality wrong, for example by enabling download on 3G and giving you a huge cellphone bill. But by default, Rdio only downloads songs onto your mobile phone using WiFi (you can turn on 3G download if you think you can afford it). It’s the little details like that which make a great user experience.
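The WiFi-only default described above is easy to express as a small policy check. The function name and network labels below are hypothetical, for illustration only, and not Rdio’s actual API.

```python
def should_download(network, allow_cellular=False):
    """WiFi syncs are always allowed; cellular only if the user opted in."""
    if network == "wifi":
        return True
    if network == "cellular":
        return allow_cellular  # off by default: no surprise phone bill
    return False  # offline / unknown network: don't attempt a download
```

The safe default does the work here: a user who never touches the setting can never run up a cellular bill by accident.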
5. It Changes You
Arguably the most outstanding tech products are ones that revolutionize the way we do things. The iPhone and iPad are two high profile examples from recent years. Twitter is another. These are products that create a brand new user experience, or change old habits in a good way.
When I asked for examples of a great user experience over on Google+, Chris Brogan commented that FitBit has changed the way he manages his fitness. “The information it gathers is useful,” said Chris, “plus the way it’s displayed to me challenges me to do more with it.”
Having an overall great user experience is difficult to pull off. Some of the products mentioned above only get part of it right, for example Kindle Fire and Path. I even said that the iPad, an otherwise glorious product, is slightly disappointing at start-up.
What products or apps have given you a great user experience recently? We’d love to hear about what’s making you happy.
Sunday, January 29, 2012
Tech and politics
How do you turn technology nerds into political experts? That’s the question being asked by Engine Advocacy, a group dedicated to getting “tech startups, entrepreneurs and technologists” involved in shaping public policy.
The goal of Engine Advocacy is “to give entrepreneurial people and businesses a voice in the Washington policy arena that they haven’t before,” according to co-founder Michael McGeary.
The group has a stake in a variety of issues, including an open Internet, intellectual property rights, privacy laws, broadband access, spectrum reform and immigration reform. (Why immigration? Engine Advocacy wants a “startup visa” to make it easier for people to come to the U.S. to innovate.)
Engine Advocacy has no registered lobbyists working for it. Instead, the organization seeks to teach Silicon Valley about Washington, D.C., and to give technological innovators “action tools” for getting involved with public policy.
“Most people realize it’s not good enough as an entrepreneur or startup CEO to take the feeling of ‘let me do my job,’” says McGeary. “I come from the political world, I’ve worked on a couple of campaigns and I’ve come to Silicon Valley and I’ve been heartened to talk to so many smart people that are saying ‘ok, let’s figure out how to do this so we don’t have to be passive all the time.’”
McGeary says his organization is a “loosely formed coalition” that’s growing “quickly by the day.” The idea to start the organization came before SOPA (Stop Online Piracy Act) and PIPA (PROTECT IP Act) became the hot-button issues of the day, but according to McGeary, they were the sparks that “set the building on fire,” so to speak.
“What we thought was a good idea in the fall turned into ‘we have to do this right now,’” says McGeary. “[SOPA and PIPA] were a galvanizing moment.”
Engine Advocacy isn’t just trying to educate tech innovators about Washington, it’s also doing the reverse. The organization is making an effort to educate politicians on technology and Internet issues.
“We’ve met with several members of congressional staff,” says McGeary, singling out Sen. Moran of Kansas.
“(Sen. Moran) and his staff are really committed to tech issues and wanting to get more education about them and trying to find ways to legislate in more productive ways. We’re young in the Senate, but together there’s power in injecting these two communities and I’ve been glad about that.”
SEE ALSO: What is ACTA? Why Should You Care? | ACTA ‘Is More Dangerous Than SOPA’
With SOPA and PIPA gone, what’s the next big fight for Engine Advocacy? We asked McGeary if ACTA (Anti-Counterfeiting Trade Agreement) was on Engine Advocacy’s radar.
“Yes, but it appears to be mostly complete at this point. We’ll keep our eye on it as it rolls out, however, to see what implications there are for tech business going forward,” he said.
“We’re keeping our eyes on (SOPA and PIPA), of course, just in case they make a stunning, election-year comeback from being mortally wounded,” says McGeary. “Beyond that, we’re now taking some time to build and strengthen our organization and begin rolling out our legislative priorities for 2012, as well as beginning to develop campaign strategies looking toward the Fall. We’re looking at things like Startup Act and spectrum coming down the pike fairly quickly, but also beginning to beef up our web presence and policy research to be ready for the next fights as they come along.”
Do you think it’s a good idea to get tech experts and innovators involved with the public policy process? Sound off in the comments below.
Wednesday, January 25, 2012
CTOs in the cloud
As Capgemini’s CTO for North America, Joe Coyle hears an awful lot about cloud computing. He hears it from customers that want to evaluate cloud solutions and from vendors that want to win that business. Capgemini, a $12 billion global systems integrator, has relationships with all the major vendors and many enterprise customers, so it’s interesting to hear what Coyle has to say about the current state of the market.
Here are my main takeaways from a recent conversation with him.
1: IBM is cloudier than you think.
Big Blue has a pretty potent set of cloud options, but it’s going about its business very cleverly. Given its big-iron heritage, IBM rarely talks about the hardware component of its cloud portfolio, Coyle said.
“They’re attacking this from a software perspective. They’ve taken Tivoli and are building this software umbrella so that you can take whatever you’re running in your data center now and put all or part of it in a public or private cloud,” he noted. IBM’s 2010 acquisition of Cast Iron also gives it a slick appliance that lets customers integrate in-house apps with SaaS applications running outside.
He doesn’t see IBM cloud penetrating a ton of new smaller businesses, but for many existing IBM shops — and there are a ton of them — IBM cloud is a no-brainer.
2: Microsoft Azure has a tough row to hoe
Coyle is of two minds on Windows Azure, the platform-as-a-service (PaaS) underlying Microsoft’s cloud strategy.
“Azure’s been a bit of a disappointment,” he said. “When Microsoft briefed us on it years ago, all the national [systems integrators] were chomping at the bit. But then it stumbled.”
“Then the message was the software would only run on Azure. That’s fine, but by that point, the world had moved on, companies were already using Amazon,” he said. The usual argument that Azure is a PaaS while Amazon Web Services (AWS) is Infrastructure-as-a-Service (IaaS) simply doesn’t matter to most customers. The big AWS draw is that customers know they can deploy their applications on AWS now and move them to another hosted or in-house data center later.
On the plus side, the Azure technology is solid and, unlike previous Microsoft development technologies, forces developers to follow the rules — they can’t design software services that misbehave. “Azure is extremely powerful and if [Microsoft] can get its act together people will try it,” Coyle said.
But overshadowing all that technical mastery is the perception of Azure as a closed platform — despite its multi-language support. Microsoft’s single biggest problem is customer suspicion that it will use Azure to lock them into the next wave of Microsoft technologies, essentially replacing the Windows/Office upgrade cycle.
“I’m not saying it’s true, but it’s what people think,” Coyle said.
3: Amazon is Amazon
Amazon Web Services are what they are: extremely flexible and leading the league in public cloud. AWS suffered a couple of black eyes in 2011 with an embarrassing four-day outage in April and then a widespread reboot glitch later in the year.
Coyle is pretty forgiving of these miscues. The April outage, he said, was largely due to people implementing their work incorrectly, something that AWS tried to fix manually. There are things you can do now in AWS to prevent this stuff, to build in more reliability and redundancy, although users will have to pay for it, he said.
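One generic way to “build in more reliability,” in the spirit of what Coyle describes, is to retry transient failures with exponential backoff on the client side. This is a general pattern sketched in Python, not an AWS-specific feature or API.

```python
import time

def call_with_retry(operation, max_attempts=4, base_delay=0.5):
    """Run operation(); on failure wait 0.5s, 1s, 2s, ... then give up."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error to the caller
            time.sleep(base_delay * (2 ** attempt))  # back off before retrying
```

The doubling delay is the key design choice: it keeps a fleet of clients from hammering a struggling service in lockstep, which is exactly how small glitches turn into big outages.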
The bottom line? Glitches and all, Amazon is the incumbent public cloud power and will stay that way, he said.
4: OpenStack as big-time cloud disruptor
Coyle is also bullish on the OpenStack movement, which is building a standard cloud foundation out of open-source tools. Initiated by Rackspace and NASA, it’s achieved critical mass with nearly every IT provider — Dell, HP, Cisco and Citrix among them — aboard, and Rackspace offloading management to a more neutral OpenStack Foundation.
“OpenStack will change the world of cloud computing. As a lot of smaller companies look to build their own clouds, this will be a natural choice,” Coyle said.
Who stands to lose if that’s the case? Ironically, the Dells and HPs of the world — all of which are building their own clouds. “Why do you think they joined?” His feeling is these hardware companies — many of which were building their own more vendor-specific clouds — are hedging their bets.
Will OpenStack affect Amazon? “No. Amazon is Amazon,” he said.
5: CIOs are getting over cloud phobia
It’s taken time, but the economics of cloud computing are too good for CIOs to ignore, Coyle said. Any doubts they had about moving at least some corporate data to an outside cloud storage provider, for instance, have evaporated in recent months.
And they’re getting emboldened to do more than storage. The advent of Hadoop and NoSQL technologies means that companies could actually get some use out of all that old data sitting on tape or on disk platters, he said. Uploading that information and massaging it with the latest analytics means historical data can be used to test assumptions and new models, for example, seeing what a price change means for sales over time.
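That price-change example can be sketched with plain Python over made-up data. The numbers below are purely illustrative; a real analysis would pull from the uploaded historical store and control for seasonality.

```python
# Daily records of (day, price, units_sold) around a hypothetical price cut.
sales = [
    (1, 9.99, 120), (2, 9.99, 118), (3, 9.99, 121),
    (4, 8.99, 140), (5, 8.99, 138), (6, 8.99, 145),
]
change_day = 4  # the day the price dropped in this toy dataset

before = [units for day, _, units in sales if day < change_day]
after = [units for day, _, units in sales if day >= change_day]

avg_before = sum(before) / len(before)
avg_after = sum(after) / len(after)
lift = (avg_after - avg_before) / avg_before  # fractional change in sales
```

Even this toy comparison shows the shape of the question: did unit sales move enough after the cut to make up for the lower price?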
Wringing real value out of old data is a pretty good proposition for most CIOs.
Tuesday, January 24, 2012
New school interviewing
Union Square Ventures recently posted an opening for an investment analyst.
Instead of asking for résumés, the New York venture-capital firm—which has invested in Twitter, Foursquare, Zynga and other technology companies—asked applicants to send links representing their "Web presence," such as a Twitter account or Tumblr blog. Applicants also had to submit short videos demonstrating their interest in the position.
Union Square says its process nets better-quality candidates — especially for a venture-capital operation that invests heavily in the Internet and social media — and the firm plans to keep using it to fill analyst positions and other jobs.
A résumé doesn't provide much depth about a candidate, says Christina Cacioppo, an associate at Union Square Ventures who blogs about the hiring process on the company's website and was herself hired after she compiled a profile comprising her personal blog, Twitter feed, LinkedIn profile, and links to social-media sites Delicious and Dopplr, which showed places where she had traveled.
"We are most interested in what people are like, what they are like to work with, how they think," she says.
John Fischer, founder and owner of StickerGiant.com, a Hygiene, Colo., company that makes bumper and marketing stickers, says a résumé isn't the best way to determine whether a potential employee will be a good social fit for the company. Instead, his firm uses an online survey to help screen applicants.
Questions are tailored to the position. A current opening for an Adobe Illustrator expert asks applicants about their skills, but also asks questions such as "What is your ideal dream job?" and "What is the best job you've ever had?" Applicants have the option to attach a résumé, but it isn't required. Mr. Fischer says he started using online questionnaires several years ago, after receiving too many résumés from candidates who had no qualifications or interest. Having applicants fill out surveys is a "self-filter," he says.
A previous posting for an Internet marketing position had applicants rate their marketing and social-media skills on a scale of one to 10 and select from a list of words how friends or co-workers would describe them. Options included: high energy, type-A, laid back, perfect, creative or fun.
In times of high unemployment, bypassing résumés can also help companies winnow out candidates from a broader labor pool.
IGN Entertainment Inc., a gaming and media firm, launched a program dubbed Code Foo, in which it taught programming skills to passionate gamers with little experience, paying participants while they learned. Instead of asking for résumés, the firm posted a series of challenges on its website aimed at gauging candidates' thought processes. (One challenge: Estimate how many pennies lined side by side would span the Golden Gate Bridge.)
It also asked candidates to submit a video demonstrating their love of gaming and the firm's products.
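The penny challenge is mostly an estimation exercise. Assuming (my figures, not IGN's) the Golden Gate Bridge is about 2,737 meters end to end and a US penny is 19.05 mm in diameter, a quick sketch gives a ballpark answer:

```python
# Rough estimate for IGN's penny challenge. Assumptions are mine, not
# IGN's: the Golden Gate Bridge is about 2,737 m end to end, and a US
# penny is 19.05 mm in diameter.
bridge_length_m = 2737.0
penny_diameter_m = 0.01905

pennies = bridge_length_m / penny_diameter_m
print(f"Roughly {pennies:,.0f} pennies span the bridge")
```

The point of the question, of course, is the reasoning — stating assumptions and dividing — not the final count.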
IGN is a unit of News Corp., which also owns The Wall Street Journal.
Nearly 30 people out of about 100 applicants were picked for the six-week Code Foo program, and six were eventually hired full-time. Several of the hires were nontraditional applicants who didn't attend college or who had thin work experience.
"If we had just looked at their résumés at the moment we wouldn't have hired them," says Greg Silva, IGN's vice president of people and places. The company does require résumés for its regular job openings.
At most companies, résumés are still the first step of the recruiting process, even at supposedly nontraditional places like Google Inc., which hired about 7,000 people in 2011, after receiving some two million résumés. Google has an army of "hundreds" of recruiters who actually read every one, says Todd Carlisle, the technology firm's director of staffing.
But Dr. Carlisle says he reads résumés in an unusual way: from the bottom up.
Candidates' early work experience, hobbies, extracurricular activities or nonprofit involvement—such as painting houses to pay for college or touring with a punk rock band through Europe—often provide insight into how well an applicant would fit into the company culture, Dr. Carlisle says.
Plus, "It's the first sample of work we have of yours."
Companies are increasingly relying on social networks such as LinkedIn, video profiles and online quizzes to gauge candidates' suitability for a job. While most still request a résumé as part of the application package, some are bypassing the staid requirement altogether.
Wednesday, January 18, 2012
Site stickiness
Site owners, administrators, web business owners, content producers, and everyone in between, are always trying to find the best ways to encourage visitors to spend more time on their sites. It’s hard enough getting people there in the first place, but keeping visitors and customers on the site once there? No walk in the park. Just ask Groupon.
Again, sites can live or die based on engagement. And as one might expect, there are a thousand ways to increase it, and there’s been a lot of noise around social as a great facilitator of stickier (and more enjoyable) user experiences for websites, apps, and businesses. Approaches to making sites and apps more “social” vary — whether by creating more sharable content, adding comment sections or forums, or adding retweet, “like,” or share buttons to encourage visitors to share on their social networks of choice.
Thanks to some research (and a nifty infographic) from Gigya, the makers of SaaS technology (or a social CRM platform, if you will) that helps businesses make their websites social, we now have further proof that one of the best ways to encourage repeat visitors is through social logins.
As it has proliferated across the Web, Facebook Connect has let people carry their social graphs with them wherever they go. Now, thanks to Facebook, my friends are no longer confined to the social network; they’re in my movie recommendations, check-ins, and everywhere else. In some ways, it’s pretty invasive; in most other ways, it makes our experiences better. Take friendsourced recommendations.
As Gigya’s data shows, site owners that incorporate Facebook Connect, Twitter sign in, etc. stand to benefit: Users spend 50 percent more time on sites when they’re logging in through social networks – that’s four more minutes with a social login than with a standard login. Gigya’s data considered the Web, mobile web, and apps.
This is true of page views, too. Users logged in with a social network view twice as many pages. It seems to follow that when people log into a site through their social network, they want to interact with the site with their social graph in tow — and the social login, according to Gigya, in turn acts as a gateway for engagement through comments, sharing, game mechanics, and activity feeds.
Want to leave a comment? Sign in through your social network. TechCrunch commenters might be familiar with this one.
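Under the hood, those "sign in with" buttons are typically built on OAuth 2.0. A minimal sketch of the first leg — redirecting the user to the provider's authorization page — might look like this; the endpoint, client ID, and redirect URI below are placeholders, not any real provider's values:

```python
from urllib.parse import urlencode

# Sketch of the first leg of an OAuth 2.0 authorization-code flow, the
# protocol behind most "sign in with..." buttons. The endpoint, client
# ID, and redirect URI are placeholders, not a real provider's values.
def build_authorize_url(endpoint, client_id, redirect_uri, scope, state):
    params = {
        "response_type": "code",  # ask the provider for an auth code
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
        "state": state,           # anti-CSRF token, checked on return
    }
    return f"{endpoint}?{urlencode(params)}"

url = build_authorize_url(
    "https://provider.example/oauth/authorize",
    "my-app-id",
    "https://myapp.example/callback",
    "email",
    "xyz123",
)
print(url)
```

After the user approves, the provider redirects back with a code that the site exchanges for a token — which is why the user never hands the site a password.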
And with 800 million users, it’s not surprising that Facebook is the most popular provider of social logins, at 61 percent, followed by Yahoo at 15 percent, Google at 12 percent, Twitter at 10 percent, and LinkedIn at 2 percent. While second place is spread out, the numbers suggest there’s some worth in offering more than a Facebook login option, though you’ll obviously reach the majority of users that way.
In terms of social plugins, users who interact with commenting systems generally spend the most time on a site, and the same holds true for page views. So, add a comment section. You may come to regret it, but the numbers don’t lie: comments increase the amount of time people spend on your site.
Friday, January 13, 2012
Atoms as bits
IBM has demonstrated a radical improvement over today's storage devices, which, IBM argues, require about a million atoms to hold a bit of information. For those keeping score at home, IBM's discovery could mean storage one day becomes possible at 1/83,000th the scale of today's disk drives.
And while the IBM researchers behind the breakthrough say there is no time frame for bringing their work to market, it's clear that the company sees this as a way to one day develop storage that breaks the mold of what's possible today and drastically reduces the size of drives, while significantly boosting their speed and energy efficiency.
In coming up with the atomic-scale memory system, IBM realized an entirely new approach was required if it wanted to break through the physical limits of today's technology, it said. IBM published its findings in "Science" magazine this morning.
The technology industry has depended for years on Moore's Law--in which the number of transistors that can be placed on an integrated circuit doubles every two years--to make smaller and smaller devices. But according to Andreas Heinrich, the lead atomic storage researcher at IBM, ultimately, that means shrinking down to the scale of atoms. And it's not possible to shrink beyond that, he said.
At IBM's Almaden Research Center, Heinrich and his team set out to see if they could start at the atomic level and build up from there, rather than waiting for Moore's Law to get there in 10 or 20 years. "We are explorers in the field of starting from atoms and building structures that might be useful for IBM or other players in industry," Heinrich told CNET.
The question that Heinrich's team has been trying to answer is how many atoms it would take to create a magnetic bit in which it was possible to store information. And they've now arrived at the answer: 12.
And that means that in the future, scientists may be able to apply what they say is an unconventional type of magnetism known as "antiferromagnetism" that could make it possible for data storage at 100 times the level of anything that can be done today.
Computers' understanding of information starts with the bit. A bit can have just two values--one or zero. IBM says that prior to this development, it was unclear how many atoms would be required to create a reliable memory bit.
"With properties similar to those of magnets on a refrigerator, ferromagnets use a magnetic interaction between its constituent atoms that align all their spins--the origin of the atoms' magnetism--in a single direction," IBM said in a release about the storage discovery. "Ferromagnets have worked well for magnetic data storage but a major obstacle for miniaturizing this down to atomic dimensions is the interaction of neighboring bits with each other. The magnetization of one magnetic bit can strongly affect that of its neighbor as a result of its magnetic field. Harnessing magnetic bits at the atomic scale to hold information or perform useful computing operations requires precise control of the interactions between the bits."
In order to achieve this control, Heinrich and his team used IBM Almaden's scanning tunneling microscope to "atomically engineer 12 antiferromagnetically coupled atoms" that were capable of holding on to a data bit for several hours at temperatures as low as 4 kelvin. That meant that by leveraging the "inherent alternating magnetic spin directions," Heinrich and his fellow researchers showed it is possible to place adjacent magnetic bits far closer together than ever before.
While Heinrich has now shown that storage is possible at a scale many orders of magnitude smaller than ever before, it will likely be many years before this results in any kind of marketable technology, he said. It will take quite some time before he and his team can move the technology out of the lab, and creating products from the discovery is a business decision, not one that will be made by Heinrich and his fellow researchers. "We have the luxury of not worrying about [manufacturing]," Heinrich said. "Our mission is to figure out what we want to build, but not worry about how it's practical" because that's a huge next step.
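The headline numbers are easy to check: with about a million atoms per bit in conventional storage and 12 in IBM's demonstration, the density ratio works out to roughly the 1/83,000 figure quoted above.

```python
# Check the scale factor cited above: about a million atoms per bit in
# conventional storage versus the 12 atoms in IBM's demonstration.
atoms_per_bit_today = 1_000_000
atoms_per_bit_ibm = 12

shrink_factor = atoms_per_bit_today / atoms_per_bit_ibm
print(f"About 1/{round(shrink_factor):,}th the scale of today's drives")
```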
Friday, January 6, 2012
Coders write their own ticket
How is this for a gauge of how desperately technology companies are seeking programmers? This weekend, any coder can audition for jobs at companies such as Facebook, Amazon, Groupon and Apple simultaneously — without changing out of their pajamas.
Programmer database startup Interviewstreet is hosting an online coding challenge called CodeSprint beginning Friday, and 75 technology companies will be looking for employment candidates on its leaderboard.
Coders who sign up for the challenge will receive an email on Friday evening when a set of programming problems becomes available. As they solve problems throughout the weekend, they will earn points and can see how they stack up against other participants. After the challenge ends on Sunday night, the participating companies will have the opportunity to contact specific candidates for job interviews based on their performance.
Questions will include basic programming challenges as well as real-world problems. A practice problem in the latter category, for instance, asks users to create a program that finds what time of day any Twitter user tweets most often. Some companies, including Groupon, have created problems that are relevant to their own engineering challenges. In all cases, better code that works faster will earn more points.
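As a sketch of how that Twitter practice problem might be approached — with hard-coded timestamps standing in for tweets a real solution would pull from the Twitter API:

```python
from collections import Counter
from datetime import datetime

# Hard-coded timestamps stand in for tweets pulled from the Twitter API.
timestamps = [
    "2012-01-02 09:15:00", "2012-01-02 14:30:00", "2012-01-03 14:05:00",
    "2012-01-04 14:48:00", "2012-01-04 21:10:00",
]

# Bucket each tweet by hour of day and take the most common bucket.
hours = [datetime.strptime(t, "%Y-%m-%d %H:%M:%S").hour for t in timestamps]
busiest_hour, count = Counter(hours).most_common(1)[0]
print(f"Most tweets fall in the {busiest_hour}:00 hour ({count} of {len(hours)})")
```

Since scoring rewards faster code, a contest entry would also worry about parsing thousands of timestamps efficiently, but the counting logic stays the same.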
This is the second time that Interviewstreet has hosted a coding challenge. The first event, in October, only admitted students at select universities, resulting in 140 job interviews. This upcoming challenge will allow anyone with Internet access to participate.
Interviewstreet is not conducting virtual employment hackathons out of sympathy for unemployed computer scientists (of which there are few). Coding challenges are core to its business, which catalogs programmers based on the skills they have demonstrated. When an employer hires a coder it finds on the site, or through a CodeSprint challenge, it pays the startup $10,000.
Are you participating in the CodeSprint challenge? Let us know in the comments below.
Thursday, January 5, 2012
Cloud Trends
Only a few years ago, cloud computing didn't exist. Or rather, it existed by a dozen other names--such as virtualization, managed hosting, or simply The Internet. Today, it's the must-have feature of every product or service, from mobile phones to cameras to TVs.
Nobody knows this better than enterprise IT professionals, who have to deal with a rising tide of hyperbole and insatiable consumer expectations even as their budgets shrink and the role of technology in business grows. What nobody disputes, however, is that on-demand IT is here to stay. While companies have been relying on software as a service and third-party tools for decades, it has been roughly five years since clouds entered the enterprise IT psyche, introduced by public providers such as Amazon, Google, and Salesforce.com and via private stacks from VMware, Microsoft, and Citrix.

Five years is plenty of time to mature, and we're now much more aware of the limitations and consequences of utility computing. Here, then, are a dozen insights into what the next year will bring, nearly half a decade into the cloud era. In Part I of this two-part series, I'll cover trends 7 through 12, in reverse order. In Part II, I'll cover trends 1 through 6.
We talk about writing code, storing data, and managing infrastructure, but these three things will soon be one and the same. While much of the emphasis around cloud computing has been on virtual machines, it's really about data.
-- Compared to the cost of moving bytes around, nearly every other part of computing is free, according to research done by Microsoft nearly a decade ago.
-- Data is what we're worried will leak out. The reason the analogy between clouds and the electrical grid falls apart is that when someone steals your electrons, they don't have your corporate secrets.
-- Availability is a data problem. I can have 50 instances of an application running around the world. That's easy. But getting them to cooperate on sharing and updating a single user record is the hard part. The more copies we make, the more the data can be corrupted, get out of sync, and so on.
-- With the ability to scale out horizontally, we can make applications fast for millions of users. Scaling the data, however, is an entirely different matter. Ask any architect where the bottleneck is, and more often than not they'll point to data and I/O.
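The sync problem in the list above is easy to demonstrate: two replicas that apply the same updates in a different order end up disagreeing. A toy sketch:

```python
# Two replicas of one record receive the same updates in different
# order; without coordination the copies diverge.
replica_a = {"email": "old@example.com"}
replica_b = {"email": "old@example.com"}

update_1 = {"email": "home@example.com"}
update_2 = {"email": "work@example.com"}

replica_a.update(update_1); replica_a.update(update_2)  # sees 1, then 2
replica_b.update(update_2); replica_b.update(update_1)  # sees 2, then 1

print(replica_a == replica_b)  # the replicas now disagree
```

Real systems resolve this with version vectors, timestamps, or consensus — each of which costs latency, which is exactly why data, not compute, is the bottleneck.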
Tomorrow's applications will include three kinds of code: instructions for the business process itself; instructions for how to handle the data needed; and instructions for how to manage growth, shrinkage, and failure.
Consider, for example, a customer service application that can run on both public and private clouds. This application accesses a database that contains both innocuous information (the name of a customer, or his purchase history) as well as heavily regulated information (his social security number).
When the application is running in a trusted on-premises environment, the call center operator has access to all of the data. The operator can, after properly verifying a caller's identity, make changes to it. But when the application is running in a different environment--such as a public cloud used as part of a disaster recovery plan--the application can't access the social security data.
To accomplish this task, we need to encrypt the information not at the device or file level, but at the table or field level. The application needs to run with different permissions depending on its circumstances. It also needs to be smart enough to tell the operator what's happening, so that the operator can explain the situation to a caller.
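A toy sketch of that idea — field-level protection whose behavior depends on where the application runs. The XOR "cipher" here is a stand-in for real encryption (use a vetted library such as `cryptography` in practice), and the record layout is invented:

```python
from base64 import b64decode, b64encode

# Toy sketch only: XOR stands in for real field-level encryption, and
# the record layout is invented for illustration.
KEY = b"not-a-real-key"

def xor_bytes(data: bytes) -> bytes:
    return bytes(b ^ KEY[i % len(KEY)] for i, b in enumerate(data))

record = {
    "name": "Pat Customer",  # innocuous field, stored in the clear
    "ssn": b64encode(xor_bytes(b"123-45-6789")).decode(),  # protected field
}

def view_record(record, environment):
    # Only the trusted on-premises environment may decrypt regulated fields.
    shown = {"name": record["name"]}
    if environment == "on-premises":
        shown["ssn"] = xor_bytes(b64decode(record["ssn"])).decode()
    else:
        shown["ssn"] = "<unavailable in this environment>"
    return shown

print(view_record(record, "on-premises"))
print(view_record(record, "public-cloud"))
```

The key design point is that the ciphertext, not the whole table, is what moves to the public cloud — so a disaster-recovery copy remains useful without ever exposing the regulated field.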
Similarly, if a data center has a problem, the application can re-launch in another data center. But as machines and programs come online, they need to adapt to the new environment: different unique names, addresses, latencies, and so on. We do this through DevOps practices and orchestration systems like Chef, Puppet, and Pallet.
When a machine moves to a new location, it needs to take with it the data required to run. The more data it takes, the sooner it can run at full speed. But the more it takes, the longer the relocation will take — and the more it will cost. As a result, there's a tradeoff to be made when moving a workload: just enough metadata and application logic to function, but not so much that things slow down.
There are nascent standards that let programmers declare how data should be handled, making workloads move about the world efficiently, adapting to changing circumstances.
Several weeks ago, I was at a doctor's office with 10 physicians and two assistants. One wall of the waiting room was lined with manila file folders, each emblazoned with colored stickers and numbers.
I spent a half hour waiting to see the doctor, and in that time, I saw at least three data errors. In one case, a doctor picked up the wrong folder, opened it, and then realized her mistake. In another, an assistant dropped a folder, spilling patient records across the floor. And in a third, an assistant couldn't find a patient's record because it had been misfiled.
The benefits of electronic health records are huge. In addition to overcoming these kinds of errors, health practitioners can work together on a patient, transferring information from specialist to specialist. And researchers can mine the information to understand the efficacy of a cure or the spread of a disease.
Today, we're concerned about putting data in the cloud. For large organizations, that might be a real concern, but for small organizations like doctors' offices, police precincts, and schools--all of which deal with regulated data--leaving information out of the cloud could be a huge mistake.
We criticize the cloud, but we don't compare apples to apples. We don't really understand the costs of paper medical records, evidence stored on analog tape, student information saved in a single spreadsheet. In 2012, we'll start to do a real comparison of on- and off-cloud solutions, and realize that, for many businesses, the real question is what can't be better done in a cloud.
An ever-increasing percentage of our enterprise applications run in virtual environments. We no longer use virtualization solely for increased utilization--that is, putting several virtual machines on one physical one in order to make the best use of its processing capacity. We also do it for operational efficiency, because it's easier to work with virtual bits than physical atoms.
Between the virtual machine and the bare metal on which it runs is a hypervisor, a piece of code whose core function is to trick the operating system into thinking it's running on bare metal. In some cases, companies add another layer beneath the hypervisor, to further streamline operations.
Wednesday, January 4, 2012
Big Data
An objective observer is more likely to detect our errors than we are.” Large data sets and applied analytics will soon be driving daily business decisions, Dennis Berman reports on the News Hub. Photo: AP. .The new year will bring plenty of splashy stories about iPads and IPOs. There is a more important theme gathering around us: How analytics harvested from massive databases will begin to inform our day-to-day business decisions. Call it Big Data, analytics, or decision science. Over time, this will change your world more than the iPad 3. Computer systems are now becoming powerful enough, and subtle enough, to help us reduce human biases from our decision-making. And this is a key: They can do it in real-time. Inevitably, that “objective observer” will be a kind of organic, evolving database. These systems can now chew through billions of bits of data, analyze them via self-learning algorithms, and package the insights for immediate use. Neither we nor the computers are perfect, but in tandem, we might neutralize our biased, intuitive failings when we price a car, prescribe a medicine, or deploy a sales force. This is playing “Moneyball” at life. Enlarge Image Close.It means fewer hunches and more facts. Think you know something about mortgage bonds? These systems are now of such scale that they can analyze the value of tens of thousands of mortgage-backed securities by picking apart the ongoing, dynamic creditworthiness of tens of millions of individual homeowners. Just such a system has already been built for Wall Street traders. Crunching millions of data points about traffic flows, an analytics system might find that on Fridays a delivery fleet should stick to the highways— despite your devout belief in surface-road shortcuts. You probably hate the idea that human judgment can be improved or even replaced by machines, but you probably hate hurricanes and earthquakes too. The rise of machines is just as inevitable and just as indifferent to your hatred. 
Enlarge Image CloseAssociated Press In the future, we all will play ‘Moneyball’ like BillyBeane, using real-time analytics. .Business people have been having such fantasies of rationalism for decades. Until the last few years, they have been stymied by the cost of storage, slower processing speeds and the flood of data itself, spread sloppily across scores of different databases inside one company. These problems are now being solved. “We’ve just got to the point where the technology really starts to work,” says Michael Lynch, chief executive of Autonomy Corp. Hewlett-Packard Co. just spent $11 billion to buy Autonomy, which vacuums up “unstructured data” then applies it to these analytic approaches. Of course, the hype is growing fast, too. Company valuations in this space have pushed higher, and surely some will falter along the way. That won’t matter much in the long run. The story of 2012 is how these technologies are inching closer to each one of us. For a glimpse, look inside The Schwan Food Co., whose 6,000 roving sales people deliver frozen products to homes of three million customers across the country. Schwan home sales were listless for four straight years, beset by high customer churn and inventory pileups. Over 10 months, the venerable Minnesota company began a program with the aid of Opera Solutions Inc. of New York, an eight-year-old analytics firm. Schwan already had a crude recommendation program. Its sales people could look at six weeks of orders, and suggest purchases from that list. The new project took it into more sophisticated territory: Matching seemingly disparate customers with similar purchase patterns in their past. Opera calls them finding “genetic twins.” It also added ways to track whether customers’ spending was fading from certain categories—say, breakfast foods—and offered product suggestions and discounts to keep the spending intact. 
Schwan’s database is now pushing out more than 1.2 million dynamically generated customer recommendations every day, sent directly to drivers’ handheld devices. Opera says Schwan’s revenues are up 3% to 4% because of it.

“There is a whole class of things that couldn’t be done five years ago,” says Opera CEO Arnab Gupta, who just landed an $84 million venture investment from investors including Accel-KKR and Silver Lake Sumeru. His company is now valued at around $500 million. “A few years ago it might take a month to run a project involving 30 billion separate calculations. Today it can be done in two to three hours.”

The big goal is to push all the heavy back-end work forward to front-line workers, often as a “dashboard” on a handheld device. Soon, a drug saleswoman will have real-time analytics that tell her to focus on the doctors who spent time on social networks that morning, and who are thus more apt to influence colleagues, says Dhiraj C. Rajaram, founder of analytics company Mu Sigma of Northbrook, Ill. Last week Mu Sigma raised $108 million in venture funding from General Atlantic and Sequoia Capital.

A warning awaits, of course. As Mr. Rajaram explains, analytics will eventually become the norm, which will push adaptation and business cycles even faster than they are today. “As computers become better and better, our lives are becoming more and more complex. They create new problems as much as they solve old ones.”

Until then, we should take some comfort—however difficult it may feel—that machines will help us eliminate our worst human tendencies. Mr. Kahneman reminds us best: “We often fail to allow for the possibility that evidence that should be critical to our judgment is missing. What we see is all there is.”
Tuesday, January 3, 2012
Designers who code
Iteration after iteration, Jobs continued to be dissatisfied with the calculator. Espinosa continued to code, slowly inching his way to perfection. But nothing was quite right. In a flash of both brilliance and perhaps frustration, Espinosa put together a visual builder that let Jobs design the calculator himself by changing the thickness of the lines, the size of the buttons, the shading, and the background, without doing too much technical tinkering. He dubbed it “The Steve Jobs Roll-Your-Own Calculator Construction Set.”
After about 10 minutes, Jobs had dialed in to his perfection. This version of the calculator application was shipped with Mac OS for 15 years.
This was a story about two people. But imagine how powerful it would be if it were about one. What if the design vision of Steve Jobs could be in the same brain as the engineering excellence of Chris Espinosa?
It’s no mistake that this is very much the sort of thing that is most valued within the most effective software teams in Silicon Valley. Let’s call it “the designer who codes.” This is the sort of person who can build exactly what he knows people need, with an aesthetic that complements its use, with no back-and-forth.
Silicon Valley start-up Quora does it this way to great effect. They take the process simplicity to the next level. Every person on lead designer Rebekah Cox’s team is also an engineer. The design doesn’t happen in Photoshop. It happens in the text editor, in code.
“Knowing the technology better means more productive arguments when there are disagreements because everyone speaks the same language,” says Cox.
They’re not the only ones. Unsurprisingly, Facebook (where Cox started her career as a product design lead) has been running its design team in the same way for years. Unlike most software companies where day-to-day and detailed product decisions are made by product managers with business backgrounds, Mark Zuckerberg’s design team is his imperial guard. They work closer to him than any other discipline in the company.
The powerful fusion of great design, great engineering, and real authority in the hands of those people results in magical user experiences. As we have seen over and over again, this simple dynamic creates truly great products.
Monday, January 2, 2012
The Web
The Internet is not just billions of linked pages, databases, and (increasingly) mobile apps. The Internet is people. It’s you and me. A web page is only as interesting as the last time somebody linked to it. We are the ones who create all of the data that feeds the Internet. Even Google’s search engine—the ultimate algorithm making sense of it all—determines what is important based on what we do. The pages we link to and visit the most are the ones which tend to show up at the top of search results.
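The ranking idea described here, that pages become important because people link to them, can be sketched with a toy version of the PageRank power iteration. The graph and page names below are invented for illustration; Google’s production ranking uses many more signals than this.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Tiny power-iteration PageRank over a dict of
    page -> list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if not outs:  # dangling page: spread its rank evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
            else:  # each outbound link passes on an equal share
                for q in outs:
                    new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

# Three pages: everyone links to "home", so it ranks highest.
graph = {"home": ["blog"], "blog": ["home"], "about": ["home"]}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # home
```

The point of the sketch is the feedback loop the post describes: the algorithm’s output is determined entirely by what people chose to link to, so our collective behavior is the input.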
Yes, social threatens the primacy of search in that it replaces the search engine’s algorithm with links from people we trust as a way to discover new information. But social sharing becomes just another set of signals to put into the algorithm. Google may not have total access to all the sharing occurring on Facebook or Twitter, but that is why it launched Google+. The value of the social stream is in the data.
The Internet makes us smarter, but we also make the Internet smarter. Information flows back and forth. What is emerging is a division of labor where humans do the things we are better at doing and the Internet—as the vast, global, computing resource available to everyone—does the information tasks that it is better at doing.
We are programming the Internet every day. But the relationship goes two ways. It is also programming us. VC Bryce Roberts calls this notion “programmable people,” which he defines as:
This interplay of humans and computers augmenting each other’s actions and amplifying one another’s understanding.

Amazon’s Mechanical Turk is the best-known example of human labor being used as a computing input. But, lately, more services are popping up which essentially use the Internet to efficiently distribute labor and other resources. Think of services like Skillshare, TaskRabbit or Zaarly, which both create new demand for human labor and spread work around using the Internet as a routing mechanism. Or even peer-to-peer marketplaces such as Airbnb, Etsy, and Kickstarter. Bryce explains:
In the last decade, retailers like Wal-Mart used local area networks to programmatically automate their supply chains and inventory management systems. We’re starting to see the same thing happen with programmable people. Except today the inventory is comprised of time, skills and available tasks, distributed broadly over the open web. People with time and skill can search available tasks on services like Skillslate, Zaarly and TaskRabbit. For something more personalized, services like Etsy can connect you directly with artisans that have the time and skill available to complete a more tailored task.

These are just early examples. The concept of programmable people goes beyond these limited cases. We are an Internet of people. The algorithms which make the Internet run smoothly become better the more we use the Internet. They get smarter because we tell them, either explicitly or through our actions, what to focus on.
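The “routing mechanism” idea, matching tasks that need certain skills to people who have them, can be sketched as a simple greedy assignment. This is a hypothetical model with made-up workers and tasks, not how any of the named services actually dispatch work.

```python
def route_tasks(tasks, workers):
    """Greedy router: assign each task to the first free worker
    who has every skill the task requires."""
    assignments = {}
    busy = set()
    for task, needed in tasks.items():
        for worker, skills in workers.items():
            if worker not in busy and needed <= skills:  # subset test
                assignments[task] = worker
                busy.add(worker)
                break
    return assignments

# Invented marketplace inventory: people offering time and skills.
workers = {
    "ana": {"carpentry", "painting"},
    "ben": {"delivery"},
}
tasks = {
    "build shelf": {"carpentry"},
    "drop off package": {"delivery"},
}
print(route_tasks(tasks, workers))
```

A real marketplace would add prices, ratings, and location, but the core operation is the same: the network treats people’s time and skills as queryable inventory.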
Some futurists have a notion of computers one day surpassing human intelligence—the so-called Singularity. If this ever does occur, it won’t be any single computer but the Internet itself—a network of computers—which achieves that super-intelligence. But it won’t be just the computers that get smarter. It will be us as well. The only question is what the interface will be between man and machine. We started with screens and keyboards. Now we are moving to touch and voice. The more these machines can do, the more humanlike they become. And vice versa.