The Internet of Things, Part II: Uber is Driving to Where IoT is Headed

Let’s look at a current real-life example of an “Internet of Things” application so we can think about how such systems will work in the future. Most IoT examples are along the lines of the “smart coffee cup” example in the previous blog, or intelligent appliances. While these are certainly valid examples, they are also very narrowly focused on “Things”, which is only one part of IoT.  The real game-changing aspect of IoT is not so much the “Things”, as it is the systems that reason about things and that cause those things to act.  Over time, these full-featured IoT applications will impact the way we live and work just as profoundly as “Web 2.0” applications like Facebook, Twitter and other social media apps do today.  And as we’ll see in our example, this is already starting to happen.

Uber is a mobile application that connects people with taxis and cars for hire. Uber may not be an obvious candidate to be considered an “Internet of Things” application, but I would argue that it is a very good example of what IoT really is, and where it is headed. It also has the advantage of being a current real-world, functioning, revenue-generating system, unlike the rather silly (though hopefully evocative) “smart coffee cup” example from the previous post.

By the way, I’m intentionally picking an example of which I personally have no inside or proprietary knowledge; that way, I don’t disclose anybody’s secrets. Note this also means that I am speculating on some of the implementation details, and my selected application—Uber (uber.com)—may implement something differently than the way I describe it. Nonetheless, my guess is I’m not too far off.

If you haven’t used Uber before, here’s how it works. Download the app on your mobile device and register with Uber using your credit card.

  1. When you need a ride, you launch the Uber app on your mobile device.
  2. The Uber app shows your current location on a map, together with the current location of all nearby cars for hire that are currently available. It also shows you the time it would take for the next available car to get to your location. In my case, that time was 2 minutes—but my office is near an airport!
  3. As you watch the map, the locations of the nearby cars change—you can see them all moving around in “real time”.
  4. You can input your destination and get a price quote before you call a car. In my case, the quote was $23 to $31 for a car to take me home—a 15-mile (25 km) drive that typically takes about 25 minutes.
  5. When you decide you want to call a car, you press a button to confirm your pick-up location, in case you want to be picked up somewhere other than where you currently are. If you want to be picked up where you are (or, more specifically, where your phone is), you press one more button to call a car to your current location; otherwise you indicate your preferred pick-up place by touching a location on the map. In case you haven’t been counting, this is two button presses—and no data entry—to bring a car to your current location (if you don’t check the price).
  6. As soon as you choose to call a car, you receive the name of the driver and a description of the car (license plate number, color and make of the car) that is coming to pick you up. You can also see your car instantly change course on the map as it comes to get you, which is pretty cool. Social ratings (e.g. “5 stars”) for the car and driver together with comments from previous customers are shown for you to review during the time you have to wait for the car to arrive (2 minutes in my case).
  7. As the driver gets closer, you get a countdown timer and can see the car’s current location. There’s also an option to exchange messages with the driver while he or she is on the way. That’s handy if you need to say, for example, “I’m waiting inside the main lobby door.” You also get an automatic notification that your car has arrived.
  8. When the car arrives, you hop in. If you got a price quote (I always do), the driver already knows where to take you without asking, because you entered the destination address to get the quote. The driver also gets turn-by-turn directions to your destination on his or her phone without having to enter any data.
  9. When you arrive at your destination, you hop out and say thanks. You do not need to pay, leave a tip or even take your phone out of your pocket. The tip is included in the rate (20% by default, though you can change it), and the cost of the ride is automatically deducted from your credit card. A receipt is automatically emailed to you with details of your route as a reminder.

That’s all there is to it. I can summon a car to take me home with a couple of button presses, and without taking money or a credit card out of my pocket. There are various wrinkles and refinements. For example, you can choose the type of car you want (e.g. taxi, limo, SUV, or personal car), which is reflected in the price you pay; you can rate the driver and car; you can split fares; and there are various other passenger conveniences. There are also numerous features for the driver, including the ability to rate passengers, and suggestions for the best places to go to look for new passengers (based on current demand plus movie and show times, seasonal patterns, and so on). Every driver and passenger I’ve talked with agrees this is a very cool system. It’s also a fairly lucrative one. Uber earns its money by taking a 20% cut of each fare, which yielded an estimated $220M in revenues in 2013, on estimated total fares collected of $1.1B.
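The economics described above can be sketched in a few lines of code. This is purely illustrative: the 20% default tip and 20% cut come from the text, but exactly how Uber splits tips between driver and company is my assumption, not a known detail.

```python
def settle_fare(base_fare: float, tip_rate: float = 0.20, uber_cut: float = 0.20):
    """Illustrative fare settlement: tip is included in the rate, and the
    cost is charged to the rider's card automatically. Whether Uber's cut
    applies to the tip as well is a guess made here for simplicity."""
    tip = base_fare * tip_rate
    total_charged = base_fare + tip           # charged to the rider's card
    uber_share = total_charged * uber_cut     # Uber's 20% cut
    driver_share = total_charged - uber_share # remainder goes to the driver
    return total_charged, uber_share, driver_share

total, uber, driver = settle_fare(25.00)

# At scale, the same arithmetic matches the figures above:
# a 20% cut of $1.1B in total fares is $220M in revenue.
estimated_revenue = 0.20 * 1.1e9
```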

Now, cool as this may be, why is it a good example of IoT? First, note the central role played by the locations of both the car and the passenger, and the near real-time nature of the information that is exchanged. The GPS (global positioning system) chip, as well as other location-determining systems that run on each person’s smartphone, provide the location information. While these location sensors and services are contained in our mobile devices, they are typical of any sensor in any “thing” connected to the Internet of Things. In the Uber case, the “thing” is actually the sensors contained in our and the driver’s smart mobile devices, not literally the passenger, driver or the car. The “Internet” piece is the connection of each sensor, through the mobile device, to Uber’s “brain” and “memory”, which is hosted in the cloud.

I think it’s quite revealing that few people I’ve talked with actually think of Uber as an IoT application even though—as we shall see—it’s a great example. Other than perhaps the car, I didn’t really talk much about “Things” when I described Uber, and I think most users would do the same. Instead we talk primarily of the people involved; that is, passengers and drivers. This is partly because our mobile devices and the sensors they contain have become shorthand for ourselves. We also think of our human needs for transportation and, perhaps, for comfort; that is clearly more important to us than the car itself, as a thing. And unless we’re technically oriented, we never even give a thought to the sensors contained inside our smartphones.

Our tendency to focus on people and human needs will continue, even in the Internet of Things era. If we ever have that sensor-enabled coffee cup I described in part 1 of this series, I bet we will think “I need more coffee” rather than “my cup says it’s time to refill it”. Our needs and ourselves will remain at the center, no matter how smart our devices become—at least for a very long time.

Continuing our analysis of Uber, once the app is launched, the location sensors associated with both the passenger’s and the driver’s mobile devices (the actual “things” being monitored) are regularly broadcasting their location to a “back end” system that is hosted by Uber on the “Internet Cloud”. While it seems to us like we are directly summoning a car closest to us, that’s not exactly what happens. What happens instead is that our mobile device sends a message to Uber’s Cloud service, saying that person “x” (you, as identified by your physical phone) at such-and-such a geographic location (determined by the sensor inside your phone) wants a car sent to that location, or to another location nearby.
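The kind of message described above might look something like the following sketch. The field names, message shape, and identifiers here are entirely my invention for illustration; Uber’s actual protocol is certainly different.

```python
import json
import time

def make_location_update(user_id: str, role: str, lat: float, lon: float) -> str:
    """Build the kind of periodic location report a phone might broadcast
    to a cloud back end. All field names are hypothetical."""
    return json.dumps({
        "user_id": user_id,        # identifies the phone, and so the person
        "role": role,              # "passenger" or "driver"
        "lat": lat,                # from the phone's GPS / location services
        "lon": lon,
        "timestamp": time.time(),  # when the location fix was taken
    })

# Person "x" at such-and-such a geographic location, as in the text:
msg = make_location_update("x", "passenger", 37.7749, -122.4194)
```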

When it receives such a request, Uber’s Cloud service then uses near-real-time analytics to determine which car is the best fit to service the request. I am not privy to Uber’s specific algorithms—and could not talk about them even if I was, as these are core to Uber’s business proposition—but presumably they consider geographic proximity and time to arrive; the loyalty and lifetime value of each driver to Uber (useful for breaking ties if two drivers could serve the same request); customer ratings; how lucrative the fare is likely to be; and other factors such as how recently a given driver was given a fare, the expected duration of the trip, and maybe the driver’s work schedule. Some of this information will be computed on the fly, given the most recent information available—such as the current location of each car. Other information, such as the average fare paid by pick-ups in a given location at a particular time, or the total lifetime value of a particular driver to Uber, is likely to be pre-computed on a scheduled basis, in batch mode.
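As a sketch, such a dispatch decision might combine on-the-fly quantities (like distance) with batch-precomputed metrics (like lifetime value). The weights, feature names, and scoring formula below are pure speculation, chosen only to illustrate the pattern.

```python
import math

def score_driver(driver: dict, pickup_lat: float, pickup_lon: float) -> float:
    """Score one candidate driver for a pickup request. Distance is computed
    on the fly from the latest location report; rating, lifetime value, and
    recent-fare count would come from precomputed batch analytics."""
    dist = math.hypot(driver["lat"] - pickup_lat, driver["lon"] - pickup_lon)
    return (
        -10.0 * dist                      # closer is better
        + 1.0 * driver["rating"]          # customer ratings
        + 0.1 * driver["lifetime_value"]  # precomputed loyalty / value metric
        - 0.5 * driver["recent_fares"]    # spread work among drivers
    )

def assign_car(drivers: list, pickup_lat: float, pickup_lon: float) -> dict:
    """Pick the best-scoring available driver for the request."""
    return max(drivers, key=lambda d: score_driver(d, pickup_lat, pickup_lon))
```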

The net result is that a car is assigned to the passenger very quickly–probably within about a tenth of a second after the system receives your request. Because complex calculations like “lifetime value” are pre-computed, sophisticated metrics can be used to improve the value of business decisions even while making them very quickly. In theory, this allows much better split-second decision making than all but the very best human dispatcher could make. Finally, the result of a decision is “actuated”; that is, put into effect. In the case of Uber, this actuation takes the form of sending a notification to a driver’s phone asking them to pick up a certain passenger at a certain location.  In addition to this instant response, sometimes a complex process may be initiated or taken to the next step as well—for example, perhaps Uber rewards drivers for reaching a certain number of miles, and a particular journey may trigger the process to produce and mail the driver a trophy or gift card.

Figure 1: Basic IoT System Architecture—assigning meaning to data

Other IoT applications tend to follow a similar architecture pattern to the one we’ve just outlined for Uber:

  1. Requests, as well as data from sensors and human observers, are transmitted to a cloud-based system;
  2. A “fast decision making” subsystem quickly processes the incoming observations and requests—for example, “I need a car”;
  3. The fast decision making system uses the “context” provided by a system that analyzes multiple sources of data. This context informs its quick decision-making;
  4. Based on the context, the fast decision making system triggers some action—for example, sending a message to a driver to pick up a particular individual at a particular location;
  5. More complex actions can also be triggered by events and analysis—for example, releasing a driver from service if his or her ratings are too low, or sending them a reward for traveling a certain number of miles.

While well suited to the Internet of Things, this architecture pattern is useful in many other situations: whenever quick “contextualized” action is needed in response to a stream of incoming data and requests. We’ve found elements of this “IoT” architecture very well suited to situations as apparently diverse as mobile advertising and information security, for example.

The key to IoT is the ability to put current observations and requests in a “context”, and then respond to them intelligently. When these observations come from sensors and the response is delivered through mechanical or electronic actuators, the “Internet of Things” label most obviously applies. But the heart of any IoT system is its ability to respond intelligently to events by taking autonomous action. In the Uber case, the heart of their business is their system’s ability to intelligently assign drivers to passengers in a way that encourages the loyalty of both–it’s not just the sensors and actuators in the phones, important as those are. When we talk about the Internet of Things, it’s important to remember that it’s not just the things but context-driven intelligence—or analytics—that will determine the success of the next generation of Internet applications.

The next post in this series describes how an Internet of Things application can quickly make context-aware decisions.

 

This blog has been reprinted with permission from GlobalLogic. The original blog can be accessed here: https://www.globallogic.com/blog/the-internet-of-things-part-ii/
