Archive for the ‘new technology’ Category

Stones and String are Snowflakes too!

Fascinating to watch the snowball effect (sorry, couldn’t help myself!) of everything happening around the upcoming introduction of the neat little “Pebble” wristwatch I wrote about last month.

I wrote about how the super-successful funding the Pebble watch received on Kickstarter is an example of the mass personalization of funding models.  Today brings the announcement that Pebble and Twine have joined forces, enabling the cool Twine device and app to talk to the Pebble watch and alert you to almost any event you want via your wrist.  This video will quickly show you how this works.

Twine, for those not familiar with it, is a small box that connects via WiFi to internal and external sensors and sends alerts out via Twitter, email and the like.  As nicely summarized on their site, this lets you:


Listen to your world, talk to the Internet

Want to monitor things and environments remotely without a nerd degree? Maybe you want to get a tweet when your laundry’s done, an email when the basement floods, or a text message when you left the garage door open.

Twine is the simplest way to get the objects in your life texting, tweeting or emailing. Focus on your idea instead of installation or technical stuff. A durable 2.5" square provides WiFi, internal and external sensors, and two AAA batteries that last for months. A simple web app lets you give Twine human-friendly rules — no programming needed.
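Twine’s actual web app isn’t something I have code for, but the “human-friendly rules” idea it describes can be sketched in a few lines. The names below (Rule, check_rules, the sample sensors and messages) are purely illustrative, not Twine’s real API: each rule pairs a sensor condition with an alert to send when it fires.

```python
# Hypothetical sketch of Twine-style rules: a condition over sensor readings
# plus an alert message. Not Twine's actual API; names are illustrative.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Rule:
    name: str
    condition: Callable[[Dict[str, float]], bool]  # reads sensor values
    alert: str                                     # message to send when it fires


def check_rules(readings: Dict[str, float], rules: List[Rule]) -> List[str]:
    """Return the alert messages for every rule whose condition fires."""
    return [r.alert for r in rules if r.condition(readings)]


rules = [
    Rule("basement flood", lambda s: s["moisture"] > 0.8,
         "Email: the basement is flooding!"),
    Rule("laundry done", lambda s: s["vibration"] < 0.1,
         "Tweet: your laundry's done."),
]

print(check_rules({"moisture": 0.9, "vibration": 0.5}, rules))
# -> ['Email: the basement is flooding!']
```

The point of the design, and of Twine’s pitch, is that the rules read like sentences: no programming needed on the user’s side, just condition plus action.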

Yet another great example of how our world is rapidly evolving into one of mass personalization as The Snowflake Effect continues its exponential growth and influence.


Getting to “just right” is getting closer

A recent posting on TechCrunch, “Check-Ins, Geo-Fences, And The Future Of Privacy”, has a good summary of the balancing act between privacy and geo-location and is worth a quick read.  The addition of location-related information is a key component of the critical context required by the Snowflake Effect principle of getting things “just right”: just the right stuff to just the right person at just the right time on just the right device in just the right way.  However, there is, and likely always will be, a need to keep this location-based information in context itself, so that your location is used when you want it, with whom you want it, and where it will add value.  And it can’t require too much explicit input or action on our part, as we simply won’t remember, and won’t take the time and trouble to do so all the time, which severely reduces the value for us and others.  So we need all the help we can get to make smart decisions, and to make them as automatically as possible, while maintaining the various levels of control each of us will want, which is itself a context-based “it depends” decision that is constantly changing.
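At its core a geo-fence is simple: a centre point plus a radius, with an action triggered only when the device’s position falls inside. A minimal sketch (illustrative only, not any product’s API; the store coordinates are made up) looks like this:

```python
# Minimal geo-fence check: a fence is a centre point plus a radius, and an
# event fires only when the current position falls inside it. Illustrative
# sketch, not any real geo-location API.
from math import radians, sin, cos, asin, sqrt


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # Earth radius ~6371 km


def inside_fence(pos, fence_centre, radius_km):
    """True when pos (lat, lon) is within radius_km of fence_centre."""
    return haversine_km(*pos, *fence_centre) <= radius_km


# A 0.5 km fence around a (hypothetical) grocery store:
store = (47.6097, -122.3331)
print(inside_fence((47.6100, -122.3330), store, 0.5))  # nearby -> True
print(inside_fence((47.7000, -122.3331), store, 0.5))  # ~10 km away -> False
```

The hard part, as the post argues, isn’t the distance check; it’s deciding automatically, and privately, when a crossed fence should actually trigger anything.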

And we are getting more and more help with all these decisions from many sources.  Each of us has a growing army of support in the form of other people and all their input, as well as devices that are finally beginning to gain some “smarts” and do more than what we explicitly tell them to do.

It was therefore most interesting to me to read the comment:

As apps and mobile devices become more geo-aware, a balance will need to be struck between the over-sharing creepiness of constant location broadcasting in the background and the annoyance of the constant check-in chore. On Tuesday, at our Disrupt conference, Facebook’s VP of Product Chris Cox described a future where phones are “contextually aware” so that they can “check into flights, find deals at grocery stores,” and do other things for us at that right place, at the right time. “These things take a bunch of clicks now—it’s all wasting time,” he said. “The phone should know what we want.”

And in other location related news:

Latitude’s New Dashboard View Is Exactly What Passive Location Needs

Tweetdeck Adds Location Column, Integrates Foursquare

Friends Around Me iPhone/iPad App Lets You Interact With Friends Or Strangers, Just Like They Were Really There

Context and contextual awareness IS the next great frontier when it comes to technology advancements, and a key driver of the continued exponential growth of the Snowflake Effect of mass personalization.


Rethinking maps, photos and search

MS Adds much more than Bling to Bing Maps

I continue to be impressed by all the great work coming out of Microsoft Research the past year or more and this presentation by Blaise Aguera y Arcas shows some of the most recent examples.  They are certainly providing great answers to my constant asking of “What if the Impossible Isn’t?”

I picked up on Blaise and the work he and his crew were doing over a year ago, initially with their creation of the Seadragon technology which they brought with them when they joined MS research and used it to create Photosynth which continues to amaze and impress me as well.

It seems like much of their research is centered around visual computing and interfaces and in my post a few days ago “Seeing is Believing 2.0?” I mentioned two other MS Research projects in this area, Pivot and SecondLight.  Today this presentation by Blaise is online from the TED 2010 conference where he shows how they are integrating a lot of technology including Flickr, Seadragon and Photosynth along with some Augmented Reality into Bing maps to enable some amazing new capabilities and possibilities.

Watch the video and I think you too will be impressed not only at the technology but with the new level of functionality and value these mashups are driving into maps.  It is really the whole area of using location as the grounding context for a huge array of other information and uses.

The New Frontier?  Going Inside

In one of my original postings on Photosynth I suggested/hoped that what we would see next was an ability a bit like a “cosmic zoom” * where we could zoom either up or down almost infinitely.  Up into the cosmos of space and down onto not only the street but inside of buildings and then below into the sea. 

* If you’ve never seen Cosmic Zoom, or it has been a while since you watched this 1968 video/animation from the National Film Board of Canada, I’ve embedded it at the bottom so you can treat yourself.

As usual, change is occurring exponentially, so it looks like Microsoft has already granted my wish!  As you’ve hopefully just seen in the video above, Bing Maps now provides an inside-view capability where we can go from the outside street views into the inside of buildings.  They achieve this in part with dedicated “backpack cams” which can be taken inside public buildings and places to provide detailed interior views.  But my real excitement is about the ability to “go inside” via all the geo-tagged photos from “the rest of us” that are posted to Flickr.  This is where the power of Photosynth really shows some promise to me, as it stitches together any number of photos by aligning them in 3D space so that we end up with a highly integrated patchwork of interrelated photos.  This scales extremely well when you think of how many photos there are on Flickr (5,384 posted in the last MINUTE, 2.2 million geotagged in the past month), and the Seadragon technology allows this to all happen extremely fast, with no apparent slowdown no matter how many images are involved.  You’ll also see in the demo how they are able to add in video, including a live feed that is similarly aligned within the maps and photos.  Imagine when (surely not if) there is the ability to similarly include all the videos up on the web and not “just” photos.
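To get a feel for why geotagging makes this tractable at Flickr scale, consider just the first step: you only ever try to stitch photos taken near each other. Photosynth’s real pipeline goes much further, matching image features to recover each camera’s 3D pose; the sketch below (all names and coordinates are mine, purely illustrative) shows only that initial grouping of geo-tagged photos into small location cells.

```python
# Illustrative first step of a Photosynth-like pipeline: bucket geo-tagged
# photos into small lat/lon grid cells so only nearby photos are compared.
# The real system then matches image features within each group to align the
# photos in 3D; this sketch covers only the grouping.
from collections import defaultdict


def grid_key(lat, lon, cell_deg=0.001):
    """Quantize a lat/lon pair into a grid cell roughly 100 m on a side."""
    return (round(lat / cell_deg), round(lon / cell_deg))


def group_photos(photos):
    """photos: iterable of (photo_id, lat, lon). Returns cell -> photo ids."""
    cells = defaultdict(list)
    for pid, lat, lon in photos:
        cells[grid_key(lat, lon)].append(pid)
    return cells


photos = [("a", 47.60970, -122.33310),
          ("b", 47.60972, -122.33308),   # a few metres from "a": same cell
          ("c", 47.62000, -122.34000)]   # elsewhere in the city
groups = group_photos(photos)
print(sorted(len(g) for g in groups.values()))  # -> [1, 2]
```

Grouping first means the expensive pairwise image matching never has to consider the millions of photos taken somewhere else entirely, which is part of why the approach scales.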

Look Up!

As if this wasn’t all more than enough to continue my fascination, it looks like they are also addressing my cosmic-zoom-up wish as well.  At the very end of the video Blaise was out of time but managed to demo an upcoming integration of the WorldWide Telescope into Bing Maps.  In what I thought was a very intuitive implementation, you can simply look up into the sky to start seeing the imagery and data of the night sky from wherever you are standing (on the map).

Location as Context

As I mentioned at the beginning, I think we need to start thinking less about these as “maps”, or at least redefine what maps are, as we evolve more ways to use location as a central form of context within which we work with and view data, especially visual data.  Time is another bit of context, and while it is not shown in the video above, because all the photos and imagery are time-stamped you can go back and forth in time as well to see what things looked like in the past; and by adding Augmented Reality layers on top, we can also see future scenarios such as planned buildings and reconstruction.  Blaise mentioned in the early part of his demo that they have found examples where Photosynth has included some very old photos, and by setting the time back they are able to see the streets of Seattle with horses and carriages.

Rethinking & Redefining

So as you are thinking about all these new capabilities, start to think about how this is all part of fundamentally redefining our notion of maps and, even more so, of search (think spatial), and how we are quickly transforming from consumers to creators.  These and other changes are also transforming how we find, discover and learn from just the right people, things and locations at just the right time.

PS.  FWIW, keep your eyes on both what’s coming out of MS Research as well as what lies ahead for the increased amount of collaboration between Microsoft, Yahoo, Flickr and Facebook.


The Promise of a Personal Assistant is growing

I’ve long lamented the fact that for all the computing power at our disposal, and despite its exponential growth, it all still seems quite “dumb” in terms of knowing much about me, my situation, my habits, etc.  I wrote back in the ’90s that it was like having an assistant with no memory and no ability to learn: you had to teach them, each time, how to do everything you wanted them to do.  But it seemed inevitable that we would see this change as computers and applications got smarter.  We’ve seen some progress in the years since, but it has mostly been on that initial low, flat-feeling part of the exponential curve, and I think we are just about to hit the inflection or tipping point in this area.

One indicator is the recent announcement below from Siri of their voice-based assistant app.  It shows much promise and helps illustrate the distinction between mere voice recognition and voice-based assistance.  The assistance part comes when the app is able not just to recognize what you said but to interpret it and take action as a result.  For example, it hears you say “I’d like a table for two at Il Fornaio tomorrow night at 7.” and does everything from finding the closest Il Fornaio restaurant to displaying driving directions on a map and booking the reservation for two.  There is still much work to be done to make this more ubiquitous and to hook into many other services, but it is a significant step towards the true smart assistance we all need, and I’m optimistic that the exponential curve of the Snowflake Effect of mass customization and personalization continues to shoot ever upward.
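That recognize-then-interpret-then-act distinction can be made concrete with a toy sketch of the pipeline: transcribed text goes to an interpreter that picks an intent and hands it off to a matching service. Everything here is my own illustration, not Siri’s actual implementation; real natural-language interpretation is far more sophisticated than keyword matching, and the service names just echo those mentioned in the article.

```python
# Toy sketch of an assistant pipeline: transcribed text -> interpreted intent
# -> hand-off to a matching service. Keyword matching and service names are
# purely illustrative, not Siri's real NLP or integrations.
def interpret(utterance: str) -> dict:
    """Map a transcribed request to an intent and a (hypothetical) service."""
    text = utterance.lower()
    if "table for" in text or "reservation" in text:
        return {"intent": "book_table", "service": "OpenTable"}
    if "rain" in text or "weather" in text:
        return {"intent": "weather", "service": "WeatherService"}
    if "pizza" in text or "place for" in text:
        return {"intent": "local_search", "service": "Yelp"}
    return {"intent": "web_search", "service": "SearchEngine"}


print(interpret("I'd like a table for two at Il Fornaio tomorrow night at 7"))
# -> {'intent': 'book_table', 'service': 'OpenTable'}
print(interpret("Will it rain today?")["service"])  # -> WeatherService
```

The leap from recognition to assistance is exactly this middle step: plain voice recognition stops at the transcript, while an assistant commits to an interpretation and acts on it.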

Siri Launches Voice-Powered iPhone ‘Assistant’

A new app invites you to command your iPhone in the same way that Captain Kirk addressed the Enterprise’s computer.

Siri’s visual interface displays a transcription of what you say, then hands the data off to an appropriate web service or search engine.

Siri, an artificial intelligence-based voice-recognition startup, launched an iPhone app incorporating its technology on Friday. With the app running, you can address requests to your phone verbally, asking it things like, “Will it rain today?” or “Where is a good place for pizza nearby?” and “I’d like a table for two at Il Fornaio tomorrow night at 7.” The Siri app parses the sound, interprets the request, and hands it off to an appropriate web service, such as OpenTable, Yelp, CitySearch, and so on. It displays the results onscreen as it goes, giving you a chance to correct or adjust your request via onscreen taps.

It’s the most sophisticated voice recognition to appear on a smartphone yet. While Google’s Nexus One offers voice transcription capabilities — so you can speak to enter text into a web form, for instance — the Nexus One doesn’t actually interpret what you’re saying.

