This list goes on, but the odds are rather good that somewhere in it lies technology that will be rather large in the next 8-24 months. What is the big deal? If you are of my generation or older, you may have a better perspective on this than younger folks accustomed to high-level graphics on computers. The last time we saw a possible shift of this magnitude in computing display technology was either when the computer made it to the TV screen, or when it graduated from green text to graphical user interfaces. Think back to the original Macintosh, with its tiny black-and-white screen, GUI-based Mac OS, and mouse: it was far from the first time such technology had been built. The big deal was making it mainstream. Going back to Xerox PARC, such interfaces had been toyed with for quite some time. This is also the case with virtual and augmented reality. Academic and other research endeavors have been pursuing virtual and augmented reality displays for decades. The problem has been finding a suite of technology affordable enough, and designs polished enough, to try to take it mainstream. Well, it seems we are about to cross that threshold.
A sampling of stories (there are thousands more...)
- NASA is exploring Mars with HoloLens: http://www.theverge.com/2015/1/26/7878735/nasa-mars-exploration-holograms-microsoft-hololens
- Wired talks Nadella, HoloLens, and the future of Microsoft: http://www.wired.com/2015/01/microsoft-nadella
- Magic Leap, a day at the office: https://www.youtube.com/watch?v=kPMHcanq0xM
- Oculus and EVE: Valkyrie: http://www.engadget.com/2014/02/05/eve-valkyrie-launch-exclusive-oculus-rift-vr-headset/
Are there hurdles remaining? Yes. Magic Leap sounds like it is still tethered to a workstation, for one thing. But Microsoft seems to be using similar light field technology, and they appear close to a release. They have said HoloLens will ship with Windows 10. At face value that would seem to mean later this summer (2015). Digging a bit deeper, it sounds like 2015 will be the year of the dev kit for Microsoft, and 2016, along with a lot of the other players mentioned above, will be the year of the consumer onslaught for this new tech. Oculus's Crescent Bay (currently demo only) seems to have passed the major milestones set forth by founder Palmer Luckey: frame rates are above 90 Hz and delay is at or under 5 ms.
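To put those figures in perspective, a quick bit of back-of-envelope arithmetic (mine, not Oculus's published spec):

```python
# Back-of-envelope: at 90 Hz, how much time does each frame get, and does
# a 5 ms delay fit inside it? (My arithmetic, not an Oculus spec sheet.)
refresh_hz = 90
frame_budget_ms = 1000 / refresh_hz   # ~11.1 ms available per frame
delay_ms = 5                          # the claimed delay figure
print(f"frame budget: {frame_budget_ms:.1f} ms, delay: {delay_ms} ms")
# -> frame budget: 11.1 ms, delay: 5 ms (well under half a frame)
```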
I am not sure how fast this will move. Games will take a quick lead as Oculus, Samsung's Gear VR, and Sony's Morpheus venture into the desktop, mobile, and console worlds of immersive gaming. The attractiveness of this interface is hard to deny for racing games, flight simulators, and probably other genres. But work will not be too far behind. Microsoft has BIG plans for HoloLens. If you buy what Nadella is selling in that Wired story, they truly do see this as the next step in computing.
3D has been an on-again, off-again fad since the 1970s gave us those wonderful blue and red glasses. It seems that turning single 2D screens into 3D displays is forever destined to be a niche capability. But driving stereoscopic 2D screens is about to have its chance to shine... and on its tail is the concept of "light field" displays. The difference is fairly simple in concept... but the details are mind bending. The easier of the two is what Oculus is doing. Imagine, if you will, a typical 3D rendered scene: the computer has a 3D coordinate system driving the elements on the screen, and mathematically it understands these things to exist in that coordinate system. The render to the screen, however, is a 2D image from a single perspective. The ultimate limitation here is that with a static shot there can be no parallax (the visual effect of two eyes that allows us to perceive depth). However, if you use two individual 2D screens (or two independently viewed images on a single screen) that are rendered as if from two offset points of view (such as the distance between your eyes), and then show each image to its own eye... voila. Your brain now thinks it is seeing a three-dimensional scene. The rub is that the screen you are looking at is a fixed distance away. So the physical focus of your eyes stays constant while the content itself appears to have depth. The biggest issue here is that the user cannot choose where to focus within that field. Look outside and not everything is in focus; only what you choose to fixate on is. But in the case of the rendered screen it is possible for the entire depth to be 'in focus'. Rendering engines can provide some tricks in this area, but it is not the same as your eyes physically altering their shape to bring elements at different distances into focus.
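To make the "two offset cameras" idea concrete, here is a minimal sketch in Python; the IPD value, function names, and matrix conventions are my own illustrative choices, not any headset SDK's actual API:

```python
import numpy as np

IPD = 0.064  # average interpupillary distance in metres (assumed value)

def look_at(eye, target, up):
    """Right-handed view matrix looking from `eye` toward `target`."""
    f = target - eye
    f = f / np.linalg.norm(f)             # forward
    r = np.cross(f, up)
    r = r / np.linalg.norm(r)             # right
    u = np.cross(r, f)                    # true up
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = r, u, -f
    view[:3, 3] = view[:3, :3] @ -np.asarray(eye, dtype=float)
    return view

def stereo_views(head_pos, forward, up):
    """One scene, two cameras: each eye sits half the IPD to the side."""
    r = np.cross(forward, up)
    r = r / np.linalg.norm(r)
    half = r * (IPD / 2.0)
    left  = look_at(head_pos - half, head_pos - half + forward, up)
    right = look_at(head_pos + half, head_pos + half + forward, up)
    return left, right

# Each frame the engine renders the scene twice, once per view matrix,
# sending the left image to the left eye and the right to the right.
head = np.array([0.0, 1.7, 0.0])          # standing eye height
fwd  = np.array([0.0, 0.0, -1.0])
up   = np.array([0.0, 1.0, 0.0])
left_view, right_view = stereo_views(head, fwd, up)
```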
Light field technology, on the other hand, is a bit different, and I'll be damned if I understand all of it. But the bottom line appears to be that it literally recreates the effect of light reaching your eyes from objects at different distances, such that YOU provide the focus and your eye shifts naturally. An early implementation of the premise of this technology was the Lytro camera. While it wasn't a 'display' tech, it was the first commercial product I am aware of that used the concept of "light fields" where imagery was concerned. Light field displays are essentially working to present in real time the information that Lytro cameras capture. In the current Lytro model you bring up the image and it displays a chosen depth of focus controlled by the user interface; i.e., you can re-focus the image, but you are always viewing it on a non-light-field display. If I understand this right, then using a light field display like HoloLens or Magic Leap would let you look at an image from a Lytro camera, and your EYES would provide the means of shifting your depth of focus rather than a mouse or keyboard command.
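For a rough feel of why a captured light field lets you choose the focal plane after the fact, here is a toy "shift and add" refocus over sub-aperture views. This is a gross simplification of what Lytro actually does, and every name in it is invented for illustration:

```python
import numpy as np

def refocus(sub_views, offsets, alpha):
    """Toy light-field refocus: each sub-aperture image is shifted in
    proportion to its aperture offset and the chosen depth `alpha`,
    then all views are averaged. Objects at the matching depth line up
    (sharp); everything else smears (defocus blur).

    sub_views: list of HxW arrays (the slightly shifted views)
    offsets:   their (dx, dy) aperture positions
    alpha:     the focal depth the viewer picked
    """
    acc = np.zeros_like(sub_views[0], dtype=float)
    for img, (dx, dy) in zip(sub_views, offsets):
        sx = int(round(alpha * dx))
        sy = int(round(alpha * dy))
        acc += np.roll(np.roll(img, sy, axis=0), sx, axis=1)
    return acc / len(sub_views)
```

The point of the sketch: the focal plane is just a parameter applied at viewing time, which is exactly the freedom a light field display hands back to your eyes.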
This has a few implications. One: it is possible, for example, to take into account someone's vision correction needs and present a light field that their brain will then see as 'in focus'. In other words, dispense with the need for glasses. In effect you apply your prescription to the display you use, rather than viewing an image you need your glasses to see clearly. The simpler technology used by Oculus requires you to keep your glasses so far, though it would seem possible to swap prescription lenses into those headsets in place of the 'standard' lenses, which assume the wearer needs no vision correction. In one case sharing your display could require physically swapping out lenses (or wearing glasses with it); in the other it could involve switching software settings (say, a phone-based NFC exchange). Also, because with this tech your eyes are changing focal points, it could alleviate the vision stress that comes from maintaining a fixed focal range for extended periods: the bane of heavy readers and computer users for years.
So, you may be asking yourself... it's all fine and well that this helps NASA explore Mars, but what exactly can it do for me? OK, time for a trip into imagination land.
For that I think you have to look pretty closely at the available demos and sprinkle in a little imagination. I for one see a lot of promise in a couple of elements of Microsoft's HoloLens demo clip, where they show some interesting concepts. First is the notion of pinning a video to a physical location in your house. This suggests there may no longer be any need for multiple screens located around the house. Mobile devices have already made that less common, but this could take the concept to "11". Instead of having to look at a phone, you will just be able to designate an empty section of wall to display content on; if you are looking at it, it is there, and if you look elsewhere you may only hear it. Beyond the cool factor of having a 'virtual screen' managed by the magic in your augmented reality headset, this is a good example of how this technology could in theory obsolete the traditional monitor. If you are familiar with the concept of multiple desktops, this may seem an intuitive leap into the future. If not, you are hopefully in for a treat. Multiple desktops allow you to have lots of windows open and organized across multiple virtual screens without having multiple physical screens. Of course, you can only look at one physical screen at a time, and switching back and forth between these virtual desktops is not a natural skill. In fact I see it as the GUI equivalent of the command line. The basic concept of a GUI is that you SEE all of your interface elements, whereas on a command line you often have to "know" them: there are no menus unless you know the command to list them. Virtual desktops likewise require you to know they are there in order to navigate between them. Now... imagine if expanding your desktop were as easy as turning your head: your monitor moves with your gaze and reveals content you have arranged to the left or right. That is something this augmented reality concept of HoloLens seems to make possible. You could line up a number of video feeds and/or application windows and just turn your head to choose between them, your focus for sound driven by the combination of your head and eye angles.
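As a toy sketch of that "turn your head to choose between them" idea: given the head's yaw and a set of windows pinned at angles around the room, pick which one gets focus (and audio). The window list, angles, and field-of-view value are all invented for illustration:

```python
def focused_window(head_yaw_deg, windows, fov_deg=30.0):
    """windows: list of (name, yaw_deg) pairs for content pinned
    around the room. Returns the window the head is pointed at."""
    for name, yaw in windows:
        # Smallest signed angular difference between head and window.
        diff = (head_yaw_deg - yaw + 180.0) % 360.0 - 180.0
        if abs(diff) <= fov_deg / 2.0:
            return name
    return None  # looking at empty wall: audio only, nothing rendered

pinned = [("news feed", -60.0), ("video call", 0.0), ("recipe", 75.0)]
print(focused_window(10.0, pinned))   # -> "video call"
```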
The second element that changes things is new ways of manipulating 3D content. A computer mouse is a Cartesian coordinate device: you move in the X and Y axes only, and all interactions on the screen are based on this fact. Whether moving a pointer or rotating an object, you are sending at best two pieces of information, an X and a Y coordinate change. Even using this to navigate an X and Y axis based display (a 2D GUI desktop) has proven to be a somewhat unnatural interface that has to be learned. Ever tried to help someone learn to use a mouse for the first time? There is not much innate about it; it is a learned skill that requires the user to equate movement of one object to a variety of possible movements and actions on another element. Compare this to introducing someone to a touch screen tablet. When you then add a Z axis to the display you are trying to manipulate, things just get worse. This is the bane of CAD software and any 3D based computer design software. The idea of reaching your hand out to interact with a computer generated object just as you would a physical object is potentially huge. This takes the idea of a touch screen and adds a Z axis so that you can literally reach into the scene. Why is this big? Look at a group of toddlers. They can, with very few exceptions, pick up and place blocks to build physical objects. Add magnets and they quickly learn new ways to build; give them pieces that click together (K'NEX, LEGO, Duplo, etc.) and they learn that too. Now, I challenge you to go get a random LEGO kit and build it, then try to design the same 3D form in ANY computer program. What a toddler or any average adult can do with almost no training is a painful process on a computer. Now go watch the Microsoft HoloLens reveal and I think you will get a feel for how that may be about to change. You pick something up by reaching out and 'grabbing', and then rotate, tilt, and so on just by moving your hand as you would with a real object. Just by adding that natural ability to manipulate in 3D, you open up some as yet unreachable computing capabilities for the masses.
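To underline the contrast, here is a small sketch of the two input models side by side. Everything in it (the class, the mode names, the pose format) is a hypothetical stand-in, not the HoloLens API:

```python
import numpy as np

class Object3D:
    def __init__(self):
        self.position = np.zeros(3)   # x, y, z in metres
        self.rotation = np.eye(3)     # orientation as a 3x3 matrix

def apply_mouse_drag(obj, dx, dy, mode):
    """A mouse yields only two numbers per event, so every 3-D action
    must be funneled through a learned mode."""
    if mode == "translate_xy":
        obj.position += np.array([dx, dy, 0.0])
    elif mode == "translate_z":
        obj.position += np.array([0.0, 0.0, dy])  # dy reused for depth
    # ...every further degree of freedom needs yet another mode.

def apply_hand_grab(obj, hand_delta_pos, hand_delta_rot):
    """A tracked hand yields a full 6-DoF delta, so the object simply
    follows it, the way a physical block follows your grip."""
    obj.position += hand_delta_pos                 # 3 translation axes
    obj.rotation = hand_delta_rot @ obj.rotation   # plus 3 rotation axes
```

The mouse path is a tower of conventions the user must memorize; the grab path is one rule that matches how hands already work.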
The next thing that excites me is the idea that image processing is reaching the point where it can actually interpret the world around it in somewhat the same fashion we do, in terms of determining the 3D structure of a given area. The concept that a head mounted unit can scan your room in real time and correctly animate a Minecraft session, or make a character jump up on your furniture in a realistic manner? That has implications far, far beyond augmented reality. It means robots can navigate better for cheaper once this kind of tech encounters the economy of scale that only mass adoption makes possible. Am I speaking Greek? How about this: it is the same kind of tech that makes a driverless car possible. If this is about to launch in packages that can fit on your head and run on batteries for significant amounts of time, the tech running on autonomous cars probably reached or passed this level a while back, as they have far fewer size and power constraints than a personal device like these.
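For the curious, the geometric kernel of "scanning your room" is surprisingly small. A sketch of back-projecting one depth-camera pixel into a 3D point, using made-up example intrinsics rather than any real device's calibration:

```python
def depth_pixel_to_point(u, v, d, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Pinhole camera model: a pixel (u, v) with measured depth d
    becomes a camera-space 3D point. fx/fy/cx/cy are illustrative
    intrinsics; real devices use their own calibration."""
    x = (u - cx) * d / fx
    y = (v - cy) * d / fy
    return (x, y, d)

# Sweep this over every pixel of every frame, fuse the points across
# head poses, and you get the room mesh a character can "jump onto".
print(depth_pixel_to_point(400, 300, 2.0))  # a point about 2 m away
```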
FPV control of devices is going to be a whole new ballgame once you combine the concept of light field photography with light field displays. Take HoloLens and its cameras, mount them on a quadcopter, wave a magic wand at the data link needed to support this in real time, and have fun. Wow.
On the more salacious side... the pornography industry is going to have a field day with this stuff. Keeping it more PG-13 than X, think of the average 'strip club' scene in your average cop drama. Now imagine being able to actually walk around in that scene yourself. Webcam performers providing adult 'shows' may well end up being first in line for the technology to create a live 3D scene. On the more G end of the scale... imagine being able to have your WoW pet following you around in real time... kind of like taking Tamagotchi to the nth degree.
Taking the Magic Leap demo to an extreme case: take the current concept of a LARP, put one of these headsets on every participant's head, and drive the experience from a shared world overlaid on an appropriate physical location. Or in other words... don't just bring out your WoW pet to follow you around virtually; you could in theory use this to go into a WoW-like world. Obviously, worlds in which everyone is wearing this kind of headset, or something it could be superimposed over, would work best :-). Not wanting to go to imaginary places? How about "visiting" real ones? Take a large open space, mix in high resolution LIDAR scans of a location, and you can now walk around "anywhere". There are some obvious issues when various topographies come into play; how would you, say, climb the steps of the Colosseum while walking around on a flat plane? Speaking of the Colosseum or any similar set of ruins: what if computer generated reconstructions could be overlaid on the real site as you walked around? See the Acropolis in Athens as it is today, and as it is believed to have been in the past. See scans of the real friezes stored in the British Museum in place on the exterior of the Parthenon. See a limestone-clad pyramid of Giza with a real time depiction of sunlight interacting with the 3D model. See the gun crew of the USS Constitution in action when touring the gun deck. See the battle of Gettysburg when visiting the site. See the Normandy invasion when walking the beaches. See Dr. Martin Luther King's "I Have a Dream" speech overlaid on the present day DC Mall. See the battle of Helm's Deep, or the Battle of the Five Armies, unfold when visiting the Lord of the Rings filming locations in New Zealand. See a re-creation of (insert significant moment of choice at sports arena of choice) when visiting a major stadium. See the skeleton of a Tyrannosaurus overlaid with a depiction of the living, breathing creature while standing next to it.
These are all things that can, and in many cases have, been done today with a computer. What a display sitting in front of you lacks is the sensation of being in the middle of whatever you are looking at. That is the missing dimension, and adding it will be a big deal if it succeeds.
I freely admit that a lot of the above is unabashedly optimistic thinking on my part. I will leave the in-depth exploration of the flaws to those who like that sort of thing. I am well aware there are still problems to overcome. That isn't the point. The point is that a tipping point seems to have been reached where the remaining problems no longer outweigh the benefits of commercially exploiting what is already possible. As for the biggest hurdles remaining? Even if Microsoft has already crammed all the needed horsepower into the package they are touting for HoloLens, this is not exactly a set of Ray-Bans (or stylish headgear of choice) sitting on your head. While it and Oculus are far better than the motorcycle-helmet-dwarfing abominations of 90's VR, this is still not something expected to be a constant companion the way a smartphone is. The Oculus Rift and its ilk are even worse here than the augmented concepts, as they wall you off entirely from reality; these are not things you will use out in the wild. Most of the cases shown in demo clips, and the ones I talk about above, are fairly location dependent: you might wear one around the house or at a specific kind of historic venue, but you probably are not wearing one down to the bar/pub. That said, it isn't inconceivable you might break it out at the coffee shop in place of a laptop. So this technology sits at an odd midpoint between a smartphone and a desktop computer. I suppose the holy grail would be this tech fitting in a much smaller package; the true sci-fi level would be something that works via contact lenses or perhaps some kind of bio-implant. Too far-fetched? Well, I for one think we might as well start figuring out what is over the horizon now, because this tech is no longer over the horizon: it is well in sight and rapidly approaching everyday life.