After what seemed like a never-ending sea of hype, which I avoided at all costs after the debacle that was the prequels, I made it to my first showing of The Force Awakens relatively blank. And I am grateful I did.
First off... spoilers, spoilers, spoilers. I reserve the right to talk about the whole movie in my reviews, so if you haven't seen it or do not want to know key details, stop reading now. You have been warned.
The long and short of it is pretty easy: it is Star Wars. I don't really count the prequels... I suppose they count because they are Lucas, but only just. Out with the shiny and absurd amounts of obvious CGI, and back in with the grimy, dirty and in many cases practical sets. In with the snark and out with the potty jokes. Though we still get an angsty teen... for a change he is the one behind the mask instead of the 'good guy'. If you liked the originals and can turn your brain off on the 'science' and enjoy the ride, it's a great flick.
The Good:
The lead characters are perhaps the strongest ever. Rey and Finn have their rough spots, but if you compare them to "Ani" and Padme or Luke and Leia in the previous trilogies' first movies, I think it is a pretty easy call.
The 3D Star Destroyer beauty shots. Wow... almost worth the price of admission for me, but I am a total ship freak. I really want to see some of those scenes via Oculus and total immersion.
The story. Yes, it is a re-take on the original... in part the movie is a take on the whole original trilogy condensed into one flick. But folks bitching about that seem to have no concept of just how basic the original tale was. Lucas distilled the hero's tale and dealt as much in archetypes as he did in any real 'characters', and this movie is little different. His gift was the setting, the effects and the unheard-of pace of the storytelling. The pacing here is absolutely frantic and rarely missteps. It isn't as far out from the norm as the originals were in their time, but that would be difficult. Still, having a modern Star Wars flick that stands proud among the modern style of action storytelling it helped pioneer is a very good thing indeed. It isn't a surprise... it's a well-honed formula and it is executed well. The reveal of Ben Solo as Kylo Ren was done well. The confrontation between father and son was strong. And by hitting all the heartstrings and images in one go, it clears the slate for the rest of the story. This was the welcome back and a thank-you to the fans. Next we get the payoff.
The Sacrifice. Han dies a good death, which explains a lot about why he had top billing and agreed to suit up again... I imagine he stipulated just the one flick. Who knows on the billing, but Harrison Ford was easily the headliner in terms of what he brought to the set. They set it up well... the bitter story of the falling out of friends and lovers (husband and wife? It never explicitly says) is left largely to the imagination, at least for now. It also provided Fisher a little meat, and the scenes they have together are good, if heavy-handed. Time will tell if they age well. Seriously... this is the third first-movie sacrifice and it is easily the most powerful. Obi-Wan was strong, but you really didn't get to know him, and in the final equation he turns blue ghosty... Qui-Gon was set up better but was ruined by the overall catastrophe of the movie around him. This is a powerful, well-loved character whose death, while obvious, still retains its kick because of the setup, and we know there is no blue ghost return. If Ford is in the rest of the movies it will be through flashbacks telling the story of the fall of Ben Solo.
The Comedy. The original was funny and it didn't stoop to low brow stuff for the most part. This hits the familiar notes and does it well.
The music. John Williams owns this music, and we only add some more depth to the catalog here. This was apparently JJ Abrams' first movie without his usual composer... and even that composer said he would rather it be a John Williams score.
Rey's vision sequence. I really really really want a clip of that to dig through.
The cliffhanger with Rey and Luke. 30+ years... literally... we have been waiting to see the story continue, and Luke gets about a minute of screen time and says... nada, zip, zilch, zero. And it is a good thing. Abrams gets one thing about Star Wars if he gets nothing else... talk too much and you can destroy it. Sometimes it is best to just provide enough for the audience's imagination to go wild. As much as I hate not getting more story this go-around, it was an awesome hanging ending leading us to the next story.
The So So:
I am meh on the whole Starkiller 'Death Star to the nth degree' choice. There had to be something, so a planet-sized as opposed to moon-sized big-ass blaster is, I suppose, as good as anything. This was one of the few places I felt the cribbing directly from the originals was a bit much. I mean... where do they go from here for a doomsday weapon?
All the good feels aside for the reveal... the absolutely bananas random connection to Han and Chewie after Finn and Rey blast off in the Falcon. Why didn't they set off the gas trap? They couldn't have known it wasn't the First Order coming on board. Again, don't get me wrong. I don't care how it happened; seeing Han and Chewie walking onto the Falcon again was worth it no matter how crap the setup. But this was one of the crazier non-sequitur moments in the film.
Ummm... wasn't Leia a Skywalker with crazy Force potential? There is plenty more to go where that may come to the fore, but she seemed to be in Rebel Leader mode with zero acknowledgment of her Force abilities (if any?). Considering all else it seems an odd omission. To be continued...
What's with the Resistance vs the Republic vs the First Order? And what exactly got wiped in that Starkiller attack? And... while we are on that beast... a planet consumes the energy of a star and that is just the charge cycle? No mention of the destruction to a solar system inherent in quenching its star? I don't get too lost on the details, else this would be in the bad section. But compared to the clarity of the Death Star as a colossal weapon with a clear purpose... the Starkiller is inconsistently explained and its demonstration's purpose is fairly murky. I think it basically wiped out the Republic, which will now leave the Resistance and the First Order as the primary players, I suppose? Oh, New Republic... we hardly knew you. Apparently this is one area where a book called "Aftermath" fills in the gaps. I may need to check it out.
The Bad:
Not so much against the movie as a movie. But there were a couple of things that really stood out to me. Perhaps I am reading too much into it... but I think there are a couple of choices made that will wear thin in the long run.
JJ basically extended a middle finger to the whole 'midichlorian' mess of over-explaining things Lucas made in the prequels... but he perhaps went too far. In many places the lack of any attempt to explain things is spot on. But there is one scene in particular... the whole plan-the-attack discussion played almost like a Mel Brooks satire scene, with everything except an explicit break of the fourth wall to wink at the audience as if to say... planning... we don't need no steenking planning.
Kylo de-masking. Not sure why they had him pull his mask off so early. I suppose modern actor contracts are not so hot on "behind a mask the whole time" situations, for one. I suppose the intent was to show the struggle with turning away from the dark side. If so, it failed in my case. If he must be unmasked in the first flick, I'd say it should have waited for the confrontation with Han. That might have stood a chance of not ruining the character for me. It wasn't Jar Jar Binks or anything, but it really seemed to be one of... if not the only... really sour notes in the story crafting of the movie for me.
JJ... PLEASE, for the love of all that is holy, STOP WITH THE RANDOM FUCKING CREATURE SCENES. And if you do have to have one... do not spend several minutes showing them immediately eating anything they get their tentacles on and then just running off with our hero. I'd say this one is not quite as bad as the ice beast deal in Star Trek that chased Kirk in to meet old Spock... but it was close. It was also one of the few glaring CGI special effects bits (though still well done). Compare the actors in almost any other scene, with real things to react to, against this one and it should be self-explanatory why practical effects are a good thing.
Closing Remarks:
As long as the other two hold their own and tell their own tale launched from this familiar start, it should be a hell of a ride. There is some truth to the claim this movie is a fanboy re-hash... but really, all stories are a re-hash to some extent or another, so that doesn't bug me. But to continue building a new generation of fans the story will need to chart a new course of its own. If we get Luke as Yoda in the next film and someone (Finn?) frozen in kelvinite (much colder and more solid than carbonite, mind you) when Rey takes off prematurely from said training... I might be a bit ticked off... but if it is shot like this it will still be a fun flick. Here is hoping for more. I mean... how do you get the next "Empire Strikes Back" if all you do is remake "Empire Strikes Back"? A 30+ year hiatus is plenty of reason to hit the touchstones and re-establish a solid foundation for the new audience. But I doubt that will carry three movies.
A lot of folks are making a big deal about Finn and Rey for the parts they have... and I get it. What irks me about it is how much folks are pointing it out. THAT is the problem, folks. If it's a thing, then it's a problem. If they are just the characters and we judge them on that basis alone, THAT is when the problem is beginning to be solved. The longer we spend talking about the fact someone is black in a particular role, or female in a particular role, etc... the more we simply continue the problem, albeit from a slightly more positive place. I seem to recall a quote from Joss Whedon when asked, "Why do you write strong female characters?" He replied, "Because you keep asking that question."
And finally... the most obvious of all questions at the end of the movie: just exactly who is Rey? I have to say my gut keeps saying twins, and she is Kylo's sister, but I have a hard time believing there would have been no recognition of that from Han or Leia. Possibly she was born to Leia after Han took off, but that still doesn't explain why Leia wouldn't acknowledge her. The rumor mill seems fixed on the notion she is Luke's daughter, which could work. The X-wing pilot doll and helmet, the connection via the lightsaber and all that seem to point pretty strongly in that direction. I am kind of hoping for JJ to pull a rabbit out of the hat and hit us with the moral equivalent of the surprise everyone got the first time around when Vader was revealed as Luke's father. Rey as Luke's daughter would not qualify for that. Considering it seems they did a good job keeping Ben Solo a secret for a reveal, here is hoping for similar success with Rey.
Eagerly awaiting the next installment.
Tuesday, December 22, 2015
Wednesday, October 07, 2015
Windows 10 Event - They're baaaaackkkkkkkk
So a funny thing happened over the last 2-3 years: Apple and Microsoft traded places (again), at least when it comes to forefront consumer technology.
Hololens got dev kit delivery dates in Q1 next year, which seems to foreshadow a commercial release next year as well. In terms of augmented reality nothing else is close, except maybe that weird company Google backed (Magic Leap) that keeps not showing off hardware in public (read: vaporware until hands are laid on it outside of that company's control).
Band 2 looks like a native digital wrist-based appliance as opposed to something trying to emulate gears and hands on a watch. Seriously..... VO2 max from something on your wrist???? If that works at all it is just a phenomenal achievement.
Lumia phones with Windows and a dock that lets you run Office with a keyboard, mouse and external media (USB drives etc...) if you so desire. This is the first phone-as-a-full-computing-device option that seems to be viable (apologies to the Motorola Atrix and the aborted Palm effort a few years back, which were just too far ahead of their time).
Surface Pro 4 is now THE uber tablet form factor tech toy. Full stop. It's a tablet first, if a somewhat large one. It has 12 hours of battery life... and can be had in MacBook Air slaughtering trim if you so desire (i7, 16GB RAM, 1TB of storage). I am very interested in seeing a direct comparison between the Apple iPad Pro with Pencil and the Surface Pro 4 with its pen on an art app... and a stress test to see how hot it gets under a serious load, along with real-world battery life.
Surface Book is the best "just one more thing" in a LOOOOOONG time. Is it a tablet? Is it a world-beating laptop? Does it last 12 hours? There are questions about this thing, but one thing that is not in question is that it has the tech world abuzz, and it is a very impressive bit of tech.
Back in 2007 Apple upended the mobile tech world and showed again, for the first time since the Apple ][/Macintosh days, just what an incredible thing it is to have vertical integration of all elements of the product, including the hardware. Doing that kind of hardware at scale is just something that does not happen overnight, and it took a while for everyone to wake up to the fact that without it there was no competing with the newly resurgent Apple. Apple's world-beating success and profits are not being driven nearly as much by their superior capability (though they are special, I grant) as by the fact that there is no competition playing on the field at their level. Think early Tiger Woods, when the rest of the PGA was still a lot of flabby old guys with few tip-top, physically impressive players. Today's PGA looks different, and while I think a Tiger in his prime would still shine against the current field, there is no way he dominates the way he did if he faced the current level of competition. But that isn't what happened, so he is the legend he is in part because of the lack of direct competition in his prime. Google is dabbling in an alternative to what Apple is doing with their Nexus line... and Microsoft is now trying to put a model in play with their stores and Surface lines that puts them on competitive terms.
However, there is a catch. In business there is something called the "first mover advantage". In Apple's case the first-mover advantage they still have is the App Store. Nobody else is close to it... at least in mobile. However, I think a lot of folks are overlooking that the App Store for MacBooks is not nearly so tilted in Apple's favor when compared to the overall Windows ecosystem. True, MS Store vs Apple Store the comparison is poor. But MS software availability via the web in general is just fine, and I dare say still very much in Microsoft's favor. When it comes to PC software it is still far more likely to exist for Windows than it is for Apple. And that is where M$ has a chance to horn their way onto the stage at the same level as Apple... and possibly steal their thunder (again) by coming in and stealing the endgame. Apple should be scared of Surface... and I hope they counter-punch well. We will all benefit from having two equal competitors in this space.
Final thought... Surface Pen vs Apple Pencil on the iPad Pro is the only area where I am waiting to see if Apple has retained an advantage of any kind when it comes to tablets. The demo with Disney artists was impressive, and as a long-time user of iPad styluses it looks like they really have solved the stylus quirks on capacitive systems. For me personally this could end up being the tipping factor on whether I go for a Surface (likely as a device that replaces my current MacBook and iPad) or stick strictly with Apple. The one activity I do on tablets that completely separates it from my laptop use is drawing. An iPad I could do full-speed writing on and/or lag-free drawing with usable force sensitivity would be absolutely awesome. But the draw of an "open" OS (yes, Windows 10 is WAAAAAY more open than iOS) and a device powerful enough and mobile enough to replace my current split setup is damned appealing... even if it means going back to the blue-screen-of-death dealers. Not sold on the phones, but the Windows desktop experience from a phone via that dock is a damned interesting party trick. Ye gods... am I actually looking forward to a trip to a Microsoft Store??? YEP, and so should you!
Sunday, March 22, 2015
Review: Amazon Echo - my new idiot toy
So as I mentioned a while back I ordered one of Amazon's Echo speakers. It came a bit earlier than expected and has been sitting around the house for a couple of weeks now and as the title might suggest... it has some issues.
The Good:
I am not sure what the gripes are regarding the speaker quality. For a single-point speaker it is plenty adequate. It isn't a super mega woofer setup that will blow the clothes off any careless passerby or anything. But it gets loud enough, has reasonable bass levels for the size and is generally free of distortion. A home entertainment system replacement it is not. So long as you keep things in perspective this unit will not disappoint. That said, seeing as one of its most useful features currently is playing your music collection or Prime Music, it is at best average if you are talking about the higher-end Bluetooth speaker systems like the Jawbone Jambox etc...
I like the design. Just a simple black cylinder with a blue halo that lights up when it detects an interaction attempt.
Voice recognition over its own noise. Alexa is pretty smart at listening to you if the only things going on are the Echo making noise and you trying to talk to it. Introduce a third-party sound source, be it a TV or other people in the room, and well... it works about like you would expect if you have ever tried using audio interfaces in noisy environments. I had expected this to be a bit better based on all the discussions about the multiple microphones, but more on that below. Something that takes getting used to is that you do not need to wait for a prompt; in fact if you do it can often lead to the interaction ending prematurely, as Alexa seems to quickly stop listening if input is not detected. Basically you have to trust that when you clearly say "Alexa" (or "Amazon", the only other wake word available) you can naturally continue, just as you would when talking to someone you assume will pay attention once they hear their name. While there are some caveats, I have to say that generally speaking I was pretty impressed with the accuracy of the voice recognition... and that was without any training, something I have been meaning to take the time to do.
Audio weather status is nice and seems to work reliably.
Daily status briefing is a good idea, I haven't really used it much though.
The So So:
Music control is... well, let's just say a bit rough around the edges. For example, by default any music played from your library for, say, a particular artist is played in shuffle order. In order to get it to play in order you need to request that it play the album. Might seem simple enough, but it is pretty common for bands to have self-titled albums, i.e. an album with the same name as the band. Alexa does not understand this concept, and it just so happened I had a couple of albums with this issue that I kept trying to get to play in order. I had to resort to the application... speaking of which...
The app is not good, it is not bad. In fact I'd have to say it is by far the most effective way to interact with Alexa and that is the problem.
Ordering from Echo. Yes, you can do it. Yes, I did it once. Not really sure I like the idea with a soon-to-be more astute young person around. One-click is dangerous enough; voice ordering could be very bad.
Timers. One of the most useful functions of the Siri, Cortana or Google voice interfaces, and always there if you are within shouting distance. The problem? Unlike, say, a timer/reminder you set on a phone, there is no way to add context to the alarm. Something I tried to do was set an alarm with the phrase "remind (name) that it is time to go to bed at 8:00". This set an alarm for 8:00, but there was no information provided as to why the alarm was going off. This is something you can do with Siri, and the info you provide will pop up on screen with the notification/alarm.
The Bad:
The inability to link music from local stores is a shame. You should not have to store music on Amazon to utilize Alexa's music streaming ability. Amazon is about as bad as Apple used to be in the early days of iTunes at ignoring the possibility that someone may have music not curated by their system. Considering Fire devices and Android devices can link to lots of music services, it is a shame that the Echo does not take advantage of more than a couple of internet radio solutions beyond Amazon's own Prime service.
The voice recognition with multiple sound sources. So... I have to say the idea that the Echo had multiple mics seemed to indicate it could possibly do a neat trick in terms of paying attention to voice commands coming from a specific direction and ignoring potential commands emanating from a location different from the one that initiated the wake word. Ummmm..... not the case. A TV behind the unit and people off to the side, very separated spatially, all confuse the crap out of the Echo if they are making noise. Since the cancellation of any noise the Echo itself is making seems to work, I am thinking this thing REALLY needs to be able to interact with the Fire TV; it would be nice if it could pull the same trick off when the Fire TV was your TV sound source. For the issue of multiple folks talking, they really need to sort out a way to do some more sophisticated echo-location of the source of the command phrase and/or separation of vocal signatures, so it can follow the audio coming from a specific source in a noisy environment. I have some hope this is something that can be added via software in the future, provided the microphones are already suitable for doing accurate triangulation of the origin of multiple audio sources. Why did I think they had done more on this front? Because the device really is voice-prompt-controlled first. A truly disappointing experience was being told there were actions that could only be achieved via the application. Voice prompt first indeed.
I can create a to-do list. I can add to the grocery list. But I can't delete things; that takes the app. In fact, with the exception of "stop" for any current activity, it seems Alexa does not do very well at all with negatives. For example, I have been using Alexa to play "classical" music. This frustratingly causes it to randomly interject tracks from the Phantom of the Opera album I have in my library, and I can't figure out a way to tell it Phantom of the Opera is not "classical" music. Saying something like "Stop playing Phantom of the Opera" gets me "playing tracks from Phantom of the Opera". Perhaps if I go into the app I can re-classify the album.
Also, continuing on the subject of classical music, unless you are a true initiate of the subject matter it is damn near impossible to get a specific track to play. For example, you cannot just ask "Play Brandenburg Concerto #5". You have to know something like "Brandenburg Concerto #5 in B flat minor by (some performance group/artist etc...)". Having such a long string almost ensures there will be some misunderstanding in the voice recognition process, or you will remember the wrong key or the wrong artist, or some such.
The lack of integration with the Fire TV. Really, this seems to be a no brainer to me. The Echo should provide an extended interface with a Fire TV.
Conclusion:
I only ordered it because as a Prime subscriber I could get it for $100, and I figured that if nothing else it would be worthwhile as a Bluetooth speaker, and at this point that is about where I am with it. It certainly is not worth the full asking price of $200, as in that range there are definitely better speakers available. Does Alexa make it worth more? As yet I'd say no. I think it would be worth more if it had a big honking battery stuffed into it and lasted forever as a Bluetooth speaker when not plugged in. But, as I pointed out in my original post on the subject, improving the voice interface capabilities is most likely not hardware dependent. I have a growing suspicion that the coming Apple TV upgrade, which will very likely add Siri and some additional HomeKit integration, is going to give Alexa a rather uncomfortable spanking. I hope Amazon is working on a way to link up Alexa and Fire TV; otherwise I have a rather awkward $100 Bluetooth speaker on my hands that occasionally thinks I am talking to it. It is probably a good thing Amazon doesn't allow users to re-define the wake word, as I would probably choose something very derogatory at the moment.
Virtuaugmentality: The coming virtual and augmented reality revolution?
Oculus Rift, brought to you by Facebook. Hololens, brought to you by Microsoft. Magic Leap, backed by Google. Gear VR, brought to you by Samsung, powered by Oculus, bankrolled by Facebook. Morpheus, brought to you by Sony.
The list goes on, but the odds are rather good that in that list lies technology that will be rather large in the next 8-24 months. What is the big deal? If you are of my generation or older you may have a better perspective on this than younger folks accustomed to high-level graphics on computers. The last time we saw this possible level of shift in computing display technology was either when the computer made it to the TV screen, or when it graduated from green text to graphical user interfaces. If you think back to the original Macintosh, with its tiny black-and-white screen, GUI-based Mac OS and mouse, it was far from the first time such technology had been built. The big deal was making it mainstream. Going back to Xerox PARC, such interfaces had been toyed with for quite some time. This is also the case with virtual and augmented reality. Academic and other research endeavors have been pursuing the concepts of virtual and augmented reality displays for decades. The problem has been finding a suite of technology that was affordable enough, and designs that were polished enough, to try and take it mainstream. Well, it seems we are about to cross that threshold.
A sampling of stories (there are thousands more...)
- NASA is exploring mars with Hololens http://www.theverge.com/2015/1/26/7878735/nasa-mars-exploration-holograms-microsoft-hololens
- Wired talks Nadella, Hololens and the future of Microsoft http://www.wired.com/2015/01/microsoft-nadella
- Magic Leap, a day at the office, https://www.youtube.com/watch?v=kPMHcanq0xM
- Oculus and EVE: Valkyrie http://www.engadget.com/2014/02/05/eve-valkyrie-launch-exclusive-oculus-rift-vr-headset/
Are there hurdles remaining? Yes. Magic Leap sounds like it is still tethered to a workstation, for one thing. But Microsoft seems to be using similar light field technology and they appear close to a release. They have said Hololens will be out with Windows 10. At face value that would seem to mean later in the summer of this year (2015). Digging a bit deeper, it sounds like 2015 will be the year of the dev kit for Microsoft, and 2016, along with a lot of the other players mentioned above, will be the year of the consumer onslaught for this new tech. Oculus Crescent Bay (currently demo only) seems to have passed the major milestones set forth by founder Palmer Luckey: frame rates are above 90 Hz and delay is at or under 5 ms.
I am not sure how fast this will move. Games will take a quick lead as Oculus, Samsung's Gear VR and Sony's Morpheus venture into the desktop, mobile and console worlds of immersive gaming. The attractiveness of this interface is hard to deny for racing games, flight simulators and probably other genres. But work will not be too far behind. Microsoft has BIG plans for Hololens. If you buy what Nadella is selling in that Wired story, they truly do see this as the next step in computing.
3D has been an on-again, off-again fad since the 70's use of those wonderful blue and red glasses. It seems that turning single 2D screens into 3D displays is forever destined to be a niche capability. But driving stereoscopic 2D screens is about to have its chance to shine... and on its tail is the concept of "light field" displays. The difference is fairly simple in concept... but the details are mind bending. The easier of the two is what Oculus is doing. Imagine, if you will, a typical 3D rendered scene: the computer has a 3D coordinate system driving the elements on the screen, and mathematically it understands these things to exist in that 3D coordinate system. The render to the screen, however, is a 2D image from a single perspective. The ultimate limitation here is that with a static shot there can be no parallax (the visual effect of two eyes that allows us to perceive depth). However, if you use two individual screens (or two independently viewed images on a single screen) that are rendered as if from two offset points of view (separated by the distance between your eyes) and then show each image to a separate eye... then voila, your brain now thinks it is seeing a three-dimensional scene. The rub is that the screen you are looking at is a static distance away, so the physical focus of your eyes is fixed while the content itself appears to have depth. The biggest issue here is the user's lack of ability to focus within that field by choice. Look outside and everything is not in focus; only what you choose to fix on is. But in the case of the rendered screen it is possible for the entire depth to be 'in focus'. Rendering engines can provide some tricks in this area, but it is not the same as your eyes physically altering their shape to bring elements at different distances into focus.
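To make the "two offset points of view" idea concrete, here is a minimal conceptual sketch of my own (not anything from Oculus or any particular engine) showing how a stereo renderer derives two camera positions from a single head pose; the names and the 64 mm eye separation are just illustrative assumptions.

#include <cstdio>

struct Vec3 { float x, y, z; };

Vec3 add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
Vec3 scale(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }

int main() {
    const float ipd = 0.064f;            // ~64 mm interpupillary distance (assumed)
    Vec3 head  = { 0.0f, 1.7f, 0.0f };   // head position in the scene
    Vec3 right = { 1.0f, 0.0f, 0.0f };   // the head's right-pointing axis

    // Each eye gets its own camera, offset half the IPD along the right axis.
    Vec3 leftEye  = add(head, scale(right, -ipd * 0.5f));
    Vec3 rightEye = add(head, scale(right,  ipd * 0.5f));

    // A real renderer would now draw the whole scene twice, once per eye,
    // and present each image to the matching half of the headset display.
    printf("left eye:  (%.3f, %.3f, %.3f)\n", leftEye.x, leftEye.y, leftEye.z);
    printf("right eye: (%.3f, %.3f, %.3f)\n", rightEye.x, rightEye.y, rightEye.z);
    return 0;
}

The offset between the two renders is what produces the parallax; nothing in the sketch changes the focal distance of the physical screen, which is exactly the limitation described above and the thing light field displays try to solve.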
Light field technology, on the other hand, is a bit different, and I'll be damned if I understand all of it. But the bottom line appears to be that it literally recreates the effect of light reaching your eyes from distant objects, such that YOU provide the focus and your eye shifts its focus naturally. An early implementation of the premise of this technology was the Lytro camera. While it wasn't a 'display' tech, it was the first commercial product I am aware of that used the concept of "light fields" where imagery was concerned. Light field displays are essentially working to provide in real time the information that Lytro cameras capture. In the current Lytro model you bring up the information and it displays a chosen depth of focus controlled by the user interface; you can re-focus the image, but you are always viewing it on a non-light-field display. If I understand this right, then using a light field display like Hololens or Magic Leap would allow you to look at an image from a Lytro camera and your EYES would provide the means of shifting your depth-of-field focus rather than a mouse or keyboard command.
This has a few implications. One... it is possible, for example, to take into account someone's vision correction issues and present a light field to them that their brain will then see as 'in focus', i.e. dispense with the need for glasses. In effect you apply your prescription information to the display you use rather than have an image presented that you have to use your glasses to see clearly. The simpler technology used by Oculus requires you to keep your glasses so far, though it would seem you could plug prescription lenses into those headsets in place of the 'standard' lenses, which assume the user needs no vision correction. In one case sharing your display could require physically swapping out lenses (or wearing glasses with it), and in the other it could involve switching software settings (say, a phone-based NFC exchange). Also, because with this tech your eyes are changing focal points, it could alleviate the vision stress related to maintaining a fixed focal range for extended periods of time, the bane of heavy readers and computer users for years.
So, you may be asking yourself... it's all fine and well that this helps NASA explore Mars, but what exactly can it do for me? OK, time for a trip into imagination land.
For that I think you have to look pretty closely at the available demos and sprinkle in a little bit of imagination. I for one see a lot of promise in a couple of elements of Microsoft's Hololens demo clip, where they show a couple of interesting concepts. First is the notion of pinning a video to a physical location in your house. This could suggest that there will no longer be any need for multiple screens located around the house. That is already less common due to mobile devices, but this could take the concept to "11". Instead of having to look at a phone you will just be able to designate an empty section of wall to display content on; if you are looking at it, it is there, but if you look elsewhere you may only hear it. More than just the cool factor of having a 'virtual screen' managed by magic in your 'augmented reality' headset, this is a good example of how this technology could in theory obsolete the concept of a traditional monitor. If you are familiar with the concept of multiple desktops then this may seem an intuitive leap into the future. If not, you are hopefully in for a treat. Multiple desktops allow you to have lots of windows open and organized across several virtual screens without having to have multiple physical screens. Of course you can only look at one physical screen at a time, and switching back and forth between these virtual desktops is not a natural skill. In fact it is something I see as akin to the GUI equivalent of the command line: the basic concept of a GUI is that you SEE all of your interface elements, whereas in a command line you often have to "know" them. There are no menus unless you know the command to list the menu, for example. Virtual desktops often require you to know they are there in order to know you would want to navigate between them. Now... imagine if expanding your desktop were as easy as turning your head: your monitor moves with your gaze and reveals content you have arranged to the left or right. That is something this augmented reality concept Hololens seems to make possible. You could line up a number of video feeds and/or application windows and just turn your head to choose between them, your focus for sound driven by the combination of your head and eye angles.
The second element that changes things would appear to be new ways of manipulating 3D content. A computer mouse is a Cartesian-coordinate-based device. You move in the x and y axes only, and all interactions on the screen are based on this fact: whether moving a pointer or rotating an object, you are only sending at best two pieces of information, an X and a Y coordinate change. Even using this to navigate an x- and y-axis-based display (i.e. a 2D GUI desktop) has proven to be a somewhat unnatural interface that has to be learned. Ever tried to help someone learn to use a mouse for the first time? There is not much innate about it; it is a learned skill that requires the user to equate movement of one object to a variety of possible movements and actions on another element. Compare this to introducing someone to a touch screen tablet. When you then add a z axis to the display that you are trying to manipulate, things just get worse. This is the bane of CAD software and any 3D-based computer design software. The idea of reaching your hand out to interact with a computer-generated object just as you would a physical object is potentially huge. This is taking the idea of a touch screen and adding a z axis to it so that you can literally reach into the scene. Why is this big? Look at a group of toddlers. They can, with very few exceptions, pick up and place blocks to build physical objects. Add magnets and they quickly learn new ways in which they can build things; give them pieces that click together (K'NEX, Lego, Duplo etc...) and they learn that too. Now, I challenge you to go get a random Lego kit and build it, then try to design the same 3D form in ANY computer program. What a toddler or any average adult can do with almost no training is a painful process on a computer. Now go watch the Microsoft Hololens reveal and I think you will get a feel for how that may be about to change. You pick something up by reaching out and 'grabbing', and then rotate or tilt it just by moving your hand as you would with a real object. Just by adding that natural ability to manipulate in 3D you open up some as-yet unreachable computing capabilities for the masses.
The next thing that excites me is the idea that image processing is reaching the point where it can actually interpret the world around it in somewhat the same fashion that we do, in terms of determining the 3D structure of a given area. The concept that a head-mounted unit can scan your room in real time and correctly animate a Minecraft session, or a character jumping up on your furniture, in a realistic manner? That has implications far, far beyond just augmented reality. It means robots can navigate better for cheaper once this kind of tech encounters the kind of economy of scale that is only possible with mass adoption. Am I speaking Greek? How about this: this is the same kind of tech that makes a driverless car possible. If this is about to launch in packages that can fit on your head and run on batteries for significant amounts of time, the level of tech running on autonomous cars probably reached and/or passed this quite a while back, as they have far fewer size/power constraints than a personal device like these.
FPV control of devices is going to be a whole new ballgame once you combine the concept of light field photography with light field displays. Take Hololens and its cameras, mount them on a quad copter, wave a magic wand on the data link needed to support this in real time and have fun. Wow.
On the more salacious side... the pornography industry is going to have a field day with this stuff. Keeping it more PG-13 than X, think of, say, the average 'strip club' scene in your average cop drama. Now imagine being able to actually walk around in that scene yourself. Webcam performers providing adult 'shows' may well end up being first in line for the technology to create a live 3D scene. On the more G end of the scale... imagine being able to have your WoW pet following you around in real time... kind of like taking Tamagotchi to the nth degree.
Taking the Magic Leap demo to an extreme case: take the current concept of a LARP, then put one of these headsets on the head of everyone involved and drive the experience from a shared world that overlays an appropriate physical location. Or in other words... don't just bring out your WoW pet to follow you around virtually; you could in theory use this to go into a WoW-like world. Obviously worlds in which everyone is wearing this kind of headset, or something that could be superimposed over it, would work best :-). Not wanting to go to imaginary places? How about "visiting" real places? Take a large open space, mix in high resolution LIDAR scans of a location, and you can now walk around "anywhere". There are some obvious issues when various topographies come into play: how would you, say, climb the steps of the Colosseum while walking around on a flat plane? Speaking of the Colosseum or any similar set of ruins, how about if those computer-generated reconstructions could be overlaid on the real site as you walked around? See the Acropolis in Athens as it is today, and as it was believed to be in the past. See scans of the real friezes stored in the British Museum in place on the exterior of the Parthenon. See a limestone-clad pyramid of Giza with a real-time depiction of sunlight interacting with the 3D model. See the gun crew of the USS Constitution in action when touring the gun deck. See the battle of Gettysburg when visiting the site. See the Normandy invasion when walking the beaches. See Dr. Martin Luther King's "I Have a Dream" speech overlaid on the present-day DC Mall. See the battle of Helm's Deep, or the Battle of the Five Armies, unfold when visiting the filming locations in New Zealand for Lord of the Rings. See a re-creation of (insert significant moment of choice at sports arena of choice) when visiting a major stadium. See the skeleton of a Tyrannosaurus overlaid with a depiction of a living, breathing creature while standing next to it.
These are all things that can, and in many cases have, been done today with a computer. What a display sitting in front of you lacks is the sensation of being in the middle of whatever you are looking at. This is a missing dimension, and adding it will be a big deal if it is successful.
I freely admit that a lot of the above is unabashedly optimistic thinking on my part. I will leave the in-depth exploration of the flaws to those who like that sort of thing. I am well aware there are still problems to overcome. That isn't the point. The point is that a tipping point seems to have been reached where the problems remaining do not outweigh the benefits of commercially exploiting what is already possible. As for what the biggest hurdles remaining are? Even if Microsoft has already successfully crammed all the needed horsepower into the package they are touting for Hololens, this is not exactly a set of Ray-Bans (or choose stylish headgear of choice) sitting on your head. While it and Oculus are far better than the motorcycle-helmet-dwarfing abominations of the 90's VR world, this is still something not expected to be a constant companion ala a smartphone. The Oculus Rift and similar ilk are even worse here than the augmented concepts, as they wall you off entirely from reality; these are not things you will use out in the wild. Most of the cases shown in demo clips, and the ones I talk about above, are fairly location dependent: you might wear one around the house or at a specific kind of historic venue or something similar. However, you probably are not wearing one of these down to the bar/pub. That said, it isn't inconceivable you might break it out at the coffee shop in place of a laptop. So this technology is at an odd mid point between, say, a smartphone and a desktop computer. I suppose the holy grail here would be if this tech could fit in a much smaller package. The true sci-fi level of this would be something that could work via contact lenses or perhaps through some kind of bio implant. Too far fetched? Well, I for one think we might as well start figuring out what is over the horizon now, because it seems this tech is no longer over the horizon but is instead well in sight and rapidly approaching everyday life.
Update: T'Rex Rover and Orion Robotics 15a RoboClaw Review
All running off the battery... FINALLY. Still a lot of work to go.
- Pan and Tilt:
- The distance sensor is.... sorta working. Not sure if it is a bad sensor (cheap SainSmart unit) or a bad programmer at the moment. It is mounted with a rubber band, which I need to sort out at some point.
- Camera is throwing a stream up (that is the window on the monitor in the first picture)
- The Arduino UNO is talking to the RoboClaw motor controller and I have a serial command solution for fwd/bkwd and pan/tilt control (a rough sketch of this command mapping appears after this list):
- Backward is a negative value, 1 to 127
- The underlying motor command takes a positive number, so the fun part was figuring out how to split a ramp that went from forward to backward or backward to forward, i.e. if I was going forward and wanted to go backwards I didn't want it to just slam over to the backwards speed. It slows forward motion down to zero before increasing the backward rate to the requested setting.
- Forward is a positive value, 1 to 127
- 128 to 307 controls pan
- 308 to 487 controls tilt
- Reading values from the RoboClaw (more on this in a bit) so I can figure out the encoder feedback for more accurate motor/movement control. Right now it is just giving me the main battery pack voltage.
- Multiple Serial lines are up and confirmed, this stopped some cross talk I was getting between controlling the motors and the Arduino sending data to the Pi.
- The big news: it is now working while self-contained. The batteries are connected to the motor controller, which has a 5V 3A circuit; I built a franken-USB cable to use that to power the Pi. Still using the UNO, so it is powered via USB from the Pi for now. I intend to move to the much smaller Trinket Pro and power it from the 5V pass-through pin on the Pi.
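To make the command ranges above concrete, here is a rough Arduino-style sketch of how such a dispatcher could look. This is my own illustration of the scheme described in the list, not the project's actual code; the pin numbers, baud rate, servo angle mapping, ramp step delay and the sendToRoboClaw() placeholder are all assumptions.

#include <Servo.h>

Servo panServo, tiltServo;
int currentDrive = 0;   // -127..127, negative = backward

// Placeholder for whatever call actually drives the RoboClaw (packet serial, simple serial, etc.).
void sendToRoboClaw(int drive) { /* write the drive command to the controller here */ }

void applyDrive(int target) {
  // Step one unit at a time toward the target; a forward-to-reverse request
  // therefore ramps down through zero before ramping up the other way.
  while (currentDrive != target) {
    currentDrive += (target > currentDrive) ? 1 : -1;
    sendToRoboClaw(currentDrive);
    delay(5);   // small step delay so the ramp is gradual
  }
}

void handleCommand(int cmd) {
  if (cmd >= -127 && cmd <= 127) {
    applyDrive(cmd);                              // drive: sign is direction, magnitude is speed
  } else if (cmd >= 128 && cmd <= 307) {
    panServo.write(map(cmd, 128, 307, 0, 179));   // pan range mapped to a servo angle
  } else if (cmd >= 308 && cmd <= 487) {
    tiltServo.write(map(cmd, 308, 487, 0, 179));  // tilt range mapped to a servo angle
  }
}

void setup() {
  Serial.begin(9600);   // serial line from the Pi (assumed baud rate)
  panServo.attach(9);   // assumed pins
  tiltServo.attach(10);
}

void loop() {
  if (Serial.available()) handleCommand(Serial.parseInt());
}

The one design point worth calling out is the ramp: by always stepping the drive value one unit at a time, a request to reverse necessarily passes through zero instead of slamming the motors straight from full forward to full reverse.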
A picture of the growing rat's nest
What is left:
- Still need to work on a serial line between the Arduino and the Pi GPIO pins. Now that I have the multiple serial lines set up, that should be easy.
- That means I also need to spin up a serial connection program on the Pi instead of using the command line or the Arduino IDE serial interface. The ultimate goal there will be to connect to the remote system (MacBook initially; iPad and Android apps eventually).
- Decide between the 5V Trinket Pro and the 3.3V Trinket Pro. 5V will require a logic shift to talk to the Pi, but definitely doesn't require a logic shift to talk to the servos, sensors and motor controller. You can see this setup below on my new Pi hat: the Trinket is to the right, the logic shifter is to the left, and the camera ribbon goes between them. This is just mocked up right now; I still have to transfer all the wiring from the UNO to this board, if I can make it all fit. If you zoom in you can see my learning process on the solder joints. The hat is a bit scorched in places, and I have no idea yet if I overdid it or not. Same on the Trinket and shifter...
- 3.3V doesn't require a logic shift to the Pi but may require a shift for the motor controller or sensors. I have 4- and 8-line shifters if needed.
- If the cabling for the above doesn't work right I may go to the 3.3V Trinket to simplify the wiring coming out of the hat. Supposedly the RoboClaw is 3V logic tolerant... so I shouldn't need the shifter for it. Not sure about the ping sensor; I think it needs 5V...
- Sort out some servo freak-outs. I am getting some random servo jerks when sending serial in and out. I'm guessing I need to pull the pin low with a resistor, or maybe add a cap or two. Right now the signal wire is plugged straight into the Arduino, so I suspect the pin is floating and getting pulled high here and there. It is also possible I fried something on the Arduino, so I am going to wait until I have the Trinket wired in to try and nail this one down.
- Start building a control program to replace using a VNC session to the Pi desktop. The eventual architecture will be (a rough sketch of the Pi-to-Arduino leg appears after this list):
- Remote (laptop/mobile device) <-> Pi WiFi interface via bi-directional packet traffic <-> Pi to Arduino <-> Arduino -> RoboClaw, servos, sensors etc.
- Joystick or other analog process (WiFi from mobile?) for motion control
- Pi Camera -> netcat stream or HTTP webcast -> remote receiving system (laptop/mobile)
- Joystick or other analog process for pan/tilt control (WiFi for mobile?)
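Since the Pi is basically a pass-through in that chain, here is a rough sketch of what the Pi-to-Arduino leg could look like: a small program that receives a command number over UDP from the remote side and forwards it down the serial line. This is just one possible shape for it, not code from the project; the port number, the /dev/serial0 device path, the baud rate and the plain ASCII-integer protocol are all assumptions on my part.

#include <arpa/inet.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <termios.h>
#include <unistd.h>
#include <cstdio>

// Open the Pi's UART at 9600 baud, 8N1, raw mode (assumed to match the Arduino side).
int openSerial(const char* dev) {
    int fd = open(dev, O_RDWR | O_NOCTTY);
    if (fd < 0) return -1;
    termios tty{};
    tcgetattr(fd, &tty);
    cfsetispeed(&tty, B9600);
    cfsetospeed(&tty, B9600);
    tty.c_cflag &= ~(PARENB | CSTOPB | CSIZE);
    tty.c_cflag |= CS8 | CLOCAL | CREAD;
    tty.c_lflag = 0;
    tty.c_iflag = 0;
    tty.c_oflag = 0;
    tcsetattr(fd, TCSANOW, &tty);
    return fd;
}

int main() {
    int serialFd = openSerial("/dev/serial0");   // assumed Pi UART device
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(5005);                 // assumed control port
    bind(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));

    char buf[32];
    while (true) {
        // Each datagram is expected to carry one ASCII command number, e.g. "-40" or "250".
        ssize_t n = recvfrom(sock, buf, sizeof(buf) - 1, 0, nullptr, nullptr);
        if (n <= 0) continue;
        buf[n] = '\0';
        // Forward it with a newline so Serial.parseInt() on the Arduino side can pick it up.
        dprintf(serialFd, "%s\n", buf);
    }
}

The nice part of a dumb bridge like this is that the remote app only has to fire one small packet per joystick change, and all of the interpretation stays on the Arduino where the motor and servo ranges are already defined.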
New enclosure, Pi HAT, WiFi (increased range, I hope), and camera housing. The 4-line logic level shifter and 5V Trinket are sorta mounted.
You can see the scorch marks... as Homer Simpson might say... "D'OH!!!"
RoboClaw 15a Review:
So far I have liked this controller lineup from Orion Robotics. Prior to these units (ranging from 2x5A to 2x60A) popping up, it seemed the only easily available higher-amperage hobby controllers that weren't R/C based were the SaberTooth modules you see on most of the various robot-building sites. Of particular interest to me were the various modes of control (USB, serial, PWM) and the encoder capability. With a decently encoded motor you can effectively turn relatively cheap DC motors into powerful stepper-motor/servo equivalents with these controllers if you so desire. While that level of precision is not something I am after with my initial rover setup, it is something I want to explore down the road. Until this it looked like I would be doing a separate encoding solution through a microcontroller.
It has so far survived my idiocy (see the previous post for my infinite-current goof), with the exception that the voltage information is reading high. I got the same problem from the circuit I had on the Uno, and the RoboClaw was part of that configuration. I suspect a resistor got partially fried or something. Unfortunately I had no readings prior to that event, so I do not know if I caused it or if it was reading high (compared to my multimeter) originally. I suspect the problem was me... just no way to know for sure short of buying another one.
My only real complaint thus far has been the somewhat sparse documentation. Orion's site forums are not terribly extensive, and there seem to be few other places with info beyond some regurgitation of the built-in examples they provide... which are, in some cases, not easy to pick up. At least not for this particular noob. They consist of working examples more or less, but none of the 'why' or what the different choices mean. For those familiar with this kind of gear I am sure what is provided is more than sufficient to get your bearings. But if you are a novice these controllers may not be the best choice. I am not a complete novice... but my basic electronics knowledge is definitely lacking. You can read up on my various learning experiences with this controller in the previous posts on this project. Hopefully they will pop up in Google searches if anyone else hits similar snags.
All in all though, the units are similarly priced to the SaberTooth controllers, seem to have more options and are built well. Definitely worth a look if you are in the market.
Monday, March 09, 2015
Thoughts: Apple Spring Forward Keynote - iWatch and Macbook
So Apple did its big spring announcement and the short version is....
Pricing and final features of the Apple Watch - no real surprises, if you liked it carry on
Brand new MacBook design - It is exciting like the Ford Thunderbird remake
and the really big news, ResearchKit.... no seriously. But back to that in a bit, got to save the best for last after all :-)
Apple Watch:
First lets talk price vs function:
$350 - $10,000+ pricing. Difference in function? Zip. The utilitarian aluminum $350 Apple Watch Sport will do everything the 18kt gold Apple Watch Edition will do. The only difference is style and materials. Of the upgrades, the sapphire watch face (available starting at $549) seems the more worthwhile from a functional standpoint. Gold vs aluminum... that has me scratching my head. Granted, I am not a watch person, so I place almost no value in precious metals for the sake of precious metals. I was kind of surprised the Edition really didn't have more to distinguish itself. In this I grant credence to the folks saying Apple is, to some extent, going after the "Vertu" crowd. For those not familiar, Vertu has made a business of selling ho-hum Android phones in fancy casings of precious metals and jewels. However, I would suggest it is more accurate to say that Apple is at risk of only attracting the Vertu crowd rather than who they really desire... but who is that? Vertu is about folks who place form over function. They do not CARE that the Vertu phone does nothing new, only that it is more unique and 'stylish'. That is a bit different from the crowd you see buying $10k watches, where, regardless of how much you value hand-crafted movements and polished micro-gearing, there is a functional difference between the higher end of the mechanical-movement watch world and the lower end. Really, stereotypes of silver-spoon folks with inherited fortunes and silly sports/pop/lottery winners aside, folks with money generally do not get that money by being stupid with money. Hence my surprise that Apple didn't do a bit more to separate the functionality. This would seem to leave naked snob/peer pressure as what will drive folks to go for the Edition over the more plebeian offerings. On the other hand, separating the rich from the poor functionality-wise would cause some development nightmares. The numbers that make the Apple ecosystem so attractive to develop for are kind of the antithesis of an elite device owned by only a select few.
Now let's talk style vs. function:
Like it or not, understand it or not, agree with it or not, from a social standpoint there are circles in which someone would not be caught dead wearing a 'cheap' or 'common' watch, no matter how utilitarian. It would be akin to a clip-on tie, tennis shoes under a suit, gaudy costume jewelry in inappropriate situations, etc... they are signs of not belonging, of immaturity, and in such cases it would be better to go with no watch at all IF social opinion of you in that peer group is something that concerns you. One can argue all day and night for or against the merits of such a circle with such values, but arguing will not make them go away. In this world one gets the sense that cellphones have invaded those environs somewhat in spite of their everyman nature, simply because they are indispensable. In this context it would seem that companies like Vertu have exploited a void left unfilled by smartphone makers, sheathing that utility inside something more aesthetically pleasing (talking theory more than actual success here... but they are still in business...). Yet to me (someone who generally cares not about these things) the difference between a phone and a watch is simple. One you wear for the world to see whether or not it is in use. The other they only see when it is in use. As a result one is a fashion item, subject to all that goes with that, and the other is not. I think Apple's bet is that Vertu is gilding the lily... or perhaps more accurately, putting lipstick on a pig, when they try to turn a purely utilitarian and largely hidden item like a phone into a fashion item. By comparison, it could be argued that the Apple Watch Edition is about having the necessary fashion IQ for an item that requires it in order to be used by a VERY desirable class of customer. Translation: the iPhone is used across all social strata without stigma BECAUSE it is not generally viewed as an item of fashion. An Apple Watch attempting to penetrate all strata of society the same way the phone did is fighting an uphill battle if it does not offer a version that wouldn't be viewed the same as a 'clip-on tie' at a black-tie gala. Even a bad silk tie with a garish pattern, poorly tied, is better than a polyester clip-on in this sense. In a way, if all Apple manages to do is start a legitimate conversation (heck, make that an argument) in this world, as opposed to being outright dismissed, they win. Why? Because if they can get their foot in the door on the fashion front, I think the obvious utility may carry them through to true success in making a high-end fashion luxury item. I think their chances of success are better than 50/50.
Now let's talk value:
Should you plunk down a minimum of $350 for an iPhone accessory? Most Android Wear devices and the Pebble are cheaper than this, and in about a year and a half they have not managed to exactly set the world on fire. Will the Apple Watch be different? My bet is yes, and Android Wear is actually my answer as to why. Despite its awkward ways, an Android Wear watch is useful. Surprisingly so. If an Apple Watch is only as good as the best Android Wear device it will be far more successful than any single Android Wear device to date, and most likely all of them combined, along with Pebble to boot. That is to say, I think Apple clears 2 million devices (the current combined estimate of those two worlds) before 2016 hits. Probably before the 2nd quarter is over, to be honest. Why? Here are the keys as I see them.
- Even with slower iOS 8 adoption, Apple has a much more homogeneous audience than Android, and they are proven higher spenders.
- This all but assures higher adoption rates and better app ecosystem development. Since this can add to existing apps/ecosystems, there are already a myriad of ways for folks to be seduced into wrist-wrapped silicon.
- Glance notifications on your wrist are better than you think if you have not yet experienced them.
- Silent wrist notifications are better than vibrating phones on desks or even in pockets in many cases.
- Even if the phone in your pocket is silent, your rustling and moving to dig it out is not.
- Glancing at your wrist has a much higher social acceptance than glancing at your phone.
- It is possible this will erode over time if folks start associating wrist glancing with phone glancing but if that happens it will only be because it has become common enough that someone looking at their wrist is no longer associated with looking at a watch 'first'. Trust me, Apple would be thrilled if this happened.
- Apple Pay biometrically authenticated on your wrist is the potential killer feature.
- I see this as a way to finally kill the password. If this works and folks trust this feature with their bank account then I suspect this could be a link to a drastic reduction in password entry needs.
- Design may prove much more stable than folks seem to think. Most discussion seems to assume this device will be on a phone/tablet-type refresh cycle, and that is probably not the way it will go.
- The horsepower is on the phone. The watch just needs to be able to display stuff snappily, and by all appearances it is on par with current phone-level responsiveness. The last generation or two of phones really have hit the point of diminishing returns on speeding up the interface.
The reasons to not get one?
- It is possible it won't take off. If the app ecosystem does not follow/scale at a rate similar to those of the iPhone and iPad, it will be a good indicator of whether or not this thing has legs.
- First generation designs from Apple have a tendency to be rougher than they look so if you can hold off from having the latest thing I'd let this get out into the wild for a month or three and see if any ugly unforeseen design flaws raise their heads.
- Apple only seems disappointed in two things, and combined I think they will drive Apple Watch early adopters to want a new version much earlier than will be the case down the road (v2-v4 at a guess). But I do not expect this to be on a similar pace to phones.
- Battery life. Expect Apple to figure out how to drastically improve this if the watch takes off. They seem to be on the longer end of all the smart watches (other than Pebble) and the reviews will tell us if this is true or not. They have got to be betting the utility makes it worth dealing with charging. My experience with Android wear says it can make it worth dealing with.... but only just. I think long term a week is the territory a smart watch needs to get to, if not a month.
- Sensors. Several credible stories seem to prove out that Apple had to drastically scale back their plans on the sensor front as they proved tougher nuts to crack than expected. Those same sources are all also saying they didn't give up.
Bottom line?
If you are tired of the smartphone hunch, and on a scale from thumb warrior to read-only you slot more towards the read-only end, then the Apple Watch (or Android Wear/Pebble) is worth your attention and quite possibly your money. If you have calluses on your thumbs from furious texting despite the oily, glossy-smooth touch screens of current phones, I am not sure you will get much benefit unless you are going to talk to your watch, as your phone will be out most of the time anyway.
As for me, it's the first time I have been interested in personally buying a watch for any reason since I got my first smartphone. Apple Watch Sport in Space Grey (or maybe the steel Apple Watch) with a Milanese loop, please.... and yes, from the Apple page it seems the Sport/Milanese loop combination is possible. And it would be cheaper than an Apple Watch with a Milanese loop. If I go for the steel Watch it will be for the sapphire, and only if I think it has a legitimate chance of being a 5-year device. Otherwise I will wait.
Macbook Revamp:
I am really excited about this design. But unfortunately I think I am more excited about what it means for the Pro updates down the road than I am for this particular device. I smell the original MacBook Air, part II. Why? It looks like you can't go to 16GB of RAM on the configuration. Combine that with the lackluster Core M performance folks have been seeing in all the other super thin-and-lights popping out, and I think 'tolerable' will be the best outcome on the performance front once these hit reviewers' hands (I'll be happy if they prove me wrong, mind you...). For those who want silent and portable it will be enough, and it should be a success. I tend to require more oomph, and it looks like that is not ready for prime time with a serious chip. The Airs have more power than this guy... quite a bit more power.
The single USB-C port thing is, I think, going to cause more wailing and gnashing of teeth than it merits. But, that said, until the world catches up a bit to the connector-less style of operating, it is going to be a bit painful on the early-adopter front. Be very sure before buying this thing that it will fit your workflow needs.
Those misgivings aside, everything else looked stellar. The keyboard I look forward to trying. The trackpad seems a good update, and the 30% reduction in the screen's power consumption is staggering. All-metal body, no internal moving parts, solid keys, more solid trackpad. If they got the design nailed, this thing could be a very long-lived device. Hence my real worry about the somewhat underpowered nature of the CPU/RAM options. They may have sacrificed too much to get rid of the fan and vents.
Why does this make me salivate for the MacBook Pro updates? Take the current design that gets 9 hours and add a 30% reduction in screen power needs, and you probably wind up with 10-11 hours of battery life with no other changes. Shrink the logic board in a similar fashion, and even if you still need an active cooling solution for the higher horsepower, it still fits in a smaller package. Bump up the battery capacity using that terracing technique at the same time you shrink the logic board and display power/space usage, and you may hit 15+ hours of real usage on a serious laptop. Put USB-C ports on both sides of the chassis and you can now plug power in from either end. YES. No more being just the width of your computer away from the nearest power source (happens to me all the time for some reason). I do give a frowny face for losing the MagSafe power connection. I'll get over it if a future Pro has USB-C that can power the laptop from either side of the device.
I would like to see a fully tarted-up 15" along these lines at WWDC, or maybe in the fall (new display tech, new keyboard, new terraced all-metal unibody with form-fitting batteries, shrunken logic board even if it retains active cooling). But it is more likely it will get bumped along the lines of the way the 13" got bumped (flash speed bump, Force Touch trackpad). Does it get USB-C though? That seems to be the way Apple thinks things are going to go. But I was kind of surprised not to see a USB-C port on the 13" rMBP refresh. Perhaps they are hedging their bets a bit here? Or just admitting that professional-world adoption of USB-C is likely going to lag behind the more general consumer market. Who knows.
ResearchKit:
Everyone is talking about the Watch and MacBook announcements. But I have seen almost nobody talk about ResearchKit. Folks.... that is a bona fide potential game changer in the world of medical research. And that is a big deal. Even if they do not significantly increase participation levels, and only manage to significantly increase the amount and quality of data points per participant, it will represent a significant improvement. If they successfully change the scale at which folks participate (they are talking about potentially going from tens or hundreds to MILLIONS) AND increase the amount and quality of data capture, it could lead to major increases in both the speed and accuracy of results from medical studies. This is a foundation they are laying, and it seems to align nicely with the idea that future versions of the Apple Watch will introduce major advancements in health-related sensors. Future headline prediction: "Many decry the supposed frivolousness and waste of insurance companies now paying for doctor-prescribed Apple Watches for medical monitoring". You heard it here first.
Friday, February 27, 2015
FCC and Net Neutrality: What does it mean to classify ISPs as Common Carriers
Put simply, the FCC wants to make the ISPs dumb pipes for the internet. That is they make no decisions about what a user can access. THIS. IS. A. GOOD. THING.
IF
That is what they do.
The current maelstrom of debate around it mostly boils down to whether or not folks believe the FCC will truly forbear (waive) the vast majority of Title II regulatory requirements in order to focus on this rather important issue.
But the internet already works, so how can this be anything other than a needless government power grab, or Obama overreach, etc...? Glad you asked.
Let us talk about some examples of poor behavior in the business practices of ISPs with regards to the internet 'working' and this concept of Network Neutrality.
Corporate E-mail vs Private e-mail:
Back in the dark days before smartphones were a 'cool' thing, we had an interesting state of affairs when it came to accessing e-mail on your phone. Work e-mail was different from personal e-mail, according to mobile internet providers. To access your corporate e-mail you had to sign up for a corporate data plan, which was more expensive. The technical difference between accessing corporate e-mail and personal e-mail was... anyone? anyone? Nada, zip, zilch, zero, NOTHING. Yet one required a more expensive plan to move bits around between sources and destinations of internet traffic in precisely the same way as the other. This is an example of a non-neutral internet. This is not a post-FCC-ruling thing folks are worried about. This was, and in some cases may still be, a reality of mobile broadband service offerings.
Tethering:
Current business practice of ISPs is that tethering your phone to another device for the purpose of that device accessing the internet (i.e. using your phone as a WiFi hotspot for your laptop) requires an additional charge over your basic access charge. This means you as a user cannot decide to use your (insert bandwidth amount of your choice within the limits of your plan) how you like. You MUST pay extra to use those bits to send data to your laptop. Again, in terms of the service provided to you by the ISP, there is ZERO difference between bits that stop at your phone and bits that stop at your laptop. This is a current example of a non-neutral internet. This is not a fear of something to be added after this FCC ruling. It is a fact of life for millions of US smartphone users.
App/File Download Limits:
Ever gotten a message when purchasing an app or downloading a file that says something like "File too big, you have to connect to WiFi"? If you have paid for 5GB of data and you want to download a 500MB file (or 10% of the amount of bandwidth you paid for), why can you not do it? Again, this is not a state of affairs people are afraid this move by the FCC will cause. It is a current reality/limitation imposed by mobile internet providers. Again, this is an example of a non-neutral internet.
Service Blocking:
Apple FaceTime was not allowed on the mobile networks of both Verizon and AT&T at launch. Numerous other services have faced a similar fate. Often the terms of service of broadband providers have a clause about 'streaming' content. You know, like Netflix video, Pandora Radio, or pretty much the most common kinds of things folks use when accessing the internet. Do you know what the technical difference is between downloading 1GB of data and "streaming" it, in terms of the ISP's system? If you guessed "nothing" you would be correct. This is another example of how ISPs restrict your ability to utilize data allotments you pay for. This is an example of how we currently do NOT have a neutral internet.
Throttling of Unlimited Bandwidth plans:
Beyond an arbitrary limit, it is standard practice to throttle users with unlimited plans to lower rates. That is, you pay for a service that says it is unlimited, and then they introduce an artificial performance reduction once you pass a number (5GB is a common threshold). If someone pays for an 'unlimited' plan, is it fair to artificially limit their level of service because they actually use it as 'unlimited'? Balancing user load is a core function that must be managed by ISPs. When there are too many users and not enough bandwidth to go around, it is OK to reduce service across the board in order to continue providing service to all users. It is not fair to single out users based on how much or how little of their plan they choose to use. This is yet another example of how we have a non-neutral internet now.
How does this move help?
These examples all have something in common. They are cases the FCC has already been pursuing against broadband providers on behalf of users. They are all cases at the ISP level, and all related to ISPs trying to differentiate between the various types of services being used by a customer instead of treating all passage of internet traffic the same.
To boil it all down as far as I know how: no matter what you are doing with your internet connection, in the end it is all about moving 1's and 0's from one point to another. Net Neutrality is about making sure 1's and 0's move equally regardless of what requests/services etc... are driving them.
To make matters worse, in many cases the ISPs have competing services. Take Netflix vs, say, on-demand content provided by Comcast or Comcast partners. Comcast has a vested interest in having customers pay for its on-demand content instead of Netflix content. There are numerous documented cases where Comcast (and other ISPs) artificially limited customer access to services like Netflix while no similar constraint was placed on access to their own on-demand content (or that of a favored provider, read: a Netflix competitor). This is a classic conflict of interest, and the FCC seems intent on making sure that providers are not allowed to pick which services to support. Without Common Carrier status imposed on them, there is no legal reason why Comcast cannot choose to deny a particular service or, worse, sabotage it. The particular case of Netflix is worth a post on its own, as it does a good job bringing up a lot of the complexities involved in this debate. Here is a taste: if Comcast reduces service levels for Netflix traffic at the same time it increases/prioritizes traffic to its own on-demand content, who do you think the customer blames when their Netflix stream doesn't work (or works poorly) and the other content runs without a hiccup? Is it fair for Comcast not to inform their customers that they chose to intentionally lower Netflix traffic capacity in such a case?
Why do they think they have to regulate it now?
The FCC has been trying to enforce the idea of a neutral net for many years. Had the ISPs made moves and changed practices to uphold a neutral internet service, it is likely the FCC would never have passed a rules change to re-classify them as Title II common carriers in order to have the necessary legal power to force them to behave. Put another way, for about a decade now the FCC has been counting to three like a parent trying to get a misbehaving child to correct themselves before more drastic action is required. They didn't, and the FCC reached three.
Don't get me wrong. I am not foolish enough to think more government intervention is automatically a good thing, and I by no means think the FCC should be given a blank cheque on this issue. But so long as they stick to what they have already been trying to enforce, and do not start making Chinese-government (or UK, for that matter) types of noises about internet content, I think this is a good move. I look forward to the release of the full draft of changes so that a more educated debate of its merits (or flaws) can be had.
Tuesday, February 24, 2015
T'Rex Update: Batteries, Brains and Wires Oh My
Motivation: So I have finally gotten the Arduino to talk to the Orion RoboClaw and the Raspberry Pi. For a long time I was banging my head against a poor-documentation issue. The RoboClaw examples all show a setup that only sends two parameters when configuring the comm to the controller. However, if you match the example, an error is thrown saying it expects 4 parameters. I couldn't figure out what the additional parameters were being used for by going through the .h file, so I tossed in a couple of values. Turns out the 3rd parameter is the timeout, and setting it to 0 means the damn thing never has a chance to listen to any commands. It also looks like if you use the RoboClaw setup it only uses the UART Tx/Rx pins (0 and 1) on the Arduino. Fun times. Right now I have a bastardized solution where the USB serial from the Pi and the RoboClaw comm are both happening on these pins. This means I can sit in a terminal window (or the serial monitor in the Arduino IDE) on the Pi and send commands to turn the treads. As I am connecting to the Pi via a VNC remote desktop session from my Mac, this amounts to a wireless remote-control solution. Eventually I will probably ditch the VNC portion and build a Mac (or possibly iPad) application that receives telemetry rather than using a full-on desktop session.
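For anyone hitting the same wall, the working shape of the sketch ended up looking roughly like the snippet below. The constructor line is the part that bit me: the exact argument list depends on which version of the RoboClaw library you have (mine insisted on more arguments than the examples showed), so treat those values as placeholders and just make sure the timeout is not 0. The drive and voltage calls are the standard packet-serial commands from the library.

```
#include "BMSerial.h"
#include "RoboClaw.h"

#define ROBOCLAW_ADDRESS 0x80     // default packet-serial address

// rx, tx, timeout -- placeholder values; match them to your library version
// and wiring. The key lesson: a timeout of 0 leaves the controller deaf.
RoboClaw roboclaw(0, 1, 10000);

void setup() {
  roboclaw.begin(38400);          // must match the baud rate set on the controller
}

void loop() {
  // Drive both treads forward at about half speed (valid range is 0-127).
  roboclaw.ForwardM1(ROBOCLAW_ADDRESS, 64);
  roboclaw.ForwardM2(ROBOCLAW_ADDRESS, 64);
  delay(2000);

  // Stop.
  roboclaw.ForwardM1(ROBOCLAW_ADDRESS, 0);
  roboclaw.ForwardM2(ROBOCLAW_ADDRESS, 0);

  // Read the main battery voltage back over the same link (the library
  // versions I have seen return it in tenths of a volt).
  bool valid = false;
  uint16_t packVoltage = roboclaw.ReadMainBatteryVoltage(ROBOCLAW_ADDRESS, &valid);
  (void)packVoltage;              // send it back to the Pi as telemetry, etc.
  delay(2000);
}
```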
I also have the camera streaming data directly to the Mac via netcat, which I finally got working (thanks to This example). Back to motion: I built a function that takes a power input and then steps toward it on an interval (currently 100 milliseconds). This makes for much smoother speed transitions. If the power number is higher than the current one it ramps up, and if it is lower it ramps down. This is also set up as a clock watch rather than a delay, so it does not interfere with other tasks. Right now I only have it doing forward, so I still need to build similar functions for reverse and the turns. If you are interested in the clock-watch approach as opposed to using delay(), check out the Arduino multitasking lesson on adafruit.com. At heart it is just watching the return value from millis() and keeping a couple of variables to track the gap since the last time you checked. When the delta is greater than your desired interval, do something. Pretty easy to do, and I wish more of the Arduino examples out there used this methodology rather than the rampant delay() usage you see. The end of that lesson also starts you down the path of classes, which is also a good thing to learn here. It isn't that the Arduino is multitasking, it is just that it can run through a LOT of instructions if you let it. 16MHz is FAST. Each clock cycle is 1/16,000,000th of a second. A 1000ms delay wastes 16,000,000 clock cycles that you could be using to do something useful.
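If that description of the clock-watch pattern sounds abstract, the whole skeleton really is just this (the interval value is a placeholder):

```
// Non-blocking "do something every interval" skeleton -- the pattern from
// the Adafruit multitasking lesson, boiled down. No delay() anywhere, so
// loop() stays free to service serial, sensors, etc. between steps.
unsigned long previousMillis = 0;
const unsigned long interval = 100;   // milliseconds between steps (placeholder)

void setup() {
  // pin setup, Serial.begin(), etc. would go here
}

void loop() {
  unsigned long now = millis();
  if (now - previousMillis >= interval) {
    previousMillis = now;
    // take one ramp step / read a sensor / whatever the periodic task is
  }
  // other work continues here on every pass through loop()
}
```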
Anyway, if you saw my Facebook post (IT'S ALIVE) you may have noticed the tracks were not in sync. Out of a valid range of 1-127 it was taking until 35 to get both treads moving; one started around 30. I had to take them apart and figured out one of the bogies was pretty gummed up from my initial run on the RC track. I ended up using some moly dry lube that I have for my road bike to lube up all the bogies, drive wheels and gearboxes. The result is that I can now get consistent tread rotation at around 12, and they now seem in sync.
In addition to motivation, I also finally got my pan/tilt servos situated and calibrated to center, and wrote a couple of routines for moving them through their respective 180-degree arcs of motion, also from the serial connection from the Pi. Next, I need to figure out how to mount the camera, ping sensor and maybe the microphone as a pan/tiltable sensor head. The Arduino sketch is up to 10k of 32... well, really 28k available (using a Pro Trinket from Adafruit in the final build). I should be able to handle the rest of the movement and sensors in that.
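Stripped down, the center-then-sweep routine is nothing more exotic than this (pins and timing are placeholders, and for brevity it uses delay(); the rover version uses the millis() pattern above):

```
#include <Servo.h>

Servo pan;
Servo tilt;

void setup() {
  pan.attach(5);      // placeholder pins -- match your wiring
  tilt.attach(6);
  pan.write(90);      // start both servos at their calibrated center
  tilt.write(90);
  delay(500);
}

// Sweep one servo through its full 0-180 degree arc, then return to center.
void sweep(Servo &s) {
  for (int angle = 0; angle <= 180; angle++) {
    s.write(angle);
    delay(15);        // small pause so the servo can keep up
  }
  s.write(90);
  delay(500);
}

void loop() {
  sweep(pan);
  sweep(tilt);
}
```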
Another bonehead lesson (and why I am overloaded on serial comm via the 1,0 pins) was that BMSerial.h and SoftwareSerial.h were conflicting.... because they do the same damn thing. If anyone else runs across this while trying to set up a serial line separate from the one being used to communicate with the Orion RoboClaw motor controller, you just need to make a second serial setup with BMSerial... the example is always myserial(tx,rx), so just make myserial2(tx,rx) or whatever name you want to use. Or if you are using the roboclaw (tx,rx, timeout, ?) call you can do an independent BMSerial creation, just don't use the 1,0 pins. Cleaning that up and implementing the multiple serial comm lines is probably the next thing on my to-do list.
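In other words, the fix is just to stop mixing the two libraries and create a second BMSerial object on different pins. A tiny hedged example (pin numbers are placeholders, and double-check the rx/tx argument order against your copy of the library):

```
#include "BMSerial.h"

// One serial line stays on the hardware UART pins (0,1) for the RoboClaw;
// this second, independent BMSerial line handles Pi <-> Arduino chatter.
// Pin numbers are placeholders -- verify the constructor's argument order
// in your copy of the library before wiring anything.
BMSerial piLink(10, 11);

void setup() {
  piLink.begin(9600);
}

void loop() {
  if (piLink.available()) {
    int c = piLink.read();
    piLink.write((char)c);   // simple echo to prove the second line is alive
  }
}
```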
Power: I learned a lot about power on this thing recently... and almost fried my Uno board in the process. I found a couple of projects online showing how to do a voltage divider circuit that lets me monitor the cell voltages of the LiPo battery I was using via the balancer connector. This way I would not have to rely solely on the RoboClaw shutoff or a LiPo monitor widget; I could actually send back the pack and individual cell voltages via telemetry. Well, this worked great for one LiPo pack. I am planning to run this rig on two 2S 7.4V 5000mAh packs wired in series (14-16V or so). So after I had it working for one pack I wrote the routines for monitoring two packs at once, then wired (make that tried to wire) the second pack in. I found out the hard way that when wiring the second ground from the balancer connection to the common ground I created a mismatched parallel configuration.... *sparks, pop, SHIT*. Basically this created the wiring equivalent of division by zero, where infinite current tries to flow between the two batteries (and through anything they are wired through) to get them to match. This also blew a hole in my theory that the balancer wiring was protected from the full current of the batteries... um, NOT. Anyway, it seems I only did some minor damage to the Uno board. It now reads a bit high on the analog inputs, but so far so good.
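For the curious, the monitoring side of it is just analogRead() plus the divider ratio. A sketch of the idea (the resistor values and pin are assumptions, not my exact build... and obviously take the wiring lesson above to heart before copying anything):

```
// Read one LiPo cell tap through a voltage divider on an analog pin.
// R1/R2 and the pin are illustrative, not my exact build -- size the
// divider so the tap voltage stays under the Arduino's 5V analog limit.
const float R1 = 10000.0;   // ohms, from the balance tap to the analog pin
const float R2 = 10000.0;   // ohms, from the analog pin to ground
const int   CELL_PIN = A0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  int raw = analogRead(CELL_PIN);                 // 0-1023 reading
  float pinVolts = raw * (5.0 / 1023.0);          // voltage at the divider midpoint
  float cellVolts = pinVolts * (R1 + R2) / R2;    // undo the divider
  Serial.println(cellVolts);                      // telemetry back to the Pi
  delay(1000);
}
```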
So after getting a clean pair of shorts (2S 30C LiPos on a mission to discharge are no joke) and doing some belated googling, I learned a couple of important things. The first was that bit about the stupidity of the wiring I tried. The second is that RC LiPos are more dangerous than I was already aware, as they typically lack any kind of built-in protection for the cells. This lets them pack the punch they do for those little remote-controlled rockets, but it also puts them at a much higher risk of a failure scenario where fire, or worse, happens. In the voice of Tom Hanks in The Polar Express: "Lesson.... learned". I hadn't been planning to try to do this. I ordered a couple of voltage monitors that beep when the cells get low as plan A. However, the great deal I got meant waiting on an order, and I got bored waiting for them to show up, and thus plan B was born. Anyway, I do still want to figure out how to do this. I am sure I could do it with a relay, but that is probably overkill. For now I will probably be happy with just getting the pack voltage from the two-way packet mode of the Orion RoboClaw controller.
Brains: Move over Raspberry Pi B+ and make room for the Raspberry Pi 2. Got one through Adafruit for, I think, $40-something plus shipping. It will be nice once the launch fuss settles down and you can reliably get these for the advertised $35 price. I may load everything onto the B+ once I button it all up. But the Pi 2 has really sped up all the Arduino coding I am doing on board the Pi. I think my end configuration will use one of the HAT prototyping boards from Adafruit with a Pro Trinket mounted. I haven't decided yet if the Trinket will be the 3.3V version (direct comm to the Pi) or the 5V version (which will require a logic level shifter, which I already have).
Wednesday, February 18, 2015
Raspberry Pi 2: Tomorrow has come while no-one was looking
The future is here. Is it the Apple Watch? Apple Car? Flying car? Google Glass? Oculus Rift? HoloLens? Project Loon? Google Fiber? Tesla Model 3, Hyperloop, SpaceX Falcon Heavy with reusable boosters?
No.... it is the Raspberry Pi 2.
Surely I am being overly sensational. Surely this is an overstatement of the importance of a not-for-profit educational geek toy that even now most people probably have not heard of, even though it may well be the most successful computer ever made in the UK; or, if they have heard of it, they have no real idea just what the heck a Raspberry Pi is.
Ok, perhaps it is. However, I ask you to bear with me for a minute as I explain. The Raspberry Pi 2 and its kin like the Intel Edison, BeagleBone etc... are all in this very interesting space: truly inexpensive computing. The Raspberry Pi 2 is the first one 'across the line', so to speak, in my opinion.
- $35 - one computer board with 40 GPIO pins, a quad-core 900MHz ARMv7 chip, HDMI output, 4 USB ports, a wired Ethernet port, and 1GB of RAM
- $20 - one 16GB top-of-the-line SD card (you don't really need to spend more than $10)
- $10 - wireless USB adaptor
- $10 - Pi Case
- $6* - Amazon basics HDMI cable
- $23* - Amazon Basics wireless keyboard and mouse (14 for wired)
- $150* - Basic HD Monitor
- $10* - 2amp 5volt usb micro power supply
*lots of folks have these laying around from older systems/phones/tablets etc...
Total cost if you have none of the * - $264
Total cost if you have the * stuff already or plan to use it "headless" - $75
In 1993 my mother took out a loan against our savings account to buy a 486 desktop computer. I am talking around $2000 of VGA and CD-ROM goodness. In today's money that would be a computer costing over $3,000, roughly 12x the cost of the Pi system I laid out above.
A 66MHz 486 was a 32-bit system capable, in theory, of around 66 million instructions per second, and ours boasted a 'massive' 32MB of RAM if memory doesn't fail me. Graphics were impressive at VGA (640x480).
The Pi 2 clocks 900MHz on 4 cores. Simple math says 900 million instructions * 4, or 3.6 billion 32-bit instructions per second. Granted, multi-core performance is a much more nuanced issue, so let's be conservative and call it 1 billion 32-bit instructions per second vs 66 million. That is roughly 32 billion bits being pushed per second vs about 2 billion.
1/12th the cost, and roughly a 15:1 performance increase even on that deliberately conservative bit-pushing metric (closer to 50:1 on raw clock math). That is before you get into things like increased storage (16GB was a LOT of space in 93, and 1GB of RAM was just silly talk), graphics capability, wireless communication and power usage. Or built-in software like Wolfram Alpha, Mathematica (worth more than this whole system if bought separately) and built-in development tools, with an ease of use and freely accessible tutorial information that was simply unthinkable in 93.
But hey, superpowered computing in a small, inexpensive package is not really new. After all, odds are good that if you are reading this you have a smartphone, and possibly you have had several. If it is a current top-of-the-line phone, it is quite a bit more powerful than the board I am describing. It isn't just the low cost and computing power that make this board (and others like it) special. It is those 40 GPIO pins, the camera connector, the USB ports, the display connectors (HDMI and another more special-purpose one). Why are those so important that I would make such a grandiose statement above?
Let us fast forward a few years to 1999. I am in college studying computer science and I take a class in what I remember as physical computing (can't remember the actual title). That is, taking these abstract calculators and using them to do real things in the real world, like recognise a picture, turn a motor, or read a button push. To say I was interested is a mild understatement. But I was truly disappointed to discover how freaking expensive and yet crude it all was. You could spend as much on controller boards and interface cables as you could on the computer. And if you were not very careful you could easily fry your multi-thousand-dollar system. The main component we used that semester was a board that amounted to a microprocessor tied to a serial port, for the bargain price of $500 (you can get a better version in an Arduino Uno for less than a tenth of that today). $500 plus, say, a $500 old used 386/486 system plus peripherals (keyboard, mouse etc...) was a lot of money for someone to throw at an adventure that could very easily lead to letting the 'magic smoke' out of all that expensive electronics. In some ways the cost was good, as it taught us all caution. But looking back on it, I think the cost was the worst element of the class, and that same painful cost of experimenting with any physical computing interface was being felt all across the nation, hell the world, at the same time.
That
Has
Completely
Changed
Granted, this is not a crisp line. This change has been in the works for a few years. Physical computing experimentation via cheap SBCs like the Pi and microcontrollers like the Basic Stamp, Arduino etc... has been steadily ramping up on what looks to be the early part of an exponential curve, collectively referred to as the Internet of Things, or IOT. I mark this spot as the point where the curve really starts taking off, and before long people will be aware of technology like the Raspberry Pi just as they are aware of smartphones. I point to the Raspberry Pi 2 as the IOT equivalent of the original iPhone in the smartphone world. MakerBot's Replicator is probably the IOT's Apple I/Macintosh. The Basic Stamp and Arduino both fit in the picture as prominent players as well. Remember, Apple didn't invent the smartphone or the personal computer; it got the world to fall in love with them and start buying them en masse.
Why do I think the Pi 2 is so special?
Because the Pi 2 marries practical, understood computing and the IOT. In doing so it bridges the world most of us live in, where a computer has to just work, and the burgeoning maker/IOT movement, where a computer has to interact via more than just a keyboard, mouse, touchscreen and internet connection. The previous Pi B+ board that came out last year was a marvellous headless Linux system for things. As a computer you would use in general, it was a curiosity. Running the X11 desktop was akin to running Quake on (insert some random highly inappropriate hardware) just to show that you could. You could do it, you could sort of run Minecraft, dev tools, a web browser or Mathematica etc.... But it was a masochistic exercise in patience to do anything resembling real work. Compared to that... I am writing this post in a browser on my Pi 2. I did all my research in additional tabs. I have multiple Arduino development windows up in the background. top is running in one of several SSH sessions logged into the box, utilization is rarely peaking above about 20%, and load times are a mild annoyance similar to web browsing circa 2000-2002 rather than an impossible obstacle. In other words: It. Is. A. Computer. And unlike most computers it is designed from the outset to be friendly to interfacing with the real world rather than just cyberspace. Now you can access all that wonderful educational content directly on the device while working with it. With a Pi and an Arduino you have the IOT equivalent of duct tape and baling wire. You could in theory take $1000 in stuff and create the next big IOT thing, sell it to the world via Kickstarter, and be a massively successful company in almost no time at all relative to history, if you have the right idea and present it the right way. That is an investment of about a third of what my mother had to spend (in today's dollars) to get us a working early-internet-era system. I am pretty confident that someone is out there doing it right now, and in 5-10 years' time you will think of them like you think of Jobs, Gates, Musk etc..., and when you read their history you will read how they got started with this kind of device. Guess I will have to check back in on this post in a few years and see if I was nuts or not :-)