Sunday, March 22, 2015

Review: Amazon Echo - my new idiot toy

So, as I mentioned a while back, I ordered one of Amazon's Echo speakers. It arrived a bit earlier than expected and has been sitting around the house for a couple of weeks now, and as the title might suggest... it has some issues.

The Good:

I am not sure what the gripes are regarding the speaker quality. For a single-point speaker it is plenty adequate. It isn't a super mega woofer setup that will blow the clothes off any careless passerby or anything, but it gets loud enough, has reasonable bass levels for its size, and is generally free of distortion. A home entertainment system replacement it is not. So long as you keep things in perspective this unit will not disappoint. That said, seeing as one of its most useful features currently is playing your music collection or Prime Music, it is at best average compared to the higher-end Bluetooth speakers like the Jawbone Jambox.

I like the design. Just a simple black cylinder with a blue halo that lights up when it detects an interaction attempt.

Voice recognition over its own noise. Alexa is pretty good at listening to you if the only things going on are the Echo making noise and you trying to talk to it. Introduce a third-party sound source, be it a TV or other people in the room, and well... it works about like you would expect if you have ever tried using audio interfaces in noisy environments. I had expected this to be a bit better based on all the discussions about the multiple microphones, but more on that below. Something that takes getting used to is the fact that you do not need to wait for a prompt; in fact, if you do, it can often lead to the prompt ending prematurely, as Alexa seems to stop listening quickly if input is not detected. Basically you have to trust that when you clearly say "Alexa" (or "Amazon", the only other wake word available) you can naturally continue, just as you would when talking to someone you assume will pay attention once they hear their name. While there are some caveats, I have to say that generally speaking I was pretty impressed with its recognition ability... and that was without any training, something I have been meaning to take the time to do.

Audio weather status is nice and seems to work reliably.

Daily status briefing is a good idea, though I haven't really used it much.

The So-So:

Music control is... well, let's just say a bit rough around the edges. For example, by default any music played from your library for, say, a particular artist is played in shuffle order. To get it to play in order you need to request the album. That might seem simple enough, but it is pretty common for bands to have self-titled albums, i.e. an album named the same as the band. Alexa does not understand this concept, and it just so happened I had a couple of albums with this issue that I kept trying to get to play in order. I had to resort to the application... speaking of which...

The app is not good, and it is not bad. In fact, I'd have to say it is by far the most effective way to interact with Alexa, and that is the problem.

Ordering from the Echo. Yes, you can do it. Yes, I did it once. Not really sure I like the idea with a soon-to-be more astute young person around. One-click is dangerous enough; voice ordering could be very bad.

Timers. One of the most useful functions of the Siri, Cortana, or Google voice interfaces, and always there if you are within shouting distance. The problem? Unlike a timer/reminder you set on a phone, there is no way to add context to the alarm. Something I tried to do was set an alarm with the phrase "remind (name) that it is time to go to bed at 8:00". This set an alarm for 8:00, but there was no information provided as to why the alarm was going off. This is something you can do with Siri, and the info you provide will pop up on screen with the notification/alarm.

The Bad:

The inability to link music from local stores is a shame. You should not have to store music on Amazon to utilize Alexa's music streaming ability. Amazon is about as bad as Apple used to be in the early days of iTunes at ignoring the possibility that someone may have music not curated by their system. Considering Fire devices and Android devices can link to lots of music services, it is a shame that the Echo does not take advantage of more than a couple of internet radio solutions beyond Amazon's own Prime service.

Voice recognition with multiple sound sources. So... I have to say the idea that the Echo had multiple mics seemed to indicate it could possibly do a neat trick: paying attention to voice commands coming from a specific direction and ignoring potential commands emanating from a location different from the one that initiated the attention phrase. Ummmm... not the case. A TV behind the unit, or people off to the side and well separated spatially, all confuse the crap out of the Echo if they are making noise. Since the cancellation of any noise it is making itself seems to work, I am thinking this thing REALLY needs to be able to interact with the Fire TV. It would be nice if it could pull the same trick off when the Fire TV was your TV sound source. For the issue of multiple folks talking, they really need to sort out a way to do more sophisticated localization of the source of the command phrase, and/or separation of vocal signatures so it can follow the audio coming from a specific source in a noisy environment. I have some hope this is something that can be added via software in the future, provided the microphones are already suitable for accurate triangulation of multiple audio sources. Why did I think they had done more on this front? Because the device really is voice-prompt-controlled first. A truly disappointing experience was being told there were actions that could only be achieved via the application. Voice prompt first indeed.

I can create a to-do list. I can add to the grocery list. But I can't delete things; that takes the app. In fact, with the exception of "stop" for any current activity, it seems Alexa does not do very well at all with negatives. For example, I have been using Alexa to play "classical" music. This frustratingly causes it to randomly interject tracks from the Phantom of the Opera album I have in my library, and I can't figure out a way to tell it Phantom of the Opera is not "classical" music. Saying something like "Stop playing Phantom of the Opera" gets me "playing tracks from Phantom of the Opera". Perhaps if I go into the app I can re-classify the album.

Also, continuing on the subject of classical music: unless you are a true initiate of the subject matter, it is damn near impossible to get a specific track to play. For example, you cannot just ask "Play Brandenburg Concerto #5". You have to know something like "Brandenburg Concerto #5 in B flat minor by (some performance group/artist, etc.)". Having such a long string almost ensures there will be some misunderstanding in the voice recognition process, or you will remember the wrong key or the wrong artist or some such.

The lack of integration with the Fire TV. Really, this seems to be a no brainer to me. The Echo should provide an extended interface with a Fire TV.

Conclusion:

I only ordered it because as a Prime subscriber I could get it for $100, and I figured that if nothing else it would be worthwhile as a Bluetooth speaker; at this point, that is about where I am with it. It certainly is not worth the full asking price of $200, as in that range there are definitely better speakers available. Does Alexa make it worth more? As yet I'd say no. I think it would be worth more if it had a big honking battery stuffed into it and lasted forever as a Bluetooth speaker when not plugged in. But, as I pointed out in my original post on the subject, improving the voice interface capabilities is most likely not hardware dependent. I have a growing suspicion that the coming Apple TV upgrade, which will very likely add Siri and some additional HomeKit integration, is going to give Alexa a rather uncomfortable spanking. I hope Amazon is working on a way to link up Alexa and Fire TV; otherwise I have a rather awkward $100 Bluetooth speaker on my hands that occasionally thinks I am talking to it. It is probably a good thing Amazon doesn't allow users to redefine the wake word, as I would probably choose something very derogatory at the moment.

Virtuaugmentality: The coming virtual and augmented reality revolution?

Oculus Rift, brought to you by Facebook. HoloLens, brought to you by Microsoft. Magic Leap, brought to you by Google. Gear VR, brought to you by Samsung, powered by Oculus, bankrolled by Facebook. Morpheus, brought to you by Sony.

The list goes on, but the odds are rather good that in that list lies technology that will be rather big in the next 8-24 months. What is the big deal? If you are of my generation or older you may have a better perspective on this than younger folks accustomed to high-end graphics on computers. The last time we saw this level of shift in computing display technology was either when the computer made it to the TV screen, or when it graduated from green text to graphical user interfaces. If you think back to the original Macintosh, with its tiny black-and-white screen, GUI-based Mac OS, and mouse, it was far from the first time such technology had been built. The big deal was making it mainstream. Going back to Xerox PARC, such interfaces had been toyed with for quite some time. This is also the case with virtual and augmented reality. Academic and other research endeavors have been pursuing the concepts of virtual and augmented reality displays for decades. The problem has been finding a suite of technology that was affordable enough, and designs that were polished enough, to try and take it mainstream. Well, it seems we are about to cross that threshold.

A sampling of stories (there are thousands more...)

Are there hurdles remaining? Yes. Magic Leap sounds like it is still tethered to a workstation, for one thing. But Microsoft seems to be using similar light field technology, and they appear close to a release. They have said HoloLens will be out with Windows 10. At face value that would seem to mean later in the summer of this year (2015). Digging a bit deeper, it sounds like 2015 will be the year of the dev kit for Microsoft, and 2016, along with a lot of the other players mentioned above, will be the year of the consumer onslaught for this new tech. Oculus Crescent Bay (currently demo only) seems to have passed the major milestones set forth by founder Palmer Luckey: frame rates are above 90 Hz and latency is at or under 5 ms.

I am not sure how fast this will move. Games will take a quick lead as Oculus, Samsung's Gear VR, and Sony's Morpheus venture into the desktop, mobile, and console worlds of immersive gaming. The attractiveness of this interface is hard to deny for racing games, flight simulators, and probably other genres. But work will not be too far behind. Microsoft has BIG plans for HoloLens. If you buy what Nadella is selling in that Wired story, they truly do see this as the next step in computing.

3d has been an on-again, off-again fad since the '70s use of those wonderful blue and red glasses. It seems that turning single 2d screens into 3d displays is forever destined to be a niche capability. But driving stereoscopic 2d screens is about to have its chance to shine... and on its tail is the concept of "light field" displays. The difference is fairly simple in concept... but the details are mind-bending. The easier of the two is what Oculus is doing. Imagine, if you will, a typical 3d-rendered scene: the computer has a 3d coordinate system driving the elements on the screen, and mathematically it understands those things to exist in that coordinate system. The render to the screen, however, is a 2d image from a single perspective. The ultimate limitation here is that with a static shot there can be no parallax (the visual effect of having two eyes that allows us to perceive depth). However, if you render the scene as if from two offset points of view (separated by the distance between your eyes) and then show each image to a separate eye (on two individual screens, or as two independently viewed images on a single screen)... voila. Your brain now thinks it is seeing a three-dimensional scene. The rub is that the screen you are looking at is a fixed distance away, so the physical focus of your eyes is fixed while the content itself appears to have depth. The biggest issue here is the user's inability to focus within that field by choice. Look outside and everything is not in focus; only what you choose to fixate on is. But in the case of the rendered screen it is possible for the entire depth to be 'in focus'. Rendering engines can provide some tricks in this area, but it is not the same as your eyes physically altering their shape to bring elements at different distances into focus.
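As a toy illustration of that two-offset-viewpoints trick: the per-eye camera positions are just the head position shifted half the interpupillary distance along the head's right-pointing axis. The function name and numbers below are mine, purely for illustration, not any headset's actual API.

```python
# Toy sketch of stereoscopic rendering's camera setup: the scene is
# rendered twice, from two viewpoints separated by the interpupillary
# distance (IPD). Values and names here are illustrative only.
IPD = 0.064  # ~64 mm, a commonly cited average adult IPD

def eye_positions(head_pos, right_axis, ipd=IPD):
    """Shift the head position half an IPD along the head's
    right-pointing axis to get the left/right eye camera positions."""
    half = ipd / 2.0
    left = tuple(h - half * r for h, r in zip(head_pos, right_axis))
    right = tuple(h + half * r for h, r in zip(head_pos, right_axis))
    return left, right
```

Each eye's image is then rendered from its own camera position (with a matching projection), and the small difference between the two images is the parallax the brain reads as depth.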

Light field technology, on the other hand, is a bit different, and I'll be damned if I understand all of it. But the bottom line appears to be that it literally recreates the effect of light reaching your eyes from distant objects, such that YOU provide the focus and your eye shifts naturally. An early implementation of the premise of this technology was the Lytro camera. While it wasn't a 'display' tech, it was the first commercial product I am aware of that used the concept of "light fields" where imagery was concerned. Light field displays are essentially working to provide, in real time, the information that Lytro cameras capture. In the current Lytro model you bring up the information and it displays a chosen depth of focus controlled by the user interface; i.e. you can re-focus the image, but you are always viewing it on a non-light-field display. If I understand this right, then using a light field display like HoloLens or Magic Leap would allow you to look at an image from a Lytro camera and your EYES would provide the means of shifting your depth of focus, rather than a mouse or keyboard command.

This has a few implications. One: it is possible, for example, to take into account someone's vision correction needs and present a light field to them that their brain will then see as 'in focus'; i.e. dispense with the need for glasses. In effect you apply your prescription information to the display you use, rather than have an image presented that you need your glasses to see clearly. The simpler technology used by Oculus requires you to keep your glasses so far, though it would seem you could swap prescription-ground lenses into those headsets in place of the 'standard' lenses, which assume the user needs no vision correction. In one case sharing your display could require physically swapping out lenses (or wearing glasses with it), and in the other it could involve switching software settings (say, via a phone-based NFC exchange). Also, because with this tech your eyes are changing focal points, it could alleviate the vision stress related to maintaining a fixed focal range for extended periods of time, the bane of heavy readers and computer users for years.

So, you may be asking yourself... it's all fine and well that this helps NASA explore Mars, but what exactly can it do for me? OK, time for a trip into imagination land.

For that I think you have to look pretty closely at the available demos and sprinkle in a little imagination. I for one see a lot of promise in a couple of elements of Microsoft's HoloLens demo clip, where they show a couple of interesting concepts. First is the notion of pinning a video to a physical location in your house. This suggests there may no longer be any need for multiple screens located around the house. That is already less common due to mobile devices, but this could take the concept to "11". Instead of having to look at a phone, you will just be able to designate an empty section of wall to display content on; if you are looking at it, it is there, and if you look elsewhere you may only hear it. Beyond the cool factor of having a 'virtual screen' managed by magic in your 'augmented reality' headset, this is a good example of how this technology could in theory obsolete the traditional monitor. If you are familiar with the concept of multiple desktops, this may seem an intuitive leap into the future. If not, you are hopefully in for a treat. Multiple desktops allow you to have lots of windows open and organized across multiple virtual screens without having multiple physical screens. Of course, you can only look at one physical screen at a time, and switching back and forth between these virtual desktops is not a natural skill. In fact, I see it as the GUI equivalent of the command line. The basic concept of a GUI is that you SEE all of your interface elements, where in a command line you often have to "know" them; there are no menus unless you know the command to list the menu, for example. Virtual desktops often require you to know they are there in order to know you would want to navigate between them. Now... imagine if expanding your desktop were as easy as turning your head: your monitor moves with your gaze and reveals content you have arranged to the left or right. That is something this augmented reality concept HoloLens seems to make possible. You could line up a number of video feeds and/or application windows and just turn your head to choose between them, with your focus for sound driven by the combination of your head and eye angles.

The second element that changes things is new ways of manipulating 3d content. A computer mouse is a Cartesian-coordinate-based device. You move in the x and y axes only, and all interactions on the screen are based on this fact: whether moving a pointer or rotating an object, you are only sending at best two pieces of information, an x and a y coordinate change. Even using this to navigate an x- and y-axis-based display (i.e. a 2d GUI desktop) has proven to be a somewhat unnatural interface that has to be learned. Ever tried to help someone learn to use a mouse for the first time? There is not much innate about it; it is a learned skill that requires the user to equate movement of one object to a variety of possible movements and actions on another element. Compare this to introducing someone to a touch screen tablet. When you then add a z axis to the display you are trying to manipulate, things just get worse. This is the bane of CAD software and any 3d-based computer design software. The idea of reaching your hand out to interact with a computer-generated object just as you would a physical object is potentially huge. This is taking the idea of a touch screen and adding a z axis to it so that you can literally reach into the scene. Why is this big? Look at a group of toddlers. They can, with very few exceptions, pick up and place blocks to build physical objects. Add magnets and they quickly learn new ways in which they can build things; make them click together (K'NEX, Lego, Duplo, etc.) and they learn that too. Now, I challenge you to go get a random Lego kit and build it, then try to design the same 3d form in ANY computer program. What a toddler or any average adult can do with almost no training is a painful process on a computer. Now go watch the Microsoft HoloLens reveal and I think you will get a feel for how that may be about to change. You pick something up by reaching out and 'grabbing', then rotate/tilt it just by moving your hand as you would with a real object. Just by adding that natural ability to manipulate in 3d, you open up some as-yet-unreachable computing capabilities for the masses.

The next thing that excites me is the idea that image processing is reaching the point where it can actually interpret the world around it in somewhat the same fashion that we do, in terms of determining the 3d structure of a given area. The concept that a head-mounted unit can scan your room in real time and correctly animate a Minecraft session, or a character jumping up on your furniture in a realistic manner? That has implications far, far beyond just augmented reality. It means robots can navigate better for cheaper, once this kind of tech encounters the kind of economy of scale that is only possible with mass adoption. Am I speaking Greek? How about this: it is the same kind of tech that makes a driverless car possible. If it is about to launch in packages that can fit on your head and run on batteries for significant amounts of time, the level of tech running on autonomous cars probably reached or passed this quite a while back, as they have far fewer size and power constraints than a personal device like these.

FPV control of devices is going to be a whole new ballgame once you combine the concept of light field photography with light field displays. Take HoloLens and its cameras, mount them on a quadcopter, wave a magic wand at the data link needed to support this in real time, and have fun. Wow.

On the more salacious side... the pornography industry is going to have a field day with this stuff. Keeping it more PG-13 than X, think of, say, the average 'strip club' scene in your average cop drama. Now imagine being able to actually walk around in that scene yourself. Webcam performers providing adult 'shows' may well end up being first in line for the technology to create a live 3d scene. On the more G end of the scale... imagine being able to have your WOW pet following you around in real time... kind of like taking Tamagotchi to the nth degree.

Taking the Magic Leap demo to an extreme case: take the current concept of a LARP, put one of these headsets on the head of everyone involved, and drive the experience from a shared world that overlays an appropriate physical location. Or in other words... don't just bring out your WOW pet to follow you around virtually. You could in theory use this to go into a WOW-like world. Obviously, worlds in which everyone is wearing this kind of headset, or something that could be superimposed over it, would work best :-). Not wanting to go to imaginary places? How about "visiting" real places? Take a large open space, mix in high-resolution LIDAR scans of a location, and you can now walk around "anywhere". There are some obvious issues when various topographies come into play; i.e. how would you, say, climb the steps of the Colosseum while walking around on a flat plane? Speaking of the Colosseum, or any similar set of ruins: how about if those computer-generated reconstructions could be overlaid on the real site as you walked around? See the Acropolis in Athens as it is today, and as it is believed to have been in the past. See scans of the real friezes stored in the British Museum in place on the exterior of the Parthenon. See a limestone-clad pyramid at Giza with a real-time depiction of sunlight interacting with the 3d model. See the gun crew of the USS Constitution in action when touring the gun deck. See the battle of Gettysburg when visiting the site. See the Normandy invasion when walking the beaches. See Dr. Martin Luther King's "I Have a Dream" speech overlaid on the present-day DC Mall. See the battle of Helm's Deep, or the battle of the Five Armies, unfold when visiting the Lord of the Rings filming locations in New Zealand. See a re-creation of (insert significant moment of choice) when visiting a major stadium. See the skeleton of a Tyrannosaurus overlaid with a depiction of a living, breathing creature while standing next to it.

These are all things that can be, and in many cases have been, done today with a computer. What a display sitting in front of you lacks is the sensation of being in the middle of whatever you are looking at. This is a missing dimension, and adding it will be a big deal if it is successful.

I freely admit that a lot of the above is unabashedly optimistic thinking on my part. I will leave the in-depth exploration of the flaws to those who like that sort of thing. I am well aware there are still problems to overcome. That isn't the point. The point is that a tipping point seems to have been reached, where the problems remaining do not outweigh the benefits of commercially exploiting what is already possible. As for the biggest hurdles remaining? Even if Microsoft has already successfully crammed all the needed horsepower into the package they are touting for HoloLens, this is not exactly a set of Ray-Bans (or stylish headgear of choice) sitting on your head. While it and Oculus are far better than the motorcycle-helmet-dwarfing abominations of 90's VR, this is still something not expected to be a constant companion a la a smartphone. The Oculus Rift and its ilk are even worse here than the augmented concepts, as they wall you off entirely from reality; these are not things you will use out in the wild. Most of the cases shown in demo clips, and the ones I talk about above, are fairly location dependent: you might wear one around the house, or at a specific kind of historic venue, or something similar. However, you probably are not wearing one of these down to the bar/pub. That said, it isn't inconceivable you might break it out at the coffee shop in place of a laptop. So this technology sits at an odd midpoint between, say, a smartphone and a desktop computer. I suppose the holy grail here would be if this tech could fit in a much smaller package. The true sci-fi level would be something that could work via contact lenses, or perhaps through some kind of bio-implant. Too far-fetched? Well, I for one think we might as well start figuring out what is over the horizon now, because it seems this tech is no longer over the horizon but is instead well in sight and rapidly approaching everyday life.

Update: T'Rex Rover and Orion Robotics 15a RoboClaw Review

All running off the battery... FINALLY. Still a lot of work to go.
So I have been plugging away on the rover. I got in some new things from Adafruit, so now I think I have everything I need to build version 1 of a complete rover controlled over WiFi, at least on the hardware front. The work is now moving more into the software side of things. Anyway, here is what is working so far:


  1. Pan and tilt: Ping-style sensor temporarily mounted. The base is velcro, but it proved pretty unstable while the servos were moving, so I added the clothespins while testing. Will need to figure out a better mounting solution...

  1. The distance sensor is... sorta working. Not sure if it is a bad sensor (cheap SainSmart unit) or a bad programmer at the moment. It is mounted with a rubber band, which I need to sort out at some point.
  2. Camera is throwing a stream up (that is the window on the monitor in the first picture) 
  3. The Arduino UNO is talking to the RoboClaw motor controller, and I have a serial command solution for fwd/bkwd and pan/tilt control
    1. backward is negative, -1 to -127
      1. The command takes a positive magnitude; the fun part was figuring out how to split a ramp that went from forward to backward or backward to forward. I.e. if I was going forward and wanted to go backwards, I didn't want it to just throw over to the backwards speed. It slows forward motion down to zero before increasing the backward rate to the requested setting.
    2. forward is positive 1-127
    3. 128 - 307 controls pan
    4. 308 - 487 controls tilt
  4. Reading values from the RoboClaw (more on this in a bit) so I can figure out encoder feedback for more accurate motor/movement control. Right now it is just giving me the main battery pack voltage.
  5. Multiple serial lines are up and confirmed; this stopped some crosstalk I was getting between controlling the motors and the Arduino sending data to the Pi.
  6. The big news: it is now working while self-contained. The batteries are connected to the motor controller, which has a 5v 3a circuit; I built a franken-USB cable to use that to power the Pi. Still using the UNO, so it is powered via USB from the Pi for now. I intend to move to the much smaller Trinket Pro and power it from the 5v pass-through pin on the Pi.
A picture of the growing rats nest
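For the curious, here is a rough Python sketch of the command ranges and the ramp-through-zero behavior described above. The helper names and step size are mine, not the actual rover code (which lives on the Arduino side); this is just the logic spelled out.

```python
# Sketch of the serial command scheme: one integer selects both the
# channel and the magnitude. Names and step size are illustrative only.

def decode_command(value):
    """Map a raw serial integer onto a (channel, magnitude) pair."""
    if 1 <= value <= 127:
        return ("forward", value)
    elif -127 <= value <= -1:
        return ("backward", -value)
    elif 128 <= value <= 307:
        return ("pan", value - 128)       # 0-179, a servo angle
    elif 308 <= value <= 487:
        return ("tilt", value - 308)      # 0-179, a servo angle
    raise ValueError("command out of range: %d" % value)

def ramp(current, target, step=8):
    """Step the drive speed toward target, always passing through zero
    before reversing, so forward->backward decelerates first instead of
    slamming straight into reverse."""
    speeds = [current]
    while current != target:
        if current > 0 and target < 0:
            current = max(current - step, 0)       # brake to zero first
        elif current < 0 and target > 0:
            current = min(current + step, 0)
        elif current < target:
            current = min(current + step, target)
        else:
            current = max(current - step, target)
        speeds.append(current)
    return speeds
```

For example, `ramp(20, -20, step=10)` walks 20 → 10 → 0 → -10 → -20, which is the "slow to zero, then speed up backwards" behavior the Arduino implements.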


What is left:
  • Still need to work on a serial line between the Arduino and the Pi GPIO pins. Now that I have the multiple serial lines set up, that should be easy.
    • That means I also need to spin up a serial connection program on the Pi instead of using the command line or the Arduino IDE serial interface. The ultimate goal there will be to connect to the remote system (MacBook initially; iPad and Android apps eventually).
  • Decide between the 5v Trinket Pro and the 3.3v Trinket Pro
    • 5v will require a logic shift to talk to the Pi, but definitely doesn't require one to talk to the servos, sensors, and motor controller. You can see this setup below on my new Pi hat: the Trinket is to the right, the logic shifter is to the left, and the camera ribbon goes between them. This is just mocked up right now. I still have to transfer all the wiring from the UNO to this board, if I can make it all fit. If you zoom in you can see my learning process on the solder joints; the hat is a bit scorched in places. No idea yet if I overdid it or not. Same on the Trinket and shifter...
    • New enclosure, Pi hat, WiFi (increased range, I hope), and camera housing. The 4-line logic level shifter and 5v Trinket are sorta mounted.

    • 3.3v doesn't require a logic shift to the Pi, but may require a shift for the motor controller or sensors. I have 4- and 8-line shifters if needed.
      • You can see the scorch marks... as Homer Simpson might say... "D'OH!!!"
      • If the cabling for the above doesn't work right, I may go to the 3.3v Trinket to simplify the wiring coming out of the hat. Supposedly the RoboClaw outputs are 3-volt-logic tolerant... so I shouldn't need the shifter for it. Not sure about the Ping sensor; I think it needs 5v...
  • Sort out some servo freak-outs. I'm getting some random servo jerks when sending serial in and out. Guessing I need to pull the pin low with a resistor, or maybe add a cap or two. Right now the signal wire is plugged straight into the Arduino, so I suspect the pin is floating and getting pulled high here and there. It's also possible I fried something on the Arduino, so I am going to wait until I have the Trinket wired in to try and nail this one down.
  • Start building a control program to replace using a VNC session to the Pi desktop. The eventual architecture will be:
    • Remote (laptop/mobile device) <-> Pi WiFi interface via bidirectional packet traffic <-> Pi to Arduino <-> Arduino -> RoboClaw, servos, sensors, etc.
      • Joystick or other analog process for motion control (WiFi from mobile?)
    • Pi camera -> netcat stream or HTTP webcast -> remote receiving system (laptop/mobile)
      • Joystick or other analog process for pan/tilt control (WiFi for mobile?)
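A minimal sketch of the Pi-side relay in that chain might look something like this. The UDP transport, the newline framing, and the port number are all assumptions on my part (the real control program is still to be written); the serial link is passed in as any file-like object, e.g. a pyserial `Serial` instance.

```python
# Sketch of a UDP-to-serial bridge: packets from the remote arrive
# over WiFi and are relayed to the Arduino's serial line, with any
# telemetry line echoed back. Framing and ports are assumptions.
import socket

def frame(value):
    """Encode one integer command as a newline-terminated ASCII
    payload for the Arduino to parse (framing is an assumption)."""
    return ("%d\n" % value).encode("ascii")

def run_bridge(link, udp_port=5005):
    """Relay UDP packets to the Arduino serial link and echo any
    telemetry line (e.g. battery voltage) back to the sender."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", udp_port))
    while True:
        data, addr = sock.recvfrom(64)   # e.g. b"135\n" -> a pan command
        link.write(data)                 # forward raw command to Arduino
        reply = link.readline()          # e.g. voltage readback, if any
        if reply:
            sock.sendto(reply, addr)     # echo telemetry to the remote
```

On the remote side, the laptop or mobile app would just send `frame(...)` payloads for each joystick event and display whatever telemetry comes back.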

RoboClaw 15a Review:

So far I have liked this controller lineup from Orion Robotics. Prior to these units (ranging from 2x5a to 2x60a) popping up, it seemed the only easily available higher-amperage hobby controllers that weren't R/C based were the SaberTooth modules you see on most of the various robot-building sites. Of particular interest to me were the various modes of control (USB, serial, PWM) and the encoder capability. With a decently encoded motor you can effectively turn relatively cheap DC motors into powerful stepper-motor/servo equivalents with these controllers, if you so desire. While that level of precision is not something I am after with my initial rover setup, it is something I want to explore down the road. Until this, it looked like I would be doing a separate encoding solution through a microcontroller.

It has so far survived my idiocy (see previous post for my infinite-current goof) with the exception that the voltage information is reading high. I got the same problem from the circuit I had on the UNO, and the RoboClaw was a part of that configuration, so I suspect a resistor got partially fried or something. Unfortunately I had no readings prior to that event, so I do not know if I caused it or if it was reading high (compared to my multimeter) originally. I suspect the problem was me... just no way to know for sure short of buying another one. 

My only real complaint thus far has been the somewhat sparse documentation. Orion's site forums are not terribly extensive and there seem to be few other places with info beyond some regurgitation of the built-in examples they provide, and... well, those are in some cases not easy to pick up. At least not for this particular noob. They consist of working examples more or less... but none of the 'why' or what the different choices mean. For those familiar with this kind of gear I am sure what is provided is more than sufficient to get your bearings. But if you are a novice these controllers may not be the best choice. I am not a complete novice... but my basic electronics knowledge is definitely lacking. You can read up on my various learning experiences with this controller in the previous posts on this project. Hopefully they will pop up in Google searches if anyone else hits similar snags. 

All in all though, the units are similarly priced to the Sabertooth controllers, seem to have more options and are built well. Definitely worth a look if you are in the market. 

Monday, March 09, 2015

Thoughts: Apple Spring Forward Keynote - Apple Watch and MacBook

So Apple did its big spring announcement and the short version is....

Pricing and final features of the Apple Watch - no real surprises, if you liked it carry on
Brand new MacBook design - It is exciting like the Ford Thunderbird remake

and the really big news, ResearchKit.... no seriously. But back to that in a bit, got to save the best for last after all :-)  

Apple Watch:

First let's talk price vs. function:

$350 - $10,000+ pricing. Difference in function? Zip. The utilitarian aluminum $350 Apple Watch Sport will do everything the 18kt gold Apple Watch Edition will do. The only difference is style and materials. Of the two upgrades, the truly valuable (from a functional standpoint) one, the sapphire watch face (available on the stainless steel models starting at $549), seems the more worthy. Gold vs. aluminum.... that has me scratching my head. Granted, I am not a watch person, so I place almost no value in precious metals for the sake of precious metals. I was kind of surprised the Edition really didn't have more to distinguish itself. In this I grant credence to the folks saying Apple is to some extent going after the "Vertu" crowd. For those not familiar, Vertu has made a business of selling ho-hum Android phones in fancy casings of precious metals and jewels. However, I would suggest it is more accurate to say that Apple is at risk of only attracting the Vertu crowd rather than who they really desire.... but who is that? Vertu is about folks that place form over function. They do not CARE that the Vertu phone does nothing new, only that it is more unique and 'stylish'. This is a bit different crowd than you see buying $10k watches, where, regardless of how much you value hand-crafted movements and polished micro gearing etc..., there is a functional difference between the higher end of the mechanical-movement watch world and the lower end. Really, stereotypes of silver-spoon folks with inherited fortunes and silly sports/pop/lottery winners aside, folks with money generally do not get that money by being stupid with money. Hence my surprise that Apple didn't do a bit more to separate the functionality. This would seem to leave naked snob/peer pressure as what will drive folks to go for the Edition over the more plebeian offerings. On the other hand, separating the rich from the poor functionality-wise would cause some development nightmares: the numbers that make the Apple ecosystem so attractive to develop for are kind of the antithesis of an elite device owned by only a select few. 

Now let's talk style vs. function:
Like it or not, understand it or not, agree with it or not, from a social standpoint there are circles in which someone would not be caught dead wearing a 'cheap' or 'common' watch no matter how utilitarian. It would be akin to things like a clip-on tie, tennis shoes under a suit, gaudy costume jewelry in inappropriate situations etc... they are signs of not belonging, of immaturity etc... and in such cases it would be better to go with no watch at all IF social opinion of you in that peer group is something which concerns you. One can argue all day and night for or against the merits of such circles and such values, but arguing will not make them go away. In this world one gets the sense that cellphones have invaded those environs somewhat in spite of their everyman nature, due to their indispensable nature. In this context it would seem that companies like Vertu have exploited a void left by the smartphone makers, sheathing that utility inside something more aesthetically pleasing (talking theory more than actual success here... but they are still in business...). Yet to me (someone who generally cares not about these things) the difference between a phone and a watch is simple. One you wear for the world to see whether or not it is in use. The other they only see when it is in use. As a result, one is a fashion item that is subject to all that goes with that; the other is not. I think Apple's bet is that Vertu is gilding the lily.... or perhaps more accurately... putting lipstick on a pig when they try and turn a purely utilitarian and largely hidden item like a phone into a fashion item. By comparison, it could be argued that the Apple Watch Edition is about having the necessary fashion IQ for an item that requires it in order to be used by a VERY desirable class of customer. Translation: the iPhone is used across all social strata without stigma BECAUSE it is not generally viewed as an item of fashion. 

An Apple Watch attempting to penetrate all strata of society in the same manner as the phone is fighting an uphill battle if Apple does not make a version that wouldn't be viewed the same as a 'clip-on tie' at a black-tie gala event. Even a bad silk tie with a garish pattern, poorly tied, is better than a polyester clip-on in this sense. In a way, if all Apple manages to do is start a legitimate conversation (heck, make that an argument) in this world as opposed to being outright dismissed, they win. Why? Because if they can get their foot in the door on the fashion front, I think the obvious utility may carry them through to true success in making a high-end luxury fashion item. I think their chances of success are better than 50/50. 

Now let's talk value:

Should you plunk down a minimum of $350 for an iPhone accessory? Most Android Wear devices and the Pebble are cheaper than this, and in about 1.5 years they have not exactly set the world on fire. Will the Apple Watch be different? My bet is yes, and Android Wear is actually my answer as to why. Despite its awkward ways, an Android Wear watch is useful. Surprisingly so. If an Apple Watch is only as good as the best Android Wear device it will be far more successful than any single Android Wear device to date, and most likely all of them combined along with Pebble to boot. That is to say, I think Apple clears 2 million devices (the current combined estimate of those two worlds) before 2016 hits. Probably before 2nd quarter is over, to be honest. Why? Here are the keys as I see them.

  • Even with slower iOS 8 adoption, Apple has a much more homogeneous audience than Android, and they are proven higher spenders. 
    • This all but assures higher adoption rates and better app ecosystem development. Since this can add to existing apps/ecosystems, there are already a myriad of ways for folks to be seduced into wrist-wrapped silicon.  
  • Glance notifications on your wrist are better than you think if you have not yet experienced them.
  • Silent wrist notifications are better than vibrating phones on desks or even in pockets in many cases. 
    • Even if the phone in your pocket is silent, your rustling and moving to dig it out is not.
  • Glancing at your wrist has a much higher social acceptance than glancing at your phone. 
    • It is possible this will erode over time if folks start associating wrist glancing with phone glancing but if that happens it will only be because it has become common enough that someone looking at their wrist is no longer associated with looking at a watch 'first'. Trust me, Apple would be thrilled if this happened.  
  • Apple Pay biometrically authenticated on your wrist is the potential killer feature.
    • I see this as a way to finally kill the password. If this works and folks trust this feature with their bank account then I suspect this could be a link to a drastic reduction in password entry needs. 
  • Design may prove much more stable than folks seem to think. Most discussion seems to assume this device will be on a phone/tablet-type refresh cycle, and that is probably not the way it will go.
    • The horsepower is on the phone. The watch just needs to display stuff snappily, and by all appearances it is on par with current phone-level responsiveness, and the last generation or two of phones really have hit the point of diminishing returns on speeding up the interface.  
The reasons to not get one? 
  • It is possible it won't take off. If the app ecosystem development does not follow/scale at a rate similar to those of the iPhone and iPad, that will be a good sign this thing does not have legs. 
  • First generation designs from Apple have a tendency to be rougher than they look so if you can hold off from having the latest thing I'd let this get out into the wild for a month or three and see if any ugly unforeseen design flaws raise their heads. 
  • Apple only seems disappointed in two things, and combined I think these two things will drive early Apple Watch adopters to eventually want a new version much sooner than will be the case down the road (v2-v4 at a guess). But I do not expect this to be on a similar pace as phones.
    • Battery life. Expect Apple to figure out how to drastically improve this if the watch takes off. They seem to be on the longer end of all the smart watches (other than Pebble), and the reviews will tell us if this is true or not. They have got to be betting the utility makes it worth dealing with charging. My experience with Android Wear says it can make it worth dealing with.... but only just. I think long term a week is the territory a smart watch needs to get to, if not a month. 
    • Sensors. Several credible stories suggest Apple had to drastically scale back their plans on the sensor front, as the sensors proved tougher nuts to crack than expected. Those same sources all also say Apple didn't give up.
Bottom line? 

If you are tired of the smartphone hunch, and on a scale from thumb warrior to read-only you slot more towards the read-only end, then the Apple Watch (or Android Wear/Pebble) is worth your attention and quite possibly your money. If you have calluses on your thumbs from furious texting despite the oily, glossy-smooth touch screen of current phones, I am not sure you will get much benefit unless you are going to talk to your watch, as your phone will be out most of the time anyway.

As for me, it's the first time I have been interested in personally buying a watch for any reason since I got my first smartphone. Apple Watch Sport in Space Grey (or maybe the steel Apple Watch) with a Milanese Loop please.... and yes, from the Apple page it seems the Sport/Milanese Loop combination is possible. And it would be cheaper than an Apple Watch with a Milanese Loop. If I go for the steel Watch it will be for the sapphire, and only if I think it has a legitimate chance of being a 5-year device. Otherwise I will wait.


MacBook Revamp:

I am really excited about this design. But unfortunately I think I am more excited about what it means for the Pro updates down the road than I am about this particular device. I smell the original MacBook Air, part II. Why? It looks like you can't go to 16GB of RAM on any configuration. Combine that with the lackluster Core M performance folks have been seeing in all the other super thin-and-lights popping out, and I think 'tolerable' will be the best outcome on the performance front once these hit reviewers' hands (I'll be happy if they prove me wrong, mind you...). For those that want silent and portable it will be enough, and it should be a success. I tend to require more oomph, and it looks like that is not ready for prime time in a chassis like this. The Airs have more power than this guy.... quite a bit more power.

The single USB-C port is, I think, going to cause more wailing and gnashing of teeth than it merits. But, that said, until the world catches up a bit to this nearly connector-less way of operating, it is going to be a bit painful on the early-adopter front. Be very sure before buying this thing that it will fit your workflow needs.

Those misgivings aside, everything else looked stellar. I look forward to trying the keyboard. The trackpad seems a good update, and the 30% reduction in screen power consumption is staggering. All-metal body, no internal moving parts, solid keys, more solid trackpad. If they got the design nailed, this thing could be a very long-lived device. Hence my real worry about the somewhat underpowered nature of the CPU/RAM options. They may have sacrificed too much to get rid of the fan/vents. 

Why does this make me salivate for the MacBook Pro updates? Take the current design that gets 9 hours, add a 30% reduction in screen power needs, and you probably wind up with 10 - 11 hours of battery life with no other changes. Shrink the logic board in a similar fashion, and even if you still need an active cooling solution for the higher horsepower, it still fits in a smaller package. Bump up the battery capacity using that terracing technique at the same time you drop the logic board and display power/space usage, and you may hit 15+ hours of real usage on a serious laptop. Put USB-C ports on both sides of the chassis and you can now plug power in from either end. YES. No more being just the width of your computer away from the nearest power source (happens to me all the time for some reason). I do give a frowny face for losing the MagSafe power connection. I'll get over it if a future Pro has USB-C that can power the laptop from either side of the device.
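To show my math on that battery guess: the 30% screen power reduction is Apple's figure, but the display's share of total system draw is purely my own assumption.

```python
# Back-of-the-envelope check on the 10 - 11 hour guess. The 30% display
# power reduction is Apple's number; the display's share of total system
# draw (~40% here) is purely my assumption.
current_runtime_h = 9.0
display_share = 0.40      # assumed fraction of total draw used by the display
display_savings = 0.30    # Apple's claimed screen power reduction

new_relative_draw = 1.0 - display_share * display_savings
new_runtime_h = current_runtime_h / new_relative_draw
print(round(new_runtime_h, 1))  # ~10.2 hours, before any battery-size gains
```

Nudge the assumed display share up or down and you land anywhere in that 10 - 11 hour window, which is about as precise as this kind of guessing deserves.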

I would like to see a fully tarted-up 15" along these lines at WWDC, or maybe in the fall (new display tech, new keyboard, new terraced all-metal unibody with form-fitting batteries, shrunken logic board even if it retains active cooling). But it is more likely it will get bumped more along the lines of the way the 13" got bumped (flash speed bump, Force Touch trackpad). Does it get USB-C though? That seems to be the way Apple thinks things are going to go. But I was kind of surprised to not see a USB-C port on the 13" rMBP refresh. Perhaps they are hedging their bets a bit here? Or they just admit that professional-world adoption of USB-C is likely going to lag behind the more general consumer market. Who knows. 

ResearchKit:

Everyone is talking about the Watch and MacBook announcements. But I have seen almost nobody talk about ResearchKit. Folks.... that is a bona fide potential game changer in the world of medical research. And that is a big deal. Even if they do not significantly increase participation levels and only manage to significantly increase the amount and quality of data points per participant, it will represent a significant improvement. If they successfully change the scale at which folks participate (they are talking about potentially going from tens or hundreds to MILLIONS) AND increase the amount and quality of data capture, it could lead to major increases in both the speed and accuracy of results from medical studies. This is a foundation they are laying, and it seems to align nicely with the idea that future versions of the Apple Watch will introduce major advancements in health-related sensors. Future headline prediction: "Many decry the supposed frivolousness and waste of insurance companies now paying for doctor-prescribed Apple Watches for medical monitoring". You heard it here first.