GHC Reflections: Front End Optimization

One of the single-part workshops I attended was a discussion and exploration of front-end optimization. As someone who works mostly in front-end design, this was an intriguing talk for me. It was rather technically oriented, so the notes are a bit dry, but if you are stepping into this field at all, there are a few pearls you might find useful.

The first and most important note the presenter made was to optimize your digital projects for the front end – something that, contrary to popular belief, deserves real attention. It is of course important to build your systems on a strong framework, keep clear channels to resources, and reduce unnecessary clutter in back-end code, but people often forget the impact front-end code has on the end user. The front end is the layer that directly hits the user; if it is sloppily thrown together, it can easily degrade performance even when the back-end code is flawlessly executed.

The next point the speaker hit on was minifying HTML, CSS, and JavaScript files. Every character in a file – including whitespace, line breaks, and comments – counts toward the kilobytes needed to load the site and can slow it down. The speaker pointed out that users are unlikely to care about “pretty code,” especially if it’s causing slower performance.
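To make the idea concrete, here’s a small illustrative sketch of what minification actually does to a file. The function itself is my own example (the talk didn’t include code), and the “minified” output is representative of what real tools like terser, UglifyJS, or cssnano produce automatically:

```typescript
// Readable source: easy to maintain, but every space, line break, and long name
// counts toward the bytes the browser has to download.
function calculateTotalPrice(itemPrices: number[], taxRate: number): number {
  const subtotal = itemPrices.reduce((sum, price) => sum + price, 0);
  return subtotal * (1 + taxRate);
}

// The same function after minification: identical behavior, far fewer bytes on the wire.
// (Output shape is illustrative; an actual minifier renames and compacts things similarly.)
function c(p:number[],t:number):number{return p.reduce((s,x)=>s+x,0)*(1+t)}
```

The usual workflow keeps the readable version in source control and ships only the minified build, which is exactly the “two copies” approach I mention below.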

Minifying is a practice I’ve had trouble stepping into myself, if only because I like to “grab and go” with my code. I often hear the advice to keep two copies – your readable editing file and the minified version you upload to the web – I’ve just had little reason to lately, as my own website’s pages are not incredibly line-heavy. As I work on larger projects, minifying will likely become more and more my practice – this speaker’s stressing of it was part of the motivation I needed to look into it further.

Next were a few basic points, like avoiding redirects and bad URLs. Not only can they be confusing and frustrating to the user, but redirects increase page load time (the request has to jump around more than usual), and bad URLs will likely destroy the flow of users actually using the application.

Redirects like m.mysite.com for a separate mobile version of a site can also cause issues down the road: content missing from one version of the website, and two sets of largely duplicate code to maintain (which may also hurt search engine optimization). Using responsive design helps fix this by allowing one codebase with varied breakpoints to function on all devices. If you must re-route, try to keep it on the server side rather than the client side to keep the redirect fast and its overhead low.

One last tip: if your redirects attempt to make a user download your app (such as a mobile site redirecting, or throwing up a modal insisting they visit the app store), stop what you’re doing right now. Not only is this annoying and likely to drive traffic away from your site, it’s a poor attempt at hooking a user who isn’t even sure they enjoy your content yet, and it can leave a bad first impression that makes them unlikely to come back. Redirecting them to an app because developing your mobile site more robustly wasn’t in your plan shows a laziness to build the site with their needs in mind.
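To make the server-side point concrete, here’s a minimal sketch of a single 301 redirect in a Node/Express app. The route names and port are made up for illustration – the idea is simply that the server answers once with the new location instead of bouncing the user through client-side JavaScript:

```typescript
// A minimal server-side redirect sketch using Express (hypothetical routes).
import express from "express";

const app = express();

// One 301 response sends the browser straight to the replacement URL,
// with no extra client-side hop or script execution required.
app.get("/old-portfolio", (req, res) => {
  res.redirect(301, "/portfolio");
});

app.listen(3000);
```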

Enabling GZip compression was another point made, which required a little more research on my part as I hadn’t heard of it before. GZip is a compression scheme for web content that finds repeated strings within a file and temporarily replaces them, which can make file sizes much smaller – especially in documents made for the web, where phrases, tags, and whitespace are repeated constantly. If you (like me) had never heard of GZip and would like more details, find out more here: https://developers.google.com/speed/articles/gzip
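Turning it on is often close to a one-liner at the server. Here’s a minimal sketch assuming a Node/Express stack with the “compression” middleware – my own example of the concept, not something from the talk (other servers like nginx or Apache have equivalent switches):

```typescript
// Enable GZip for eligible responses on an Express server.
import express from "express";
import compression from "compression";

const app = express();

// Compress text responses (HTML, CSS, JS) before they go over the wire.
app.use(compression());

app.get("/", (req, res) => {
  res.send("<html><body><h1>Hello, compressed world</h1></body></html>");
});

app.listen(3000);
```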

Page load times are obviously critical to the success of an application, and can often be an indicator of how well performance is optimized (after external factors such as internet speed are evened out, of course). Typical metrics suggest users start losing interest in a web page if it hasn’t loaded (or at least loaded something) within about half a second. Mobile users tend to have more patience, but after about ten seconds their patience is gone – two seconds or less makes them quite happy, though. These numbers are ones I use quite often now when asked “how long is too long” or when doing quick load tests. They’re simple figures to remember, but they can really help in a pinch if you’re trying to quickly decide whether existing code needs more optimization or whether it “loads reasonably” and you can move on to the next task or project.
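For quick, informal load checks, the browser itself can give you the number. Here’s a tiny sketch using the Navigation Timing API in the console – the two-second threshold mirrors the rough figure from the talk and is not a hard rule:

```typescript
// Run after the window "load" event, otherwise loadEventEnd will still be 0.
const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];

if (nav) {
  const loadMs = nav.loadEventEnd - nav.startTime;
  console.log(`Page loaded in ${Math.round(loadMs)} ms`);
  console.log(loadMs <= 2000 ? "Within the ~2s happy zone" : "Worth a closer optimization pass");
}
```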

Applying web best practices is a key component of ensuring optimization. Not only will following best practices likely result in more efficient and optimized code, it will also typically result in cleaner code for developers to understand, and greater optimization for search engines, thus resulting in more end users.

Another practice for optimizing your users’ front-end experience is to cache and consolidate your resources. Consolidation can include file compression (such as GZip) as well as image compression. With images there is always the fear of a quality trade-off, but when done correctly most images still have room for at least a bit of optimization with little to no visible loss. If your site is image heavy, I recommend looking into image compression and load optimization – it can seem scary, especially on a portfolio site where quality is key – but the results can pay off in happier users. This is definitely something I need to get more comfortable with myself, especially as I build out my own portfolio projects – so I’ll challenge you to it as well.
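On the caching side, the cheapest win is usually telling browsers how long they may keep your static files. A small Express sketch of that idea (the folder name and max-age are illustrative, not a recommendation from the talk):

```typescript
// Cache static resources so repeat visitors pull images, CSS, and JS
// from their browser cache instead of re-downloading them.
import express from "express";

const app = express();

// Serve everything under ./public and let browsers keep it for up to a week.
app.use(express.static("public", { maxAge: "7d" }));

app.listen(3000);
```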

If you’re still unsure about compressing your images, you can at least dip a toe in the water by making sure you’re using the right file types. PNGs (Portable Network Graphics) are almost always the most optimized file type for web and mobile use. GIFs (Graphics Interchange Format) are typically best for very small images (think a Wingdings-style icon at about the size of ten- to twelve-point font) or images containing very few colors (typically three or fewer). Both GIF and PNG support transparency in modern browsers (transparency support can get spotty, especially for PNGs in older versions of Internet Explorer; if you’re having issues in IE 7 or 8, the fix can be as simple as saving your PNGs in “Indexed” rather than “RGB” mode). GIF also supports animation frames – so if you need animation in your image and cannot or do not wish to achieve the effect with several images and CSS (which can definitely be cumbersome), GIF is the format to use. JPG (Joint Photographic Experts Group) is ideal for photographic-quality images. BMP (Bitmap Image File) and TIFF (Tagged Image File Format) are no longer well suited for web applications.

Another key facet of front-end optimization is doing everything in your power as a developer to work around device limitations for your users. This includes designing adaptively: loading resources on user demand and serving images sized to the screen to ensure the fastest load time, to name a few approaches. Practicing progressive rendering – loading an image at lower quality first and progressively enhancing it as more capacity becomes available – helps ensure users on slower devices or connections still get the full experience, even if it starts off a bit fuzzy. JavaScript slowness can be debilitating on slower CPUs; keeping your JavaScript to what is necessary (without betraying your functionality needs, of course!) helps every user enjoy your website easily and speedily.
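“Loading resources on user demand” can be as simple as deferring images until they scroll into view. Here’s a small sketch of that with IntersectionObserver – the markup convention (an `img` with a `data-src` attribute holding the real URL) is my own illustration, not anything prescribed in the talk:

```typescript
// Load images only when they approach the viewport, so constrained devices
// pay only for what the user actually scrolls to.
const lazyImages = document.querySelectorAll<HTMLImageElement>("img[data-src]");

const observer = new IntersectionObserver((entries, obs) => {
  for (const entry of entries) {
    if (entry.isIntersecting) {
      const img = entry.target as HTMLImageElement;
      img.src = img.dataset.src ?? "";   // swap in the real source only when visible
      obs.unobserve(img);                // each image only needs to load once
    }
  }
});

lazyImages.forEach((img) => observer.observe(img));
```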

The presenters finished out with a few tools that can be used to measure the performance of front-end and mobile development. WebPagetest.org can be used on internal sites – which is great for organizations with a large intranet presence. PageSpeed is a browser plugin that can test your page and gather data on load times. Mobitest is optimized for mobile speed testing, and the Chrome Remote Debugger and Safari Web Inspector let you plug in an Android or iOS device respectively and test for performance.

Overall, a lot of great information here – some of which I was a bit leery of, given my own habits and my justifications for them, but I could see the merit in what the speaker was suggesting: at the very least it’s worth considering, and potentially implementing in part on each project, as the struggle between optimizing and “getting it done” rages on. Regardless, there was plenty I learned or at least gained a stronger awareness of, and I’m very glad I attended the workshop to have my eyes opened a little wider.

“There are two ways of constructing software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult.” – C.A.R. Hoare

GHC Reflections: Mobile Design & Security

This lightning panel was rather interesting, as the topics varied quite a bit but were all great things to consider for mobile design and the future of data and security.

The first talk discussed a user’s “social fingerprint” – a mathematically unique signature of how a user interacts with their mobile device across social networks, texting, calling, and so on. Essentially, every user uses their device in a slightly different way – when these patterns are calculated, no two are exactly alike. This is an interesting concept: we often assume everyone talks, texts, or checks Facebook identically – but apparently this could not be farther from the truth. The social fingerprint is more than just -how-; it is who and when: time zones, frequently contacted people, and more all make up the social fingerprint. The term is often used to describe our social usage in general, but it can be investigated more deeply to create this truly unique representation of our habits.
The speaker pointed out that if our social fingerprints are indeed unique, they could be used in some capacity for security measures, such as fraud detection. Exploring security measures beyond the password is definitely exciting territory. I worry, though, that the social fingerprint is “too” unique – in the sense that it could constantly change. If you cut ties with someone you used to call every day, would that not raise an alarm in social fingerprint detection? Obviously social media has ways to anticipate life events and interactions between people based on the sheer amount of data – but can everything truly be boiled down to a mathematical signature? I’m excited by the prospect of using social fingerprints, but concerned about their actual application – especially if the math and inputs are as complex as they seem they may be.

Another take on security was using GPS to secure interactions. Specifically, the speaker discussed GPS as a means to identify “zones” in the real world where one anticipates accessing devices, along with the level of confidence that access from those locations really is us. For instance: home and work may be level 1, where we are confident that if the device is accessed here, it is being accessed by us. Level 2 may be the cafe or laundromat we frequent, but where we might accidentally leave a device unattended. Level 3 could be our hometown, neighborhood, or even state: places we can generally be expected to be, but where a lost device could easily travel. And level 4 might be anywhere else globally: access from these places would be irregular or unanticipated. The presenter discussed using these levels to give varying degrees of password and access assistance. If I’m at home and forget my password, I expect to receive all my hints and assistance channels for logging in. Out on the town, I may want fewer options to appear, just in case someone else is on my device. And I would most definitely want heightened security for anyone attempting access when I’m out of state or country (or trying to access -from- such places), so hints there should be extremely restricted if offered at all. The idea was to use these “secure spaces” to heighten security beyond just the password, adding friction to any attempt to breach it or obtain information about it.
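To show roughly how the zone idea could be wired up, here’s a sketch that maps the device’s coordinates to a trust level. The zone centers, radii, and distance math are entirely my own illustration of the presenter’s concept, not their code:

```typescript
// "Zones of trust": map a location to a level, and let the level decide
// how much password help the app is willing to reveal.
interface Zone { name: string; lat: number; lon: number; radiusKm: number; level: 1 | 2 | 3 }

const zones: Zone[] = [
  { name: "home",   lat: 40.7128, lon: -74.0060, radiusKm: 0.2, level: 1 },
  { name: "cafe",   lat: 40.7306, lon: -73.9866, radiusKm: 0.3, level: 2 },
  { name: "region", lat: 40.7000, lon: -74.0000, radiusKm: 50,  level: 3 },
];

// Great-circle (haversine) distance between two coordinates, in kilometers.
function distanceKm(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a = Math.sin(dLat / 2) ** 2 +
            Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 6371 * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
}

// Level 4 ("anywhere else") is the fallback when no configured zone matches.
function trustLevel(lat: number, lon: number): 1 | 2 | 3 | 4 {
  const match = zones
    .filter((z) => distanceKm(lat, lon, z.lat, z.lon) <= z.radiusKm)
    .sort((a, b) => a.level - b.level)[0];
  return match ? match.level : 4;
}
```

A real implementation would of course let users define their own zones (which is exactly the control I wish Outlook gave me, as I get into below).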

Looking back, this topic is intriguing because Microsoft has been implementing a similar feature in Outlook. While I appreciate the security, at times it can be a bit overbearing – my work’s servers ping off a cluster that isn’t near us geographically, and this triggers the “suspicious activity” login flow any time I try to get to my email at work. The security concept is great – but something like the presenter described, where I have more of a choice in defining my own regions, would definitely save headaches (like when I try to log in at work for one small thing, only to have to go through a chain of security measures whose details may be sitting at home). It’s definitely interesting to see this idea being implemented, and I’m curious where the next steps will take it.

Another speaker in this panel discussed A/B testing – one of many kinds of testing I’m hoping to become more familiar with in my job. They stated that a strong A/B test can be made even more helpful by integrating code to capture data on user input or mouse movements, so patterns between sets A and B can be recognized and the user’s process more readily understood. Sessions and their data could be stored in buckets relative to their version – and even the time, cycle, or type of user – for quicker retrieval and review.
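As a rough sketch of that bucketing idea – the hashing and in-memory storage here are my own simplification, not the speaker’s implementation:

```typescript
// Deterministic A/B assignment plus per-variant event logging.
type Variant = "A" | "B";

// Hash the user id so the same user always lands in the same bucket.
function assignVariant(userId: string): Variant {
  let hash = 0;
  for (const ch of userId) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return hash % 2 === 0 ? "A" : "B";
}

// Store interaction events (clicks, mouse-movement samples, etc.) in buckets keyed by variant.
const buckets: Record<Variant, { userId: string; event: string; at: number }[]> = { A: [], B: [] };

function logEvent(userId: string, event: string): void {
  const variant = assignVariant(userId);
  buckets[variant].push({ userId, event, at: Date.now() });
}

logEvent("user-42", "clicked-signup");
```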

The next topic was accessibility in mobile. This topic was fairly straightforward, but always refreshing to keep in mind. The presenter highly recommended considering the accelerometer – think of technologies like FitBit, and how accessible its use is beyond just software and screens. Other considerations for accessibility: touch and sound. Consider your feedback to users: a soft pulse or vibration when they press a button, a light ding when an alert appears. Remember to consider how these affordances affect the experience for users who are color-blind, deaf, etc. – are your notification color choices still helpful, or even visible, to someone who is color-blind? Does your application give another form of feedback if a user is deaf and can’t hear the anticipated ding (a glowing icon, a tactile response, etc.)?
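A tiny sketch of that multi-channel feedback idea – the element id, class name, and durations are made up for illustration, and the Vibration API is only available on some devices and browsers:

```typescript
// Confirm an action through more than one channel, so the feedback
// isn't lost on users who can't hear a ding or feel a buzz.
function confirmAction(button: HTMLElement): void {
  // Haptic pulse where the Vibration API is supported.
  if ("vibrate" in navigator) {
    navigator.vibrate(50);
  }
  // Visual cue as a parallel channel: briefly highlight the control.
  button.classList.add("confirmed");
  setTimeout(() => button.classList.remove("confirmed"), 600);
}

const saveButton = document.getElementById("save-button");
saveButton?.addEventListener("click", () => confirmAction(saveButton));
```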

The final presenter discussed flexible privacy controls. With the advancement of digital healthcare records and increasingly sensitive information going digital, companies sometimes forget the affordances that physical or paper copies allowed and that now need digital counterparts. The presenter used healthcare as an example: certain health records you would like visible to your spouse, certain ones to your family, and certain ones to only yourself, your doctor (or only certain doctors), and so on. These preferences may also change over time: think of a bank account a parent can access while a child is in school, where the child may need or wish to remove the parent’s access once they are grown. While these issues were once handled with phone calls or paperwork, digital counterparts need flexible privacy controls so users can take care of these privacy needs with the same ease (or at least the same-to-less amount of headache) as they did in analog. Flexible privacy controls can even extend to securing the applications themselves: if my healthcare app is linked to my phone, I may want additional security measures before the app even opens, to ensure no one but me can tamper with my settings (and here we can even tie back to the earlier talks for more ways to secure our privacy!).
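A rough sketch of what record-level, time-changeable sharing rules might look like as data – the shapes and example records here are purely my own way of picturing the idea, not any real health-portal API:

```typescript
// Per-record sharing rules that a user can grant, expire, or revoke themselves.
interface SharingRule {
  recordId: string;
  sharedWith: string;          // e.g. "spouse", "parent", "dr-smith"
  expires?: Date;              // access can lapse automatically, or be revoked outright
}

let rules: SharingRule[] = [
  { recordId: "allergy-history", sharedWith: "spouse" },
  { recordId: "checking-account", sharedWith: "parent", expires: new Date("2016-06-01") },
];

function canView(recordId: string, viewer: string, now = new Date()): boolean {
  return rules.some((r) =>
    r.recordId === recordId &&
    r.sharedWith === viewer &&
    (!r.expires || r.expires > now)
  );
}

// Revoking access is just removing (or expiring) the rule — no phone call or paperwork needed.
function revoke(recordId: string, viewer: string): void {
  rules = rules.filter((r) => !(r.recordId === recordId && r.sharedWith === viewer));
}
```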

I loved the focus in so many of these talks on users and their experiences interacting with their phones, and how that relates to the real world. They pointed out design imperatives and areas for continued development that can make phones – and in turn technology overall – an extension of and addition to the “real world,” rather than purely a distraction or a separate plane entirely.

“The mobile phone acts as a cursor to connect the digital and the physical” – Marissa Mayer

GHC Reflections: Web and Mobile Dev

The web and mobile dev lightning talk featured tons of technologies and trends for the next generation of development.

“World of Workout” was a concept discussed for a contextual mobile RPG based on real-world fitness. It would use pattern recognition to detect users’ workouts – sparing them the complexity of inputting the information themselves (e.g., with the phone in an arm holster during a workout, it can recognize the motion of doing squats). The workout data would then affect the progress of the game avatar, with stats awarded for the workouts the user completes, such as speed boosts for sprinting, strength for weights, and stamina for distance running. Another interesting feature they proposed was accelerated improvement at the start of the game, so users are encouraged to get into a daily routine, along with a fatigue factor that reduces rewards when workouts become excessive. There would also be random rewards, with associated notifications, for doing “challenge” workouts with extra benefits attached.

This idea really resonated with me as part of the “future of user experience”: what better immersion is there than in a good game? And as we have learned, users appreciate apps that respond to them and provide gratification – which pattern recognition and rewards both do. After seeing this idea, I sketched out the concept for a similar game-incentive idea during a hackathon: TaskWarriors, an RPG based on checking things off your task list and gaining skill and gold based on the priority and type of each task (helping you balance your days -and- ensure you complete high-priority tasks before their deadlines). I’d really like to revisit TaskWarriors, since if done right, I think it could work as well as World of Workout seems (hopefully) fated to. It has also gotten me considering other avenues where gamification, customization, and rewards could help with immersion and user experience – hopefully I can learn more and get chances to implement this in the future!

Parallax scrolling was another feature discussed during this talk – specifically, the technologies that can aid or enhance parallax development. JavaScript and CSS3 were discussed for transitions, transforms, and opacity, while HTML5’s Canvas, WebGL, and SVG were also noted. Flash, VML, YUI scroll animation, jQuery plugins such as Jarallax, and JavaScript libraries such as pixi.js or tween.js (for easing effects) were also featured as possible parallax technologies.
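At its core the effect is just layers moving at different speeds as you scroll. Here’s a bare-bones sketch of that idea with plain DOM APIs – the element id and speed factor are my own placeholders, and the libraries above wrap this kind of thing with far more polish:

```typescript
// A minimal parallax layer: move the background at a fraction of the scroll speed.
const layer = document.getElementById("parallax-bg");
const speed = 0.4; // background scrolls at 40% of the page's speed

window.addEventListener("scroll", () => {
  if (layer) {
    // translate3d keeps the work on the compositor for smoother scrolling
    layer.style.transform = `translate3d(0, ${window.scrollY * speed}px, 0)`;
  }
});
```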

Parallax is definitely an intriguing artistic feature for making a website seem more interactive. Obviously, like any interactive feature, there’s definitely a point where it could be much too much. But there are some beautiful parallax scrolling websites that show what an awesome addition it can be to your content, especially on websites telling a story with a long scrolling page, like this one: http://jessandruss.us/

3D graphics for web programmers was actually highly interesting to me. I’m terrible at making models (at least at present), but I’ve had a bit of experience with Unity and have always found 3D development interesting. Though I would need to learn modeling to actually implement anything, the presentation focused on three.js, a library that seems to make it remarkably easy to program 3D elements directly into web pages – rather than building them in Flash, Unity, or another engine. At its most basic, three.js uses a what (a mesh for the object and a point light for the light source), a where (a scene and a PerspectiveCamera), and a how (translate, rotate, scale; requestAnimationFrame) to render and move 3D objects. Source code is available at http://github.com/shegeek/teapots_can_fly, in which the presenter used only three.js, teapot.js (the object), and an HTML5 page to create the example.
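To show that “what / where / how” structure in miniature, here’s my own spinning-cube sketch with three.js (not the presenter’s teapot code):

```typescript
import * as THREE from "three";

// The "where": a scene and a perspective camera looking into it.
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.z = 5;

const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// The "what": a mesh for the object and a point light to illuminate it.
const cube = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshStandardMaterial({ color: 0x44aa88 })
);
scene.add(cube);

const light = new THREE.PointLight(0xffffff, 1);
light.position.set(5, 5, 5);
scene.add(light);

// The "how": transform the object a little each frame and re-render.
function animate(): void {
  requestAnimationFrame(animate);
  cube.rotation.x += 0.01;
  cube.rotation.y += 0.01;
  renderer.render(scene, camera);
}
animate();
```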

CourseSketch was the final web and mobile technology shown, which was also really exciting from a college student perspective. It is a sketch-based learning platform being developed for MOOCs that would recognize sketches to enhance the automated grading of online problems. The examples in development included truss diagrams for engineering, compound diagrams for chemistry, and kanji for Japanese. Of course, with many more courses moving to online submission and grading, one can see applications for this technology well beyond the MOOC platform and into other education avenues – provided the technology were robustly developed, taking into account various drawing styles and other hiccups that may occur.

Overall there were a lot of intriguing development tools and concepts discussed. Obviously this talk hit home with me as World of Workout inspired the beginning conceptualization and planning for the Task Warriors app, even if it hasn’t seen fruition (yet! I hope I can continue it!). I love talks like these that bring to light new ideas and useful technologies – they have so much inspiration and energy within them that drives tech forward.

“One machine can do the work of fifty ordinary men. No machine can do the work of one extraordinary man.” – Elbert Hubbard

Task Warriors copyright Bri; Jess and Russ copyright JessandRuss.us

GHC Reflections: Augmented Reality and Streaming Media

The Augmented Reality segment focused on getting users’ attention when they view the world through the lens of a device, and then providing them with relevant information – for instance, seeing info labels pop up about locations in an unfamiliar place. A problem with labels, however, is contextually linking them to the object they describe – and ensuring they are still large enough relative to the screen to be helpful, without clustering too greatly and causing clutter. Solving this problem, with optimal label placement, would go a long way toward helping users navigate these real-world scenes.

Eye tracking was a highlighted topic for augmented reality – and when discussing label placement, this is understandable. Knowing where a user is going to look means labels can appear in context to that gaze – and it decreases the number of labels that need to populate at any one time, making the clutter problem all but disappear. Eye tracking methods include infrared light detecting the pupil and heat maps of vision. The latter is good for studying eye movements, but the former could be integrated into devices and actually used in real software for users.

A follow-up to the idea of contextually populating labels based on eye tracking does, however, raise a few issues of its own. For instance, how can one ensure that label transitions after the eye moves are not too distracting? Sudden or jerky movements would bring the user’s gaze back to the label, which could definitely throw off eye tracking software. “Subtle Gaze Modulation” is the concept of using just the right movement to draw the eye, but terminating the stimulus before the gaze reaches its destination. Think of a blinking or glowing-then-dim light, drawing you toward it but disappearing before your eye lands on the spot that was radiating. Photography “tricks” like dodge and burn or blur can heighten contrast and create the same sort of gaze-catching effect. And for anyone interested, the mathematical formula used in the presentation for gaze modulation was:

theta = arccos((v · w) / (|v| |w|))

where v is the vector along the current line of vision from the focus, w is the vector along the desired line of focus, and theta is the angle between the two.
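For anyone who wants to play with that angle, here’s a tiny sketch of the computation in code – the 2D vector type and sample values are my own, just to make the formula concrete:

```typescript
// theta = arccos((v · w) / (|v| |w|)) for 2D screen-space vectors:
// v is the viewer's current line of sight, w points toward the desired focus.
interface Vec2 { x: number; y: number }

function gazeAngle(v: Vec2, w: Vec2): number {
  const dot = v.x * w.x + v.y * w.y;
  const magV = Math.hypot(v.x, v.y);
  const magW = Math.hypot(w.x, w.y);
  return Math.acos(dot / (magV * magW)); // angle in radians
}

// Nearly-aligned gaze and target give a small angle, so the stimulus can stay subtle.
console.log(gazeAngle({ x: 1, y: 0 }, { x: 1, y: 0.1 }));
```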

The Streaming Media presentation was fairly standard material on connection quality versus video quality. Adaptive streaming – the “Auto” default on YouTube, for example – is the concept of the requested stream quality changing relative to signal strength. The ideal of adaptive streaming is to ensure the user’s stream is never interrupted: quality may fluctuate, but there should be no buffer waits, and the video should always be visible. Encoding also plays a huge factor in video: compression reduces file size, at the obvious cost of quality. The quality levels available for a video to choose from during adaptive streaming depend on the file sizes, which are driven by factors such as resolution (HD/SD), bitrate, and frames per second (fps). Reducing frames per second can shrink a file with potentially minimal consequences: video files contain a lot of redundancy (think of all the frames – many are nearly identical), and there is no way the human eye can register them all. Codecs are compression and decompression algorithms that minimize the perceptible impact of reducing a video file by taking advantage of these redundancies humans cannot notice anyway.
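To picture the adaptive part, here’s a toy sketch of the quality-picking decision. The rendition names, bitrates, and headroom factor are my own illustration – real players weigh a lot more, such as buffer health and how quickly bandwidth is changing:

```typescript
// Pick the highest-quality rendition whose bitrate fits inside the measured bandwidth.
interface Rendition { label: string; bitrateKbps: number }

const renditions: Rendition[] = [
  { label: "1080p", bitrateKbps: 5000 },
  { label: "720p",  bitrateKbps: 2800 },
  { label: "480p",  bitrateKbps: 1400 },
  { label: "240p",  bitrateKbps: 400 },
];

function chooseRendition(measuredKbps: number, headroom = 0.8): Rendition {
  const budget = measuredKbps * headroom;   // leave margin so playback doesn't stall
  return renditions.find((r) => r.bitrateKbps <= budget) ?? renditions[renditions.length - 1];
}

console.log(chooseRendition(4000).label); // "720p" on a ~4 Mbps connection
```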

As a budding UX professional, the eye tracking points were of intrigue to me. I would love to play with techniques similar to these in digital designs in an attempt to help my users follow the path, without over-correcting or pushing them as they themselves adapt and explore. It would be interesting to see how this could be refined to be more subtle but assistive as needed.

“Any sufficiently advanced technology is indistinguishable from magic” – Arthur C. Clarke

All research copyright their respective owners

Google Pokemon MAPster: On Nintendo and Mobile

If you have any aspiring Pokemon masters as friends, or happened to open Google Maps today, chances are you found out about Google’s April Fools’ prank this year.

Granted, there actually ARE Pokemon in Google Maps today – just in sprite form, and no traveling required (unless you count hopping from Harajuku to Old Faithful inside the Maps app as traveling).

While a collaboration between Google, The Pokemon Company, and Nintendo was a rather ingenious prank – tugging on any kid-at-heart’s nostalgia and gaining some excellent publicity for all parties – what might not have been expected was the conversation it sparked about the future of Pokemon and, well, Nintendo games in general.

Nintendo franchises are some of the most beloved and memorable in gaming: Mario, Donkey Kong, Pikachu, and Link (The Legend of Zelda) easily spring to mind when one is asked to think of a video game. One of Nintendo’s best selling points is the exclusivity of its characters: they are typically confined to Nintendo-only titles, with rare cameos in outside games, and are playable exclusively on Nintendo consoles.

Does that exclusivity exclude Nintendo from some successful business ventures? Any console junkie will tell you that when it comes to hardware, Nintendo may have innovative ideas (a controller with a screen? Some of the first motion-detection titles?), but their processing power can lag years behind Sony’s PlayStation or Microsoft’s Xbox. Some mobile devices may even have better processing capabilities and features than current-generation Nintendo devices.

Would it be better business for Nintendo to farm out their franchise characters? Or start developing and selling for mobile? Maybe open up a retro games section of the Play store filled with mobile-formatted nostalgia-inducers?

Think of the possibilities mobile could offer: the augmented-reality-type game described in the Google Maps trailer isn’t so far off – granted, it might have to be scaled down a bit, since it’s unlikely one will hop a plane to Egypt to finish a game.

Mobile could also reach a base of users Nintendo is missing: users who love Mario and Pikachu but can’t bring themselves to shell out the money for a console just to play one or two titles, yet would gladly pay for those titles on their mobile device. Or users who play more casual mobile games a la CandySwipe or Cut the Rope, who would buy extra levels or make a micropurchase for a small game starring their favorite characters. There’s a potential market left untapped.

Yet for all the possibilities, and all the frustrated Nintendo lovers but non-console buyers who would clamor for mobile Nintendo love, there’s some sound strategy to what Nintendo has done so far. As stated at the beginning, Nintendo built its characters partly on their exclusivity. Only seeing Mario in his Nintendo environment gives an expectation and a context, and it sets a level of quality expected of the product. Letting Mario run around for just anyone willing to shell out the cash for him could dampen the iconic nature of him and the other Nintendo franchises.

Plus, just like Sony and Microsoft, part of Nintendo’s profits come from console sales. While PlayStation and Xbox have plenty of great third-party developers to contract games from and are known for the vast array of titles available to them, Nintendo breeds its consoles in part for the exclusiveness of its franchise titles – third-party developers are almost just gravy. Take the franchise titles and put them anywhere, and when stacked against competitors with better horsepower, who is going to buy the Nintendo console anymore? They may have novel hardware innovations, but given that Nintendo’s console sales already trail their competitors’, who can say how much more the scales would tip?

None of this is to say Nintendo needs any advice. Their brands speak for themselves: the company has amassed quite spectacular revenue, and while their current consoles may seem in trouble, the company itself is far from being in the same waters. These are what-ifs, and an exploration of the whys.

The bottom line seems to be that for all the excitement and potential new markets Nintendo could open up by expanding its horizons, doing so could also deal a fatal blow to the company. Console sales dwindling to a halt could rip open any gaming company, and beyond that, the iconic nature of Nintendo’s franchise characters could get lost in the mix as they jump from game to game and console to console. While it might seem backwards to those eyeing the potential innovations ahead, Nintendo sticking to what they know may be exactly what they need to continue on their path as a household gaming entity.

Plus, if the technology already exists, that means it can always become a part of the next big Nintendo thing. The 3DS already HAS augmented reality features, for example: they’ve just never been that strongly used in a franchise game to my knowledge. Maybe this Google Maps trailer is opening doors to something right in their backyard?

Regardless of what they choose to do in the future, Nintendo is a savvy company that chose to opt out of the console horsepower war and instead develop further what was already working for them: their characters. I’m interested to see how their business plan continues to unfold – and I’m actually doing a marketing course research survey project on Nintendo and mobile devices, so you may see more blog posts about this from me.

But until then, I’m going to go back to searching for all these Pokemon in….where am I now, Kyoto? And hoping against hope if I find them all Google sends me a lovely little Pokemon master card to hang on my wall, right next to my pile of Pokemon plushies.

“Video games are bad for you? That’s what they said about rock-n-roll” – Shigeru Miyamoto

Pokemon and respective characters (c) Nintendo, Game Freak, and the Pokemon Company International; Mario, Luigi, and other characters (c) Nintendo