Blog

Giygas vs Psychology of Design: Part 1 – Fear

Obviously, spoilers ahead if you have not completed the game Earthbound/Mother 2.
Proceed with Caution – You have been warned!

Finishing the final battle of Earthbound, I felt a number of things. I spent the next day poring over the Internet reading ideas, theories, interviews, and thoughts on that final fight. I had chills during it. It’s been said that Earthbound’s final fight is one of those crazy boss battles you have to experience to believe, so I sealed myself off from as many spoilers as I could – and I’m thankful I did.

What is it about Earthbound’s final fight that makes it so memorable, so elusive, so disconcerting? I want to discuss some of my thoughts as a usability designer – aspects I noticed, and things I assessed and recognized as I read theories on the topic – hoping to make a bit more sense of what made this experience so memorable, and what pearls a UX developer might glean from it.
I will also say this: as this analysis series goes forward, I’ll work to make it deeper and draw more correlations between earlier ideas, so I have to build a strong foundation – even if some of it might seem obvious. But let’s start from the top and dredge through the Deep Darkness, shall we?

Fear

One thing I recognized was the way in which the game represents fear. Fear is, of course, a strong motivating emotion, evoking fight or flight – drive over the obstacle or run as far as possible from it. What’s intriguing about Earthbound’s representation of fear is that it does so by non-traditional means, which I believe makes it so effective. Giygas could easily have been represented as any of a myriad of things and evoked terror, but representing Giygas as a thing would make him tangible, defeatable, human.

The representation of Giygas as vastness, as an abstraction of a concept so far removed from a physical body, gives a sense of looming dread. Porky even echoes this thought:

“If you were to ever see Giygas, you’d be so petrified with fear, you’d never be able to run away! That’s how scary it is… so are you terrified? I’m terrified too… I must be experiencing absolute terror.”

This representation does more than provide fear, however. The level of fear, the representation of an evil that has no face or body – this provides gravitas. Thematic elements such as music and colors assist in setting mood in any scenario, but coupling these with Giygas’s lack of form creates the sense of a true threat against the universe. There is no man, there is no place to point a finger. Just emptiness, abstraction, the purest feeling of dread and destruction and desolation as a concept.

Many video games give us villains. Few games create a villain from the conceptualization of the deepest root of villainy. And in that, Giygas becomes more than just one final boss fight – he becomes a quintessential version of the boss fight, he is beyond the boss fight. He represents the manifestation of the idea of the boss fight, the meta of meta.

In any game we have a boss driven by hatred, destruction, anger, greed – a myriad of negative emotions comprising their psyche, yet still comprising a form potentially capable of other traits and feelings. Giygas is represented not as this form, but as the glimmer in its eye – that seed of what drives evil in its purest, unconstrained form – and this is what makes him all the more fearful. Giygas is the mind of villainy devoid of reason and body: a core abstract idea that, lacking a form, allows us as players to interpret it in whatever way our minds can decipher, making him an abstract image of our own mental constructs, our own minds – a terrifying concept for every player, indeed.

“You’ve traveled very far from home…do you remember how your long and winding journey began…?” – Mr. Saturn, Earthbound

EarthBound copyright Shigesato Itoi, Nintendo, HAL Laboratory and Ape Inc.

GHC Reflections: Front End Optimization

One of the single part workshops I attended was a discussion and exploration into front end optimization. As someone who works mostly in front-end design, this was an intriguing talk to me. It was rather technically oriented so the notes are a bit dry, but if you are stepping into this field at all, there are a few pearls you might find useful.

The first and most important point the presenter made was to optimize your digital projects for the front end – contrary to the popular instinct to focus solely on the back end. While it is of course important to build your systems on a strong framework, have clear channels to resources, and reduce unnecessary clutter in back-end code, people often forget the impact front-end code can have on the end user. The front end is the layer that directly hits the user: if it is sloppily thrown together, performance can easily degrade even when the back-end code is flawlessly executed.

The next point the speaker hit on was minifying HTML, CSS, and JavaScript files. Every extra character – whitespace, comments, long names – counts toward the kilobytes needed to load the site and can slow it down. The speaker pointed out that users are unlikely to care about “pretty code”, especially if it’s causing slower performance.
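To make the idea concrete, here’s a tiny sketch (my own example, with made-up names): the same logic in its readable editing form and in a hand-minified form. The behavior is identical – only the bytes shipped to the user shrink.

```javascript
// Readable version, as you'd keep it in your editing copy.
function totalKilobytes(fileSizesInBytes) {
  let total = 0;
  for (const size of fileSizesInBytes) {
    total += size;
  }
  return total / 1024;
}

// The same function after minification: whitespace, comments, and long
// identifiers stripped. Real minifiers automate this transformation.
function t(f){let s=0;for(const z of f)s+=z;return s/1024}

console.log(totalKilobytes([512, 512])); // 1
console.log(t([512, 512]));              // 1
```

In practice you would never minify by hand; a build tool does it while you keep editing the readable copy.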

Minifying is a practice I’ve had trouble stepping into myself, if only because I like to “grab and go” with my code. I often hear of keeping two copies: a readable editing file, and the minified version you upload to the web. I’ve just had little reason to lately, as my own website’s pages are not incredibly line-heavy. As I work on larger projects, minifying will likely become more and more my practice – this speaker’s stressing of it was part of the motivation I needed to look into it further.

Next were a few basic points, like avoiding redirects and bad URLs. Not only can they be confusing and frustrating to the user, but redirects increase page load time (the request has to jump around more than usual), and bad URLs will likely destroy the flow of users actually using the application. Redirects like m.mysite.com for a separate mobile version can also cause issues down the road: content missing from one version of the website, and two sets of code to maintain with a large portion of duplicate content (which may cause issues for search engine optimization). Responsive design can fix this by allowing one set of code, with varied breakpoints, to function on all devices. If you must re-route, try to limit it to the server side rather than the client side to reduce the redirect’s speed cost and overhead.

One last tip: if your redirects attempt to make a user download an app (such as a mobile site redirecting, or loading a modal saying you must visit the app store), stop what you’re doing right now. Not only is this annoying and likely to drive traffic away from your site, it’s a poor attempt at hooking a user who isn’t even sure they enjoy your content yet, and it can leave a very bad first impression that makes them unlikely to come back. Furthermore, redirecting users to an app because developing your mobile site more robustly wasn’t in your plan shows a lack of care for their needs.
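As a sketch of what “handle it server side” might look like, here’s a hypothetical redirect rule (the hostname and logic are invented for illustration): the server answers legacy m.* requests with a single permanent redirect instead of bouncing the client through script-driven hops.

```javascript
// Hypothetical server-side rule: one 301 for legacy m.* hostnames.
// A 301 tells browsers and search engines the canonical home of the content.
function redirectFor(host, path) {
  if (host.startsWith('m.')) {
    return { status: 301, location: 'https://' + host.slice(2) + path };
  }
  return null; // no redirect needed – serve the responsive page directly
}

console.log(redirectFor('m.mysite.com', '/about'));
// → { status: 301, location: 'https://mysite.com/about' }
console.log(redirectFor('mysite.com', '/about')); // → null
```

A real deployment would express this in the web server or framework config, but the decision logic is this simple.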

Enabling GZip compression was another point made, one which required a little more research on my part as I hadn’t heard of it before. GZip is a compression algorithm for websites that finds similar strings within a file and temporarily replaces them, which can make file sizes much smaller – especially in documents made for the web, where phrases, tags, and whitespace are repeated constantly. If you (like me) had never heard of GZip and would like more details, find out more here: https://developers.google.com/speed/articles/gzip

Page load times are obviously critical to the success of an application, and can often be an indicator of how optimized performance is (after external factors such as internet speed are evened out, of course). Typical metrics suggest users lose interest in a web page if it hasn’t loaded (or at least loaded something) within half a second. Mobile users tend to have more patience, but after about ten seconds their patience is gone – two seconds or less keeps them quite happy, though. These numbers are ones I utilize quite often now when asked “how long is too long” or doing quick load tests. They’re simple to remember, but can really help in a pinch when you’re trying to decide whether existing code needs more optimization or “loads reasonably” enough to move on to the next task or project.
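Here’s a toy helper encoding those rule-of-thumb thresholds – the verdict labels are my own shorthand, not metrics from the talk:

```javascript
// Rule-of-thumb load-time thresholds: show *something* within half a
// second, keep mobile users under ~2s, and know they're gone by ~10s.
function loadTimeVerdict(seconds) {
  if (seconds <= 0.5) return 'great';
  if (seconds <= 2) return 'happy mobile users';
  if (seconds < 10) return 'losing patience';
  return 'too long';
}

console.log(loadTimeVerdict(0.3)); // great
console.log(loadTimeVerdict(1.5)); // happy mobile users
console.log(loadTimeVerdict(12));  // too long
```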

Applying web best practices is a key component of ensuring optimization. Not only will following best practices likely result in more efficient and optimized code, it will also typically result in cleaner code for developers to understand, and greater optimization for search engines, thus resulting in more end users.

Another practice for optimizing your users’ front-end experience is to cache and consolidate your resources. Consolidation can consist of compression (such as GZip) for files, and also image compression. Of course, with image resources there is always the fear of a quality trade-off, but when done correctly images typically still have room for at least a bit of optimization with little to no loss in quality. If your site is image-heavy, I recommend looking into image compression and load optimization – it can seem scary, especially on a portfolio site where quality is key, but the results can pay off in happier users. This is definitely something I need to get more comfortable with myself, especially as I build out my own portfolio projects – so I’ll challenge you to it as well.

If you’re still unsure about compressing your images, you can at least dip your toe in the water by ensuring you’re using the correct file types. PNG (Portable Network Graphics) is almost always a well-optimized choice for web and mobile use. GIF (Graphics Interchange Format) is typically best for very small images (think a Wingdings-style icon, at about the size of ten- to twelve-point font) or images containing very little color (typically three or fewer colors). Both GIF and PNG support transparency in modern browsers, though transparency support can get spotty for PNGs in older versions of Internet Explorer – if you’re having issues in IE 7 or 8, the fix can be as simple as saving your PNGs in “Indexed” rather than “RGB” mode. GIF also supports animation frames – meaning if you require animation in your image and cannot or do not wish to achieve the effect with several images and CSS (which can definitely be cumbersome), GIF is the ideal format. JPG (from the Joint Photographic Experts Group) is ideal for photographic-quality images. BMP (Bitmap Image File) and TIFF (Tagged Image File Format) are no longer well suited for web applications.
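That guidance boils down to a small decision table. Here it is as a sketch – the property names and categories are my own paraphrase of the advice above, not an official API:

```javascript
// Format picker following the talk's guidance: GIF for animation, tiny
// icons, or near-flat color; JPG for photographs; PNG as the default.
function suggestFormat(image) {
  if (image.animated) return 'gif';      // GIF supports animation frames
  if (image.photographic) return 'jpg';  // photographic-quality images
  if (image.tinyIcon || image.colors <= 3) return 'gif';
  return 'png';                          // the general-purpose web default
}

console.log(suggestFormat({ photographic: true })); // jpg
console.log(suggestFormat({ animated: true }));     // gif
console.log(suggestFormat({ colors: 256 }));        // png
```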

Another key facet of front-end optimization is doing everything in your power as a developer to combat device limitations for your users. This includes designing adaptively: loading resources on user demand and sizing images by screen size to ensure the fastest load time, to name a few approaches. Practicing progressive rendering – loading an image entirely at lower quality and progressively enhancing it as more bandwidth becomes available – helps ensure users on slow connections or hardware still get the full experience, even if it starts off a bit fuzzy. JavaScript slowness can be a debilitating issue on slower CPUs; keeping your JavaScript to what’s necessary (without betraying your functionality needs, of course!) can help every user enjoy your website easily and speedily.
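A minimal sketch of the “customize images by screen size” idea, assuming a set of pre-generated image widths (the sizes here are invented): pick the smallest asset that still fills the user’s screen, so narrow devices never download desktop-sized images.

```javascript
// Assumed pre-generated image widths, smallest to largest.
const widths = [320, 768, 1280, 1920];

// Serve the smallest image that still covers the screen width.
function pickImageWidth(screenWidth) {
  for (const w of widths) {
    if (w >= screenWidth) return w;
  }
  return widths[widths.length - 1]; // very wide screens get the largest
}

console.log(pickImageWidth(360));  // 768
console.log(pickImageWidth(1440)); // 1920
```

In the browser the same idea is usually expressed declaratively with `srcset`/`sizes` attributes rather than hand-rolled logic.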

The presenters finished with a few tools for measuring the performance of front-end and mobile development. Webpagetest.org can be used on internal sites – great for entities with a large intranet presence. PageSpeed is a browser plugin for testing and gathering data on load times. Mobitest is optimized for mobile speed testing, and the Chrome Remote Debugger and Safari Web Inspector let you plug in an Android or iOS device respectively and test for performance.

Overall, a lot of great information here. Some of it I was a bit leery of, given my own ways and my justifications for them, but I could see the merit in what the speaker was suggesting – it was, at the very least, worth considering, and potentially worth implementing aspects of in each project as the struggle between optimizing and “getting it done” rages on. Regardless, there was plenty I learned or at least gained a stronger awareness of, and I’m very glad I attended the workshop to have my eyes opened a little wider.

“There are two ways of constructing software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult.” – C.A.R. Hoare

GHC Reflections: Mobile Design & Security

This lightning panel was rather interesting, as the topics were fairly varied but all great to consider for mobile design and the future of data and security.

The first talk discussed a user’s “social fingerprint” – a mathematically unique signature of how a user interacts with their mobile device across social networks, texting, calling, etc. Essentially, every user boils down to using their device in a slightly different way – when these patterns are calculated, no two are exactly alike. This is an interesting concept: we often think everyone talks, texts, or checks Facebook identically, but apparently this could not be farther from the truth. Social fingerprint is more than just -how-, it is who and when: time zones, contacts frequented, and more all make up the social fingerprint. The term is often used to describe our social usage in general, but it can be investigated deeper to create this truly unique representation of our habits.
The speaker pointed out that if our social fingerprints are indeed unique, they could be used in some capacity for security measures, such as fraud detection. Exploring secure measures beyond the password is definitely exciting territory. I worry, though, that a social fingerprint is “too” unique – in the sense that it could constantly change. If you cut ties with someone you used to call every day, would that not raise an alarm in social fingerprint detection? Obviously social media has ways to trend anticipated life events and interactions between people based on the sheer amount of data – but can everything truly be boiled down to a mathematical signature? I’m excited by the prospect of using social fingerprints, but concerned about their actual application – especially if the math and inputs are as complex as they seem they may be.

Another take on security was utilizing GPS to ensure secure interactions. Specifically, the speaker discussed GPS as a means to identify “zones” in the real world where one anticipates accessing devices, and the level of confidence that at those locations they are indeed themselves. For instance: home and work may be level 1, where we are confident that if we are here, our device is being accessed by us. Level 2 may be the cafe or laundromat, which we frequent but where we may accidentally leave the device unattended. Level 3 could be our hometown, neighborhood, or even state: where we can be expected to be in general, but could easily lose a device within. And level 4 might be anywhere else globally: access from these places would be irregular or unanticipated.

The presenter discussed using these levels to give varying degrees of password/access assistance. If I’m at home and forget my password, I expect to receive all my hints and assistance channels for logging in. On the town, I may want fewer options to appear, just in case someone else is on my device. And I would most definitely want heightened security against anyone attempting access when I’m out of state or country (or trying to access -from- those places), so their hints should be extremely restricted, if present at all. The idea was to provide “secure spaces” that heighten security beyond just the password, guarding against attempts to breach it or obtain information pertaining to it.
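The leveled policy above is easy to sketch in code. The hint channels here are my own invented example – the talk only described the four trust levels:

```javascript
// Trust levels per the talk: 1 = home/work, 2 = frequented public spots,
// 3 = home region, 4 = anywhere unexpected. Fewer hints as trust drops.
function hintsForZone(level) {
  switch (level) {
    case 1: return ['password hint', 'email reset', 'security questions'];
    case 2: return ['email reset', 'security questions'];
    case 3: return ['email reset'];
    default: return []; // unexpected location: no assistance at all
  }
}

console.log(hintsForZone(1).length); // 3
console.log(hintsForZone(4));        // []
```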

This topic is intriguing looking back, because Microsoft has been implementing a similar feature in Outlook. While I appreciate their security, at times it can be a bit overbearing – my work’s servers ping off a cluster that isn’t near us geographically, and this triggers the “suspicious activity” login flow any time I try to get to my email at work. The security concept is great – but something like the presenter discussed, where I have more of a choice in defining my regions, would definitely save headaches (like when I try to log in at work for one small thing, only to face a chain of security measures whose details may be at home). It’s definitely interesting to see this idea being implemented, and I’m curious what the next steps with it will be.

Another speaker in this panel discussed A/B testing – something, among many other kinds of testing, that I’m hoping to become more familiar with in my job. They stated that a strong A/B test can be made even more helpful by integrating code to retrieve data on user input or mouse movements, so patterns between sets A and B can be recognized and the user process more readily understood. Sessions and their data can be stored in buckets by version – and even by time/cycle or type of user – for quicker retrieval and review.

The next topic was accessibility in mobile. This one was fairly straightforward, but always refreshing to keep in mind. The presenter highly recommended considering the accelerometer – think of technologies like FitBit, and how accessible its use is beyond just software and screens. Other considerations for accessibility: touch and sound. Consider your feedback to users: a soft pulse/vibration when they press a button, a light ding when an alert appears. Remember to consider how these affordances affect the experience for users who are color-blind, deaf, etc. Are your notification color choices still helpful, or even viewable, to someone who is color-blind? Does your application give another form of feedback if a user is deaf and cannot hear an anticipated ding (a glowing icon, a tactile response, etc.)?

The final presenter discussed flexible privacy controls. With the advancement of digital healthcare records and increasingly sensitive information going digital, companies at times forget the affordances physical/paper copies allowed that now need digital counterparts. The presenter used healthcare as an example: certain health records you would like visible to your spouse, certain ones to your family, and certain ones only to yourself, your doctor (or only certain doctors), and so on. These preferences may also change over time: think of a bank account a parent can access while a child is in school, but whose access the child may need or wish to remove once they are grown. While these issues were once fixed with phone calls or paperwork, digital counterparts need flexible privacy controls so users can take care of these privacy needs with the same ease (or at least the same-to-less amount of headache) as they did in analog. Flexible privacy controls can even extend to securing the applications themselves: if my healthcare app is linked to my phone, I may want additional security measures before the app starts, to ensure no one can tamper with my settings but me (and here we can even correlate to the earlier talks for more ways to secure our privacy!).

I loved the focus in so many of these talks on users, their experiences interacting with their phones, and how that relates to the real world. They pointed out design imperatives and areas of continued development that make phones – and in turn technology overall – an extension of and addition to the “real world”, rather than purely a distraction or a separate plane entirely.

“The mobile phone acts as a cursor to connect the digital and the physical” – Marissa Mayer

GHC Reflections: Video Games

The video games topic was definitely helpful not just from a video gaming perspective, but from future technologies and augmenting reality points of view as well. Even if you’re not an intrepid game developer, some of the points were definitely worth noting for any developers, and even interactive media/story planners. Intrigued? I was. Read on for more.

ReconstructMe (http://reconstructme.net), paired with the right camera technology (they suggested an Asus camera), was an interesting project, showcased specifically between Maya and Unity. The basic premise of ReconstructMe is to rotate a camera around an object and render a true-to-life 3D model of it: a backpack, a laptop, a tree. You could use the technology even on animals and humans – though of course only in a single pose, unless you were able to edit the model joints from the mesh (which I am uncertain of the capability for). You can then retrieve the model in a 3D tool such as Maya and paint mesh skins for it, use it in any 3D application (such as Unity), or even configure the models for 3D-printed replications (like making statues of yourself to put on trophies – for being awesome, of course). When you want true-to-life object models, or need a model quickly, ReconstructMe definitely looks like a viable option.

The next presenter focused on developing a hierarchy for critically evaluating learning games, so that they can be more widely accepted and used in STEM classrooms and their merit understood on a broad metric scale. She based her evaluation on Bloom’s Taxonomy, with criteria for Remembering, Understanding, Applying, Analyzing, Evaluating, and Creating. She would then correlate the objectives of the game and player actions to these categories – if task one in a game was to design your own character, she may check that creativity is present in task one. Examples she had of quality STEM teaching games were CodeHero (for Unity and Variables) and Spore (for predator prey interaction). It was intriguing to see someone attempt to quantify a metric for gaming and entertainment based on valuable content rather than personal preference. Something like this, if done with care and properly implemented, could easily make its way into school systems to evaluate games that could be used in the core curriculum and have value to students – an exciting prospect for getting children excited about learning in a fun and different way!

Next, we focused on developing true stories in games – striving for non-linearity. One of the largest downfalls of gaming as a story medium is that our stories often end up linear: this interaction must occur before this event, leading up to the final boss and ending. While this linearity seems near unavoidable from a coding perspective, this talk focused on ways to branch our stories so that linearity does not become a limitation. A key takeaway: our stories may be linear, but our gameplay should strive to be non-linear.

One suggestion was “satellite scenes”, which fire off a player action and dynamically modify a tiny bit of the story, until the fragments become the linear whole. Scenes that are the quintessential backbone of the story – that must exist, or must occur in a certain order – are known as “kernel scenes”. More open-world, progressive, non-linear gameplay therefore lies in tying satellite scenes to shaping the world, and not overpowering the game with a progression of consecutive kernel scenes. Some terminology to remember as a takeaway, too: actors perform actions, the world changes, and these events should relate to each other – and always remember that actors should be people, not things: two or more agents who understand each other and respond properly. Put effort and focus into depth in satellite scenes and letting players see the little changes their choices make to the world at large (a strong core story with flexible, relevant nodes that add to gameplay), and your game will provide depth beyond the standard linear story.
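The kernel/satellite vocabulary can be illustrated with a toy structure – every scene name and the reputation mechanic here are invented for the example:

```javascript
// Kernel scenes are the fixed backbone; satellites accumulate from play.
const story = {
  kernels: ['meet mentor', 'lose mentor', 'final battle'], // fixed order
  satellites: [],                                          // player-driven
};

// A satellite scene fires off a player action and nudges the world state,
// leaving the kernel backbone untouched.
function onPlayerAction(action, world) {
  if (action === 'help villager') {
    story.satellites.push('villager remembers you');
    world.reputation += 1; // a small, visible ripple from the choice
  }
  return world;
}

const world = onPlayerAction('help villager', { reputation: 0 });
console.log(world.reputation);     // 1
console.log(story.kernels.length); // unchanged: still 3 backbone scenes
```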

This intrigued me from the standpoint of experiences (as a user experience lover!) – be the experience a story, a video game, or an alternate reality/marketing plan, considering the ripple effect on individual users, rather than just the funnel to the end goal, can definitely add finesse and excitement to any endeavor where participation and enthusiasm from the audience is hoped for!

The final presenter discussed the Xbox SmartGlass, which is relevant to contextual use of augmented reality and future media consumption beyond video games. SmartGlass is designed to turn any smart device into a controller. It accounts for the device and the media being used through simplified text entry and a contextual user interface – hoping to keep users engaged even when away from their initial screen, or to keep them interacting with their secondary device while at the primary screen. Examples included Forza, where the second device provides an aerial GPS view, or a game like God of War, where SmartGlass may provide hints, maps, weaknesses, or additional scenes and content contextually as you progress, so there is never a need to look up a game guide. Again, as a UX person, I loved the idea of contextual content and assistance adding depth for users without additional work on their part, and without distracting them if they do not wish to utilize that aspect of the experience. I would love to see more contextual work like SmartGlass appearing in other media, and hopefully, as AR continues to develop, on more devices as well.

As a lover of video games, I went into this talk expecting to be happy I went even if the content was lacking (because video games!). Instead I found quite a bit of content that inspired me beyond what I anticipated, and points for innovation beyond the gaming sphere. It’s amazing how gaming has become so strongly linked to experiences and technology development in our culture, and it’s exciting to see the possible applications across other modes and mediums as we continue to develop these immersive entertainment worlds.

“Video games foster the mindset that allows creativity to grow.” – Nolan Bushnell

ReconstructMe copyright ReconstructMe Team, Spore copyright Spore, Xbox copyright Microsoft

GHC Reflections: Web and Mobile Dev

The web and mobile dev lightning talk featured tons of technologies and trends for the next generation of development.

“World of Workout” was a concept for a contextual mobile RPG based in real-world fitness. It would use pattern recognition to identify users’ workouts, sparing them the complexity of inputting the info themselves (e.g., with the phone in an arm holster during a workout, it can recognize the motion of squats). The workout info would then affect the progress of the game avatar, with stats tied to the workouts done by the user: speed boosts for sprinting, strength for weights, and stamina for distance running. Another interesting feature they proposed was accelerated improvement at the start of the game, so users are encouraged to get into a daily routine, plus a fatigue factor so that rewards are reduced when workouts become excessive. There would also be random rewards, with associated notifications, for doing “challenge” workouts with extra benefits attached.
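A toy version of those reward rules might look like this – the stat names follow the talk’s examples, but the numbers and the fatigue threshold are my own guesses at the concept:

```javascript
// Each workout type feeds a different avatar stat, per the proposal.
const statFor = { sprinting: 'speed', weights: 'strength', distance: 'stamina' };

// Apply a workout's reward, damped by a fatigue factor when the user
// has already worked out excessively that day.
function applyWorkout(avatar, type, sessionsToday) {
  const fatigue = sessionsToday > 3 ? 0.5 : 1; // excessive: rewards halved
  const gain = 10 * fatigue;
  const stat = statFor[type];
  avatar[stat] = (avatar[stat] || 0) + gain;
  return avatar;
}

let avatar = {};
avatar = applyWorkout(avatar, 'weights', 1); // fresh: full reward (+10)
avatar = applyWorkout(avatar, 'weights', 5); // excessive: half reward (+5)
console.log(avatar.strength); // 15
```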

This idea really resonated with me as part of the “future of user experience”: what better immersion is there than in a good game? And as we have learned, users appreciate apps responding to them and to receive gratification: which pattern recognition and rewards both do. After seeing this idea, I sketched out the concept for a similar game-incentive idea during a hackathon: TaskWarriors, an RPG based on checking things off your task list and gaining skill and gold based on the priority of the task and type of task (helping you balance your days -and- ensure you complete high priority tasks before their deadlines). I’d really like to re-explore TaskWarriors, since if done right, I think it could work very well like World of Workout seems (hopefully) fated to. It has also gotten me considering other avenues where gamification/customization and rewards could help with immersion and user experience – hopefully I can learn more and get more chances to potentially implement this in the future!

Parallax scrolling was another feature discussed during this talk: specifically, technologies with features that can aid or enhance parallax development. JavaScript and CSS3 were discussed for transitions, transforms, and opacity, while HTML5’s Canvas, WebGL, and SVG were also noted. Flash, VML, YUI scroll animation, jQuery plugins such as Jarallax, and JavaScript libraries such as pixi.js or tween.js (for easing effects) were also featured as possible parallax technologies.

Parallax is definitely an intriguing artistic feature for making a website seem more interactive. Obviously, like any interactive feature, there’s definitely a point where it could be much too much. But there are some beautiful parallax scrolling websites that show what an awesome addition it can be to your content, especially on websites telling a story with a long scrolling page, like this one: http://jessandruss.us/
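Under all those libraries, the core of the parallax effect is tiny: background layers translate by a fraction of the scroll distance. A browser version would feed `window.scrollY` into something like this on each scroll event; here it’s reduced to a pure function:

```javascript
// Each layer moves at scrollY * depthFactor; factors below 1 make a
// layer lag behind the foreground, creating the illusion of depth.
function parallaxOffset(scrollY, depthFactor) {
  return scrollY * depthFactor;
}

console.log(parallaxOffset(400, 0.5)); // 200 – distant background layer
console.log(parallaxOffset(400, 1));   // 400 – foreground, normal speed
```

In the browser you’d apply the result with `element.style.transform = 'translateY(…px)'`, ideally inside a requestAnimationFrame callback to keep scrolling smooth.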

3D graphics for web programmers was actually highly interesting to me. I’m terrible at making models (at least at present), but I have a bit of experience with Unity and have always found 3D development interesting, even though I’m not the best at it right now. Though I would need to learn modeling to actually implement it, the presentation focused on three.js, a library that seems to make it extremely easy to program 3D elements directly into web pages – rather than building them in Flash, Unity, or another engine. At its most basic core, three.js uses a what (a mesh for the item and a point light for the light source), a where (a scene and PerspectiveCamera), and a how (translate, rotate, scale; requestAnimationFrame) to render and move 3D objects. Source code is available at http://github.com/shegeek/teapots_can_fly, in which the presenter used only three.js, teapot.js (the item), and an HTML5 page to create the example.

CourseSketch was the final web and mobile technology shown, which was also really exciting from a college student perspective. It is a sketch-based learning platform being developed for MOOCs, which would allow recognition of sketches to enhance the automated grading capabilities of online problems. Examples in development included truss diagrams for engineering, compound diagrams for chemistry, and Kanji for Japanese. Of course, with many more courses moving to online submission and grading, one can see applications for this technology well beyond the MOOC platform and into more education avenues – given, of course, that the technology were robustly developed, taking into account various drawing styles and other hiccups that may occur.

Overall there were a lot of intriguing development tools and concepts discussed. Obviously this talk hit home with me as World of Workout inspired the beginning conceptualization and planning for the Task Warriors app, even if it hasn’t seen fruition (yet! I hope I can continue it!). I love talks like these that bring to light new ideas and useful technologies – they have so much inspiration and energy within them that drives tech forward.

“One machine can do the work of fifty ordinary men. No machine can do the work of one extraordinary man.” – Elbert Hubbard

Task Warriors copyright Bri, Jess and Russ copyright JessandRuss.us