GHC Reflections: Video Games

The video games topic was helpful not just from a video gaming perspective, but from future-technology and augmented reality points of view as well. Even if you’re not an intrepid game developer, many of the points are worth noting for any developer, and even for interactive media/story planners. Intrigued? I was. Read on for more.

ReconstructMe (http://reconstructme.net), paired with the right camera technology (they suggested an Asus camera), was an interesting project, showcased specifically in a workflow between Maya and Unity. The basic premise of ReconstructMe is to rotate a camera around an object and then render a true-to-life 3D model of that object: a backpack, a laptop, a tree. You could even use the technology on animals and humans – though of course you would capture only a single pose, unless you were able to edit the model joints from the mesh (a capability I am uncertain of). You can then retrieve the model in a 3D tool such as Maya and paint mesh skins, so you can use it in any 3D application (such as Unity), or even configure the model for 3D-printed replicas (like making statues of yourself to put on trophies – for being awesome, of course). When you want a true-to-life object model, or need a model quickly, ReconstructMe definitely looks like a viable option.

The next presenter focused on developing a hierarchy for critically evaluating learning games, so that they can be more widely accepted in STEM classrooms and their merit understood on a broad, consistent scale. She based her evaluation on Bloom’s Taxonomy, with criteria for Remembering, Understanding, Applying, Analyzing, Evaluating, and Creating. She would then correlate the objectives of the game and player actions to these categories – if task one in a game was to design your own character, she might mark that Creating is present in task one. Her examples of quality STEM teaching games were CodeHero (for Unity and variables) and Spore (for predator-prey interaction). It was intriguing to see someone attempt to quantify a metric for gaming and entertainment based on valuable content rather than personal preference. Something like this, done with care and properly implemented, could easily make its way into school systems to evaluate games for use in the core curriculum – an exciting prospect for getting children excited about learning in a fun and different way!
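To make the idea concrete, here is a minimal sketch (in Python, with a hypothetical rating scheme of my own invention, not from the talk) of how game tasks could be mapped to Bloom’s Taxonomy categories and tallied into a coverage score:

```python
# The six Bloom's Taxonomy categories the presenter used as criteria.
BLOOM = ["Remembering", "Understanding", "Applying",
         "Analyzing", "Evaluating", "Creating"]

def coverage(task_ratings):
    """task_ratings maps each game task to the set of Bloom categories
    it exercises; return the fraction of the taxonomy the game touches."""
    touched = set().union(*task_ratings.values()) if task_ratings else set()
    return len(touched & set(BLOOM)) / len(BLOOM)

# Hypothetical ratings for two tasks in an imagined STEM game:
game = {
    "design your own character": {"Creating"},
    "balance the ecosystem": {"Analyzing", "Evaluating"},
}
```

A single number like this would never replace the per-task evaluation she described, but it shows how such a rubric could yield the “broad metric scale” comparison across games.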

Next, we focused on developing true stories in games – striving for non-linearity. One of the largest downfalls of gaming as a storytelling mode is that our stories often end up linear: this interaction must occur before this event, leading up to the final boss and the ending. While this linearity seems near unavoidable from a coding perspective, this topic focused on ways to branch our stories so that linearity does not become a limitation. A key takeaway: our stories may be linear, but our gameplay should strive to be non-linear. One suggestion was “satellite scenes,” which are triggered by a player action and dynamically modify a tiny bit of the story, until the fragments become the linear whole. Scenes that are the quintessential backbone of the story – that must exist, or must occur in a certain order – are known as “kernel scenes.” More open-world, progressive, non-linear gameplay therefore lies in tying satellite scenes to shaping the world, rather than overpowering the game with a progression of consecutive kernel scenes. Some terminology to remember as a takeaway: actors perform actions, the world changes, and these events should relate to each other – and always remember that actors should be people, not things: two or more agents who understand each other and respond properly. Put effort into giving satellite scenes depth and letting the player see the little changes their choices make to the world at large (a strong core story with flexible, relevant nodes that add to gameplay), and your game will provide depth beyond the standard linear story.
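As a minimal sketch of the kernel/satellite split (in Python, with all names and scenes my own hypothetical examples, not from the talk): kernel scenes form the fixed story spine, while satellite scenes fire off player actions and only mutate world state.

```python
class Story:
    def __init__(self, kernels):
        self.kernels = kernels   # ordered, mandatory story beats
        self.satellites = {}     # player action -> world change
        self.world = {}          # accumulated world state
        self.progress = 0        # index of the next kernel scene

    def on(self, action, change):
        """Register a satellite scene triggered by a player action."""
        self.satellites[action] = change

    def act(self, action):
        """Player actions reshape the world without touching the spine."""
        if action in self.satellites:
            self.world.update(self.satellites[action])

    def advance(self):
        """Kernel scenes always play in order, whatever the world state."""
        scene = self.kernels[self.progress]
        self.progress += 1
        return scene

story = Story(["inciting incident", "midpoint twist", "final boss"])
story.on("spare the guard", {"guards_friendly": True})
story.act("spare the guard")   # satellite scene: the world shifts...
story.advance()                # ...but kernel scenes keep their order
```

The design point is that player choice accumulates in `world` (which kernel scenes can read and reflect back to the player), while the spine itself stays intact – linear story, non-linear gameplay.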

This intrigued me from the standpoint of experiences (as a user experience lover!) – be the experience a story, a video game, or an alternate reality/marketing plan, considering the ripple effect on individual users, rather than just the funnel to the end goal, can add finesse and excitement to any endeavor that hopes for participation and enthusiasm from its audience!

The final presenter discussed Xbox SmartGlass, which is relevant to contextual use of augmented reality and future media consumption beyond video games. Xbox SmartGlass is designed to turn any smart device into a controller. It adapts to the device and the media it is being used with through simplified text entry and a contextual user interface – with the hope of keeping users engaged even when away from the primary screen, or keeping them interacting with their secondary device while at the primary screen. Examples included Forza, where the second device provides an aerial GPS view, or a game like God of War, where SmartGlass might provide hints, maps, weaknesses, or additional scenes and content contextually as you progress, so there is never a need to look up a game guide. Again, as a UX person, I loved the idea of contextual content and assistance or depth added for users without additional work on their part, and without distracting them if they do not wish to use that aspect of the experience. I would love to see more contextual work like SmartGlass appearing in other media, and, as AR continues to develop, on more devices as well.

As a lover of video games, I went into this talk expecting to be happy I went even if the content was lacking (because video games!). Instead I found quite a bit of content that inspired me beyond what I anticipated, and points for innovation beyond the gaming sphere. It’s amazing how gaming has become so strongly linked to experiences and technology development in our culture, and it’s exciting to see the possible applications across other modes and mediums as we continue to develop these immersive entertainment worlds.

“Video games foster the mindset that allows creativity to grow.” – Nolan Bushnell

ReconstructMe copyright ReconstructMe Team, Spore copyright Spore, Xbox copyright Microsoft

GHC Reflections: Web and Mobile Dev

The web and mobile dev lightning talk featured tons of technologies and trends for the next generation of development.

“World of Workout” was a concept for a contextual mobile RPG based on real-world fitness. It would use pattern recognition to recognize users’ workouts, sparing them the complexity of having to input the information themselves (e.g., with the phone held in an arm workout holster, it can recognize the motion of squats). The workout info would then affect the progress of the game avatar, granting stats tied to the workouts the user completes, such as speed boosts for sprinting, strength for weights, and stamina for distance running. Another interesting proposed feature was accelerated improvement at the start of the game, so users are encouraged to get into a daily routine, plus a fatigue factor that reduces rewards when workouts become excessive. There would also be random rewards, with associated notifications, for doing “challenge” workouts with extra benefits attached.
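The reward curve they proposed could look something like this minimal Python sketch (every threshold, stat name, and number here is my own invention for illustration, not from the presentation): early workouts pay out extra to build the habit, and excessive daily volume pays out less.

```python
# Hypothetical mapping from recognized workout type to avatar stat.
STAT_FOR = {"sprinting": "speed", "weights": "strength", "distance": "stamina"}

def reward(workouts_today):
    """Base reward of 10, boosted for the first workout of the day
    (accelerated improvement), damped once volume gets excessive
    (the fatigue factor)."""
    if workouts_today <= 1:
        return 15
    if workouts_today > 4:
        return 5
    return 10

def apply_workout(avatar, workout, workouts_today):
    """Credit the recognized workout to the matching avatar stat."""
    stat = STAT_FOR[workout]
    avatar[stat] = avatar.get(stat, 0) + reward(workouts_today)
    return avatar
```

For example, a first sprint of the day boosts speed more than a fifth set of weights boosts strength – exactly the habit-building-plus-fatigue shape described above.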

This idea really resonated with me as part of the “future of user experience”: what better immersion is there than a good game? And as we have learned, users appreciate apps that respond to them and provide gratification – which pattern recognition and rewards both do. After seeing this idea, I sketched out the concept for a similar game-incentive idea during a hackathon: TaskWarriors, an RPG based on checking things off your task list and gaining skill and gold based on the priority and type of each task (helping you balance your days -and- ensure you complete high-priority tasks before their deadlines). I’d really like to re-explore TaskWarriors, since if done right, I think it could work as well as World of Workout seems (hopefully) fated to. It has also gotten me considering other avenues where gamification, customization, and rewards could help with immersion and user experience – hopefully I can learn more and get chances to implement this in the future!

Parallax scrolling was another feature discussed during this talk – specifically, technologies that can aid or enhance parallax development. JavaScript and CSS3 were discussed for transitions, transforms, and opacity, while HTML5’s Canvas, WebGL, and SVG were also noted. Flash, VML, YUI scroll animation, jQuery plugins such as Jarallax, and JavaScript libraries such as pixi.js or the easing/tween library tween.js were also featured as possible parallax technologies.

Parallax is definitely an intriguing artistic feature for making a website feel more interactive. Obviously, like any interactive feature, there’s a point where it becomes much too much. But there are some beautiful parallax scrolling websites that show what an awesome addition it can be to your content, especially on sites telling a story with one long scrolling page, like this one: http://jessandruss.us/
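Whatever library implements it, the core of the parallax effect is simple: each layer translates at its own fraction of the page scroll, so backgrounds drift slower than the foreground. A minimal sketch of that math (in Python for illustration; a real site would do this in CSS or JavaScript):

```python
def layer_offset(scroll_y, speed_factor):
    """Pixels to translate a layer for a given page scroll position.
    speed_factor < 1.0 lags behind the page (background layers);
    speed_factor == 1.0 moves exactly with the page (foreground)."""
    return scroll_y * speed_factor

# At a scroll of 400px, a 0.3-speed background layer has drifted only
# 120px, while a 1.0-speed foreground layer has moved the full 400px -
# that relative motion is what creates the sense of depth.
```

Tuning those speed factors per layer (and keeping them subtle) is most of the craft; the storytelling sites mentioned above pair this with content revealed at scroll thresholds.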

3D graphics for web programmers was actually highly interesting to me. I’m terrible at making models (at least at present), but I’ve had a bit of experience with Unity and have always found 3D development fascinating. Though I would need to learn modelling to actually implement it, the presentation focused on three.js, a library that makes it remarkably easy to program 3D elements directly into web pages – rather than building them in Flash, Unity, or another engine. Three.js uses a what (a mesh for the item and a point light for the light source), a where (a scene and a PerspectiveCamera), and a how (translate, rotate, scale; requestAnimationFrame) at its most basic core to render and move 3D objects. Source code is available at http://github.com/shegeek/teapots_can_fly, in which the presenter used only three.js, teapot.js (the item), and an HTML5 page to create the example.

CourseSketch was the final web and mobile technology shown, which was also really exciting from a college student’s perspective. It is a sketch-based learning platform being developed for MOOCs that would use sketch recognition to enhance the automated grading of online problems. Examples in development included truss diagrams for engineering, compound diagrams for chemistry, and kanji for Japanese. With many more courses moving to online submission and grading, one can see applications for this technology well beyond the MOOC platform and into other education avenues – provided, of course, that the technology were robustly developed, taking into account varied drawing styles and other hiccups that may occur.

Overall there were a lot of intriguing development tools and concepts discussed. Obviously this talk hit home with me as World of Workout inspired the beginning conceptualization and planning for the Task Warriors app, even if it hasn’t seen fruition (yet! I hope I can continue it!). I love talks like these that bring to light new ideas and useful technologies – they have so much inspiration and energy within them that drives tech forward.

“One machine can do the work of fifty ordinary men. No machine can do the work of one extraordinary man.” – Elbert Hubbard

Task Warriors copyright Bri, Jess and Russ copyright JessandRuss.us

GHC Reflections: Augmented Reality and Streaming Media

The augmented reality segment focused on getting users’ attention when they view the world through the lens of a device, and then providing them with relevant information – for instance, seeing info labels pop up about locations in an unfamiliar place. One problem with labels, however, is contextually linking them to the described object – and ensuring that each label is still large enough relative to the screen to be helpful, without clustering so tightly that it causes clutter. Solving this placement problem would definitely help users navigate these real-world scenes.

Eye tracking was a highlighted topic for augmented reality – and when discussing label placement, this is definitely understandable. Knowing where a user is going to look can ensure that contextually relevant labels appear there – and it decreases the number of labels that need to be populated at a time, making the clutter problem all but disappear. Eye tracking methods include infrared light detecting the pupil and heat maps of vision. The latter is good for studying eye movements, but the former could be integrated into devices and actually utilized in real software for users.

A follow-up to the idea of contextually populating based on eye tracking does, however, raise a few issues of its own. For instance, how can one ensure that label transitions after the eye moves are not too distracting? Sudden or jerky movements would pull the user’s gaze back to the label, which could definitely throw off the eye tracking. “Subtle Gaze Modulation” is the concept of using just the right movement to draw the eye, but terminating the stimulus before the gaze reaches its destination. Think of a blinking or glowing-then-dimming light, drawing you toward it but disappearing before your eye lands on the spot that was radiating. Photography “tricks” like dodge and burn or blur can heighten contrast and create the same sort of gaze-catching effect. And for anyone interested, the formula used in the presentation for gaze modulation was:

θ = arccos((v · w) / (|v| |w|)),

where v is the line of vision from the current focus and w is the desired line of focus, giving the angle between the two.
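That angle-between-vectors formula is straightforward to compute; here is a small Python sketch of it (the function name and 2D vectors are my own illustration):

```python
import math

def gaze_angle(v, w):
    """Angle in radians between the current line of vision v and the
    desired line of focus w: arccos of their dot product divided by
    the product of their magnitudes."""
    dot = sum(a * b for a, b in zip(v, w))
    mag = math.sqrt(sum(a * a for a in v)) * math.sqrt(sum(a * a for a in w))
    return math.acos(dot / mag)

# Looking straight ahead (0, 1) with a target directly to the side
# (1, 0) gives an angle of pi/2 radians, i.e., 90 degrees.
```

A gaze-modulation system could use this angle to decide how strong a stimulus to show and when to cut it off as the gaze closes on the target.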

The Streaming Media presentation was fairly standard, pertaining to connection quality versus video quality. Adaptive Streaming – the “Auto” default on YouTube, for example – is the concept of the requested stream quality changing relative to signal strength. The ideal of Adaptive Streaming is to ensure the user’s stream is not interrupted: quality may fluctuate, but there should be no buffer waits, and the video/media should always be visible. The encoding can also play a huge factor in video: compression reduces file size, but at the obvious cost of quality. The quality levels available for a video to choose from during adaptive streaming depend on file size – factors such as resolution (HD/SD), bitrate, and frames per second (fps). Reducing frames per second can shrink a file with potentially minimal consequences: video files contain a lot of redundancy (think of all the frames – many are nearly identical), and the human eye cannot register them all anyway. Codecs are compression and decompression algorithms that minimize the perceptible impact of video file reduction by exploiting these redundancies humans cannot notice.

As a budding UX professional, the eye tracking points were of particular intrigue to me. I would love to play with similar techniques in digital designs, in an attempt to help my users follow the path without over-correcting or pushing them as they adapt and explore on their own. It would be interesting to see how this could be refined to be subtle but assistive as needed.

“Any sufficiently advanced technology is indistinguishable from magic.” – Arthur C. Clarke

All research copyright their respective owners

Veronica Mars and the Crowdsourced “Long Tail”

Alright, I’ll admit it: I’m a marshmallow and I believe in LoVe. Regardless of how much Kristen Bell can get me to fangirl about her misadventures into private eye work, the story of “The Veronica Mars Movie” is an interesting case study in marketing, specifically, marketing to the long tail.

As most familiar with marketing are already aware, the long tail is the section of the market that does not cater to the most common buyers. These are your specialty items, your underwater basket weavers or fans who need a figurine of that one character in that one episode of that one show that -one- time. The beauty of the long tail is that, while you won’t hit many buyers, those you do hit will invest almost unrivaled amounts because they really, /really/ want that product and have few places to get it.

Veronica Mars, for those who (sadly, sadly, sadly) haven’t heard of it, debuted in 2004 and featured a young Kristen Bell working various whodunit cases using the skills she learned from her father’s private eye business. Cases ranged from the mundane (who’s rigging the school ballots?) to larger overarching arcs (who killed Veronica’s best friend?). The show ended in 2007, after seeing Veronica through one year of college. While never a huge success in the ratings, the show received critical acclaim for its well-written noir presence, epic female lead, and intriguing blend of humor, wit, and drama. Much like Firefly and many other shows with highly engaged, supportive fanbases, when Veronica Mars was cancelled, it left a huge hole – and fans wanting more.

Why hello long tail, there you are. (read in Veronica’s internal monologue voice for added ‘tude)

Seven years after the cancellation, Rob Thomas appeared on Kickstarter encouraging fans of the series to fund a Veronica Mars movie. The response? Astounding. Even if you weren’t a marshmallow, anyone following news of successful Kickstarters surely heard of Veronica Mars utterly smashing almost every Kickstarter record: two million dollars raised in less than ten hours, breaking the records for the fastest project to gross one million and two million dollars. It also set records for the highest minimum pledge goal achieved and the largest successful film project, and, on its final day, for the most backers of any Kickstarter project. (Note: this has since been beaten only by the Reading Rainbow project – which seems to have taken an obvious cue from this one’s success!)

For those who think the long tail isn’t a worthwhile investment: let Mars Investigation show you it depends on the long tail you’re trying to cater to.

The film itself, set nine years after the final season in terms of timeline, received overall good reviews from critics, coming in at 77% and 6.4/10 on Rotten Tomatoes and 61/100 on Metacritic. Tabulating box office earnings was a bit difficult for the Mars movie, given its funding source and the fact that the prizes for some backer tiers were copies of, or tickets to, the film – meaning a large portion of watchers were already absorbed into the funding. Despite this, Veronica Mars still grossed $1.988 million opening weekend, coming in at #11 on the box office charts, and reached a worldwide total of $3.485 million – again, not including those backer-watchers, who had already ensured the costs of film production were covered with the $5.702 million they gave to the Kickstarter. This means most if not all of the total box office revenue should ideally have been profit for the film, and if one wants to argue semantics, the true total is more like $9.187 million, counting the Kickstarter funds toward the film’s gross.

What do all of these numbers mean? Even if this film wasn’t a box office smash, it created huge ripples, and at very low cost to Warner Bros, who facilitated the digital distribution and ensured screening of the movie in 291 theaters across the nation.

While the movie might not have hit record-breaking numbers at the box office, it did pave the way for a new generation of entertainment catered to a willing long tail, and it certainly shocked the system as a reminder that if you build it – no matter how niche – the diehards will most certainly come, in their small but desiring droves.

“Do you not instinctively fear me? Maybe you should make yourself a note.” – Veronica Mars

Veronica Mars (c) Warner Bros. Entertainment