SIGGRAPH Appy Hour Q&A — Yosun Chang
1 Can you share a bit about your background? How did you come to this point?
I love making things and often come up with ideas that are too detailed or complicated to explain until I build them. So I learned everything I needed to build the things I wanted to build.
Like the Mad Hatter, I usually come up with a thousand ideas before breakfast. An entrepreneurial technologist at heart, I narrow the list down by market potential, grounded in firsthand insight into utility, and by making sure the concept uses enough next-gen technology to seem like magic while still being possible, and something I can build, myself, right now! In the late ’90s, when I first started as a prolific professional, the tools I used were Flash and Visual Studio; these days, it’s various computer vision solutions and AR SDKs plus Unity (and your server side of choice; mine is LAMP).
Between then and now, in the pursuit of building stuff, I became something of a generalist with too many specializations: from being an Autodesk 3D Studio Max Professional, to picking up OpenCV, Unity C#, CoreML, and dozens of other languages and frameworks (and my own), to being too good at hacking together minimum viable products, to winning all the big hackathons out there. (I also dabbled in an MD-PhD that turned into a triple major in Physics-Philosophy-Bioengineering, then a BS-MS in Material Physics, and then a complete escape from academia to found and direct Shakespearean theatre in a virtual world!)
My first mobile AR app was a ZBrush-inspired mess I called ClayAR in 2010, built using Unity 3.5 and Qualcomm Augmented Reality (now Vuforia). The goal was to combine multitouch and smartphone features like the gyroscope with computer-vision AR tracking on a marker to make organic-forms 3D modeling more intuitive, but I soon added way too many features, to the point where I was forgetting what I’d already done. It turned, horrifically, into Maya.
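For a sense of what that input mix looked like, here is a minimal sketch in Unity C#, assuming the sculpt target is already anchored to a tracked marker (the part the Vuforia side would handle); the class and names are illustrative, not ClayAR’s actual code:

```csharp
using UnityEngine;

public class GyroPinchControls : MonoBehaviour
{
    void Start()
    {
        // The gyroscope adds a device-motion axis of control on top of marker tracking.
        Input.gyro.enabled = true;
    }

    void Update()
    {
        // Ease toward the device attitude (the gyro-to-Unity frame conversion
        // is elided here for brevity).
        transform.localRotation = Quaternion.Slerp(
            transform.localRotation, Input.gyro.attitude, 0.1f);

        // Two-finger pinch scales the model, keeping fingers off tiny UI buttons.
        if (Input.touchCount == 2)
        {
            Touch a = Input.GetTouch(0), b = Input.GetTouch(1);
            float now  = (a.position - b.position).magnitude;
            float prev = ((a.position - a.deltaPosition) -
                          (b.position - b.deltaPosition)).magnitude;
            transform.localScale *= now / Mathf.Max(prev, 1f);
        }
    }
}
```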
I realized a few years ago that modern app development boils down to one prompt: how do you make complex software elegant? That’s the theme guiding the app platforms I build these days.
2 Two of your apps were included in SIGGRAPH 2018’s Appy Hour. What can they do, and what about them excites you?
When you make too much stuff, the hardest part tends to be selecting what to show without overwhelming people. I ended up choosing the two app platforms to submit based on SIGGRAPH 2018’s connected-things theme, on differing extremes of audience types, from indie to enterprise, and on areas of AR that hadn’t been explored as much yet.*
- AR Interfaces for IoT showed working examples of AR interfaces on IoT objects that you can control. It seemed like a good fit for enterprise use, and for people looking for very practical applications of AR.
- WallText is actually several different apps I built around augmented reality forms of text messaging.
  - WallSecret lets you post secrets on walls and other surfaces in the real world. It’s PostSecret graffiti meets Instagram meets AR. It’s a bit whimsical; I felt it would appeal to both visual and wordier storytellers.
  - ArtformAR lets you annotate specific parts of an artwork; those annotations can later be discovered and extended by other discussion participants.
  - Faked.cam lets you augment fake chat screenshots on your friend’s phone.
* One segment of AR apps I easily ruled out was projective 3D-model viewers: they’re the most common thing you see in the field, and I wanted to present forms of augmented reality that haven’t been seen as much yet.
3 What inspired both of these apps?
AR Interfaces for IoT was an idea I had in the bathtub at the Sydney Harbour Hilton after giving a talk at JSConf Down Under 2012, right after a 16-hour flight from SF. I really wanted to reach over to dim the lights. But wouldn’t it be great if the year were 2032 instead, and my bionic eyes could load an AR interface for me to air-slide the dimmer to just the right brightness: precise, device-free action at a distance? It took a few years for machine vision to catch up to the point where a hacker like me could just hack out a prototype and get customers.
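Once the fixture is recognized and an AR slider is anchored to it, the plumbing can be quite simple. Here is a minimal Unity C# sketch, assuming the lamp exposes a plain HTTP endpoint; the URL and JSON body are hypothetical stand-ins, not any particular device’s API:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class ArDimmer : MonoBehaviour
{
    // Hypothetical endpoint for the lamp on the local network.
    [SerializeField] private string lampUrl = "http://192.168.1.42/api/dim";

    // Hook this to the AR slider's onValueChanged (0 = off, 1 = full brightness).
    public void OnSlide(float brightness)
    {
        StartCoroutine(SendBrightness(brightness));
    }

    private IEnumerator SendBrightness(float brightness)
    {
        string body = "{\"brightness\": " + Mathf.RoundToInt(brightness * 100f) + "}";
        using (UnityWebRequest req = UnityWebRequest.Put(lampUrl, body))
        {
            req.SetRequestHeader("Content-Type", "application/json");
            yield return req.SendWebRequest();
            if (req.isNetworkError || req.isHttpError)
                Debug.LogWarning("Dimmer request failed: " + req.error);
        }
    }
}
```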
WallSecret was an experiment with SDF (signed distance field) fonts in augmented reality that turned out to look really good, and even better with LUT color filters. I then turned that tech discovery into something I could use to fulfil my whimsical dream of posting secret messages anywhere in the world for others (or myself) to find later.
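The core trick behind that look is tiny. Here is a minimal sketch of the math, assuming a font atlas whose alpha channel stores a distance field remapped to 0–1 with 0.5 at the glyph edge; in the app this would run per fragment in a shader, and the names are mine:

```csharp
using UnityEngine;

public static class SdfTextSketch
{
    // Convert a sampled distance value into glyph coverage. Because the atlas
    // stores distances rather than fixed pixels, edges stay crisp under the
    // constant rescaling AR imposes as the camera moves toward or away from a wall.
    public static float Coverage(float distanceSample, float smoothing = 0.05f)
    {
        // smoothstep(0.5 - smoothing, 0.5 + smoothing, distanceSample)
        float t = Mathf.Clamp01((distanceSample - (0.5f - smoothing)) / (2f * smoothing));
        return t * t * (3f - 2f * t);
    }
}
```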
ArtformAR grew out of my own version of the lonely traveler: visiting art museums in new cities, wishing to discuss the art, with no one around me wanting to talk. What if it were possible to hold discussions across time on the fabric of a painting?
Faked.cam was an idea I joked about with a friend who tried to help HoloYummy get its first paying customer. Wouldn’t it be silly if something like that became a killer app?
4 How long did both of these projects take from inception to SIGGRAPH?
AR Interfaces comprises a lot of different things I’ve built over the past few years. I wouldn’t call it 2018-new, but it did fit the conference’s IoT theme.
I have a lot of ideas. They haunt me if I don’t make them. So, I have to build fast.
WallSecret was one of those “app in a day, website in a week” things.
ArtformAR was a weekend app, built before visiting the National Gallery of Art for the first time in 2017.
Faked.cam is still a work in progress, but so far it amounts to an hour’s hack on the plane from SF to Vancouver.
5 It’s just the beginning of the New Year. How do you foresee mobile AR moving forward in 2019?
AR used to be a cool “I’m gonna steal your phone for that” magic trick to show off, but I see mobile AR becoming common enough that most people will simply expect it on their phones and in the everyday apps they use. I see AR, and by extension machine-learning computer vision, as a feature an existing app may include where it makes sense. The technology has been around for a long time, and it’s time the tool got properly used.
6 What’s next for you in the realms of mobile and/or AR?
I’m experimenting with methods to let artists create augmented reality filters and experiences without needing to learn tools beyond what they already know.
It’s inspired by an old hack I made in 2012 that lets you color in a coloring book to texture your 3D model. I started thinking about different utility forms built on an isomorphic tech stack: what if, instead of just coloring the paper, you could also fold it into an origami creature? So PlayGAMI came about, through a collaboration with a 3D artist and an origami guru for World Maker Faire NYC in September.
This then became a bigger-picture project:
Project sur.faced.io: what if you could draw anything on any medium (paper, Photoshop, Bamboo Slate, whatever) and instantly turn it into an AR filter? Come beta test!
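To make the drawing-to-filter idea concrete, here is a minimal sketch of the resampling step, assuming the four corners of the drawn page have already been located in the camera frame (e.g., via marker tracking); corner detection is elided, and lerping between corners is a rough stand-in for a full homography:

```csharp
using UnityEngine;

public static class PageTexturer
{
    // Resample the quad spanned by the detected corners into a square texture
    // that can be assigned to a model's material. Corners are in pixel
    // coordinates, ordered: 0 = bottom-left, 1 = bottom-right,
    // 2 = top-left, 3 = top-right. cameraFrame must be CPU-readable.
    public static Texture2D ExtractPage(Texture2D cameraFrame, Vector2[] corners, int size = 512)
    {
        var page = new Texture2D(size, size);
        for (int y = 0; y < size; y++)
        {
            for (int x = 0; x < size; x++)
            {
                float u = x / (float)(size - 1);
                float v = y / (float)(size - 1);
                Vector2 bottom = Vector2.Lerp(corners[0], corners[1], u);
                Vector2 top    = Vector2.Lerp(corners[2], corners[3], u);
                Vector2 src    = Vector2.Lerp(bottom, top, v);
                page.SetPixel(x, y, cameraFrame.GetPixelBilinear(
                    src.x / cameraFrame.width, src.y / cameraFrame.height));
            }
        }
        page.Apply();
        return page;
    }
}
```

Assigning the result to a renderer’s material.mainTexture is then all it takes to see the drawing live on the 3D model (or the origami creature).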
A modified version of this article originally appeared on the ACM SIGGRAPH Blog.