Shawn Frayne is an inventor, a visionary, and an entrepreneur who co-founded Looking Glass Factory – a company that has raised 20 million dollars to make his dream of the hologram a reality. On December 2nd, 2020, the company launched a new product called the Looking Glass Portrait – a 3D display intended for users of all levels, from developers and prosumers to lay audiences. In this conversation, Shawn talks about the mind-boggling technology of holograms and 3D displays, also touching on the real-life subjects of financing and product development.
You can find the video and audio versions on YouTube, Spotify, SoundCloud, Google Podcasts, Apple Podcasts, and Stitcher.
The interview was recorded in December 2020.
Satya Mallick (SM): He's an entrepreneur chasing the dream of the hologram. His company, Looking Glass Factory, has raised 20 million dollars to make this dream a reality. Today, on December 2nd, 2020, they are launching a new product called the Looking Glass Portrait – a 3D display you can put on your desk. Just imagine: a 3D display that sits on your desk and can be hooked up to a computer or work in standalone mode. Without further ado, let's welcome Shawn Frayne and learn about this amazing technology and the promise of 3D displays. Welcome, Shawn! It's a pleasure to have you on the show, and I'm so glad that an entrepreneur of your caliber has joined us. You run a completely different, very unique company in my opinion, so I'm delighted to have you here.
Shawn Frayne (SF): Thank you so much for having me, I’m excited to chat with you today!
SM: Great! Let's start with your background a little bit; it's very interesting because you started as an undergrad in physics and then went on to do these wonderful things. How does that work? You did your undergrad in physics at MIT, and now you are the head of an AR/VR company (that's the general term people use for this field). So, tell us a little bit about that journey: how did you go from being a physics student to leading the team of an AR/VR company?
SF: Sure! We hope it's going to be AR/VR/holograms soon – sort of the three legs of what comes after the flat media and flat interfaces we're using right now to have this conversation. I always wanted to be an inventor; that was my dream. In pre-pandemic days, when we could still travel, whenever I went through customs I would write "inventor" under the occupation line, because that's what I was most proud of. When I went to MIT, I was looking for inventions, and I got great guidance from professors like Amy Smith and other well-known inventors who had an appreciation for the magic of invention – when you get down to the simplest elements and end up with the greatest achievement, something that is a miracle yet looks completely simple from the outside. While you wouldn't call many folks who pursue physics inventors, physics provides the scientific foundation that inventions draw all their ingredients from. This is going back so long ago – I haven't talked about MIT in forever!
I was looking for the hologram – I had been obsessed with the holographic display, or the dream of it, since I was an eight or nine-year-old kid. I saw Marty McFly in Back to the Future II getting gobbled up by that holographic shark, and I begged my parents for a way to make my own holograms, because you couldn't go to the store and get a holographic display of any sort. So they bought me a book from Waldenbooks (rest in peace!) called the Holography Handbook. It sat on the shelf for a few years, but later, when I was in high school, my dad and I built a holographic studio in my bedroom in Tampa, Florida, where I grew up. It let me make laser photographs – which is essentially what a traditional hologram is, the interference pattern of coherent light captured on super-high-resolution holographic glass plates – but it wasn't the dynamic, living holographic display from the movies. That's part of what I went to MIT to try to find, but everyone there said that while there were some great experiments and great courses on holography, no one had unlocked that dream. There was a lot of speculation that it would be kind of like fusion – 20, 30, 40, 50 years away forever – and that was a little disheartening. I set that dream aside for a while, and it was only about six years ago that my co-founder Alex and I picked it back up with the company we run now – Looking Glass Factory.
SM: It's funny – with any major technology that people are excited about, we always feel it is 10 years away. It's been that way with autonomous driving and other technologies; even the whole promise of virtual and augmented reality has been around for many years, and we keep thinking it's just around the corner, that it will take 10 more years… that's the standard answer.
SF: I totally agree, but there is a moment where it flips over, and it's never entirely in the control of an individual, a small team, or even a big company. Something has to change in the world – some desire in people has to match that technological moment – and we think that's happening with the holographic display right now, the way it happened with personal computing in the late 70s and early 80s, with the transition from radio to television in the 40s and 50s, even with the transition from photography to film. These transitions don't happen often, but they do happen.
SM: So almost every decade you see a big revolution: in the 1980s it was the personal computer, around the year 2000 it was the Internet, then came the mobile phone revolution, and the AI revolution is underway right now. I think 2012 was that moment in history when AI became real – when the ImageNet Large Scale Visual Recognition Challenge was won by a huge margin of about 11 percentage points by Geoffrey Hinton and his team. That was the moment AI made this transition. And I think what you're saying is that the moment for holographic displays has come, and it's going to be pretty big in the next decade or so, because right now the solutions are not affordable.
SM: So, tell me a little about your journey from MIT student to entrepreneur: at what point did you decide you were not going to join a big company and would start your own? What was that moment like – did it happen in steps? What were the steps you went through to get there?
SF: I don't think I was ever going to join a big company. If anyone listening to this is a university student or heading to university – the main thing I got from university was the confidence that there isn't some secret smart person out there with some secret ingredient I don't know about. We're all just people struggling to figure it out. So, I always wanted to be an inventor, which is kind of a mishmash of a scientist, an engineer, and a salesperson all smooshed into one. That's something I got the confidence to do, and I would have done it no matter what education I ended up getting. As for the holographic display specifically, so many people have tried this over many decades – tried to make that magical window into another world, that controlled field of light a group of people can gather around without having to put on a VR/AR headset. It took a lot of confidence for me and the other folks on the team to believe that a small group of fanatics could do what all the big companies had failed to figure out, but that is what we think we've done.
SM: That's pretty much always true: big companies do not want to invest time and energy into something they are not sure about, and it's always up to people like us to jump in and create that market. Of course, they can come in and acquire these companies later on – that's much cheaper for them, because innovation is hard at big companies: employees have to follow all those rules that make it difficult to innovate fast.
Before we jump into Looking Glass Factory, your company, I want you to describe the product. Please bear in mind that some people may not be watching this on YouTube – they might be listening to the podcast – so could you describe the class of products in general, and the specific product you're launching on December 2nd?
SF: We were joking in the team the other day that it would be great to take out podcast ads to describe holographic displays; it's quite a challenge to describe something as visual as that in purely verbal terms, but I'll try! True to the name of our company, Looking Glass Factory – we're based in Brooklyn, New York, and I'm having this conversation with you from our hardware lab in Hong Kong, so we operate in both cities – we have made a system that allows millions of points of light to be controlled in intensity, color, and directionality, so we can create what a nerd like me would call a synthetic light field. Ultimately, that means a group of people (one, two, three, four, it doesn't matter) can gather around this new interface, their eyes get exposed to those rays of light, and they can see super-stereoscopic three-dimensional information that changes depending on where they look at it from. That's how the real world works. The screen we're having this conversation on, or the one in everyone's pocket, shines with pixels that have two properties – intensity and color – but the real world doesn't work that way. The real world has all the specular detail, all the three-dimensionality, because the rays of light bouncing off people, objects, and spaces carry three properties: intensity, color, and directionality. It's kind of profound: if you could just add directionality – control directionality on a bunch of pixels – you could recreate what it feels like to look at something in the real world in all its glory, without having to put on a VR/AR headset. In the most fundamental terms, that is what we've done with our class of holographic light field display. Thousands of developers have our first-generation system, and what we're really excited about is announcing today, on December 2nd, the second-generation system – it's called the Looking Glass Portrait, and it's designed to be folks' first personal holographic display, to get much wider adoption than has ever been possible with holographic displays before.
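To make that pixel-versus-ray distinction concrete, here is a purely illustrative data model in Python – the type names are made up for this sketch and are not anything from Looking Glass's software:

```python
from dataclasses import dataclass

# A conventional screen pixel: intensity and color only.
@dataclass
class Pixel:
    r: float
    g: float
    b: float

# A light field sample adds the third property Shawn describes:
# the direction the light is emitted in.
@dataclass
class LightFieldSample(Pixel):
    theta_deg: float  # horizontal emission angle
    phi_deg: float    # vertical emission angle
```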
SM: And just for the audience to get the visual: it is basically like a glass cube or cuboid, and that's the display. You look at, say, an object inside it, and as you move your head it looks like you're looking at a 3D object instead of a display. The perspective changes – you see different sides of the object as you move your head around. Is that the correct characterization?
SF: For the first-generation system, yes, that's absolutely correct. The Looking Glass Portrait doesn't have a physical volume that contains the three-dimensional hologram – it extends beyond the physical bounds of the device itself, you can touch and interact with it, and the light field exists both in front of and behind the device. In broad brush strokes, that is exactly how the first-generation system worked. For the Portrait, imagine looking at a mirror, but the mirror isn't reflecting your real world – all the rays of light of you and your environment – and can instead be updated into something else three-dimensional. As you look through it, it's like looking at someone across the world, or at a memory you've recorded, all in three dimensions. And some of that information can actually come out of the mirror: if someone reached through, you could touch their hand sticking a few inches out of it. That is what the Looking Glass Portrait is.
SM: So, it's more like a window, right? You move your head, you're looking into a window, and you see things move in the right way. Will it also play if the sequence is long enough? Would it almost be like a 3D video you see without glasses?
SF: Our displays are real-time, and the entire software stack that powers them is a real-time software stack. You can of course display static information: you could take a portrait photo with one of the newer phones that capture depth information, or use a stereo camera or other capture technologies like that, and our software can convert that into a hologram for the Looking Glass Portrait – basically extracting many dozens of perspectives and then displaying them simultaneously. It can also handle real-time information: you could build an app in Unity and interact with it in real time, because the light field – all of those perspectives of the three-dimensional scene – can be updated every 60th of a second. At its core it's a real-time system, but of course a real-time system can also show static content.
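To make "extracting many dozens of perspectives" concrete, here is a minimal sketch of the general idea in Python. This is not Looking Glass's actual pipeline – the function name, the parallax model, and the parameter values are illustrative assumptions, and a real converter would also infill the occluded regions this version leaves black:

```python
import numpy as np

def synthesize_views(rgb, depth, n_views=45, max_shift_px=12):
    """rgb: (H, W, 3) array; depth: (H, W) floats in [0, 1], 1 = near.
    Returns n_views images, each a slightly shifted perspective."""
    h, w, _ = rgb.shape
    views = []
    for i in range(n_views):
        t = i / (n_views - 1) - 0.5                 # camera offset, -0.5..0.5
        # Horizontal parallax proportional to depth; the zero-parallax
        # plane sits at mid-depth (depth == 0.5).
        shift = (2 * t * max_shift_px * (depth - 0.5)).astype(int)
        cols = np.clip(np.arange(w)[None, :] + shift, 0, w - 1)
        rows = np.arange(h)[:, None]
        view = np.zeros_like(rgb)
        view[rows, cols] = rgb                      # forward-warp each pixel
        views.append(view)
    return views
```

Running something like this on a portrait photo's color image and its hidden depth map yields the horizontal sweep of views that the display then presents simultaneously.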
SM: My other question was this: suppose the system is receiving real-time depth data – a picture with depth – transmitted from some other place, maybe even from a conference call like the one we are having, but using depth cameras instead of a regular camera. Would it be possible to actually interact through a hologram-like display right there?
SF: Absolutely! That's the dream that I and the other folks who founded the company had: even if you're in a physically different space from someone else – across the world, like we are from each other – I could feel as if you were sitting right across the desk from me, using this combination of capture and holographic light field display. There are a couple of main ways to do capture. First, one or more depth cameras capture the scene, which is converted into a mesh and then reconverted into the original light field. Second, there's direct light field capture to light field display, which means – in jargon terms – that you can interpolate different views from a smaller number of cameras and resurrect the original light field in all the directions and perspectives of whatever was being captured and transmitted. Those are the two approaches. Real-time communication through something like a holographic light field display is of course on all of our minds these days, as we're doing so many Zoom calls, and the Looking Glass Portrait is the closest thing to that dream. You can't plug the Looking Glass Portrait into Zoom right now, but we've unlocked enough tools that a lot of folks in the developer community will be able to build those applications when they get their Looking Glass Portrait, and hopefully start to share them; we'll promote and double down on the ones we think can benefit more and more people beyond those original developers. Right out of the box, you can also record holographic video messages, which is close to that full dream of real-time holographic communication. With a variety of depth cameras and whatnot, folks can do that even without any programming knowledge – with a few clicks they can record a holographic message and start to explore what it's going to be like, very soon, to do things like holographic Zoom.
SM: You just mentioned that some cameras actually capture the light field directly instead of capturing depth, and one such camera – I don't know whether the company is still around – was the Lytro camera. I think they launched in 2010 or 2012, I can't remember exactly, but they captured the light field directly. That's one company people can look at if they want to know what light field photography looks like. I'm not 100 percent sure whether they are still around.
SF: They got gobbled up by Google. They did a lot of interesting light field work – like Welcome to Light Fields. Lytro is not around as a company anymore, but a lot of the ideas they helped pioneer on the capture side have made their way into a lot of different areas.
SM: I believe at MIT there are a lot of big names in that area – if I'm not wrong, Professor Ramesh Raskar from MIT was also involved in camera arrays and light field photography.
SF: Yes, tons of camera array stuff has come out of MIT and a lot of other places around the world. There was a huge spike at SIGGRAPH last year in both homemade and super-pro light field capture rigs, but no one had an output device for them other than, at the time, the very first versions of our developer kit – our first-generation system. So we're meeting folks where they are: they have depth cameras in the phones in their pockets, there are kits like the Azure Kinect for more controlled, advanced three-dimensional capture, and there are light field arrays that folks are building in their garages but also in laboratories. All of that can be easily converted into holographic light field output for our devices.
SM: How is the technology different (or maybe it is not) from, say, Microsoft HoloLens, or the stuff that's going on at Magic Leap? What's the landscape of these technologies?
SF: Ours don't require a headset – that's the biggest distinction. A group of folks can gather around their Looking Glass Portrait as if gathered around a campfire or a radio; you don't have to gear up, you're not being tracked, none of that stuff. You're just watching the three-dimensional holograms there on your desk. The headsets Microsoft makes are innately single-viewer stereoscopic systems – some have advances like a couple of focal planes for the eyes to focus on, and all that stuff is super esoteric for folks who aren't in the field. At the end of the day, they're headset-based approaches: either VR systems that close you off from the world or – as you mentioned with HoloLens and Magic Leap – AR overlays of three-dimensional information on the real world. There's a lot of intelligence there, and the HoloLens 2 is probably the most advanced of these in my opinion – it gathers information about the real world and does intelligent overlay of three-dimensional info onto that real-world scene – but these are still systems you have to gear up for. So we see the future of next-generation interfaces – which are starting now and will really define how we communicate and create over the next 20 or 30 years – as a combination of VR and AR headset systems plus these non-headset holographic light field displays. Those are the three legs of the stool that will answer what comes after flat media, and ultimately let us connect with one another, and with three-dimensional information, in a way that is much more real – like interacting with real people or with an object in the real world.
SM: A lot of these head-mounted displays can't be worn or worked with for long periods. Is it the same with your product? I mean, do the eyes get fatigued, or not so much?
SF: We've been very careful in the design of our systems not to cause any eye fatigue. We have thousands of first-generation units out there, and if you look at how folks in the community have commented on using them, there's just no mention of it at all. A core principle in our development is that a holographic light field display has to be as comfortable to use as a two-dimensional display. Of course, after ten Zoom calls I don't really want to look at my 2D display anymore – it's the same feeling with our holographic light field displays, no worse. We think that's necessary for something that becomes a core part of daily life. Hopefully, in ten years we won't even be having a conversation about the holographic display being a new technology; it'll just be obvious that if you're doing something in 3D – communicating, creating something, even designing a 3D part – you'll be expected to use a holographic display like a Looking Glass. Just as I'm not having this conversation with you on a Color Computer – that's how it's going to be for holographic light field displays in five or ten years, we think.
SM: That's very interesting! Now, can you explain a little bit, in almost layman's language, how the technology behind it works? Just the technical gist of it. You have said that it displays a light field, and a light field is essentially the intensity, the color, and the direction associated with the light from an object – the pixel is not really a pixel, it has a few more things associated with it.
SF: If you think about looking at an image on your phone or laptop screen, it presents a single perspective of the world to you, and that's why it's flat. Go one level up – a VR/AR headset presents two views, two perspectives of the world simultaneously, so one person can see something three-dimensional. What we do is present dozens – actually between 45 and 100 – of different perspectives of a three-dimensional scene simultaneously. Each perspective is projected out into space every degree or so, fanning out all the different views of a dynamic, static, or interactive three-dimensional scene, and as you look around, your eyes get washed in different stereoscopic views of that scene. That third approach is how the real world works too, and that's why, when you look at something through a holographic light field display like a Looking Glass, it feels like you're looking at a real thing: it represents synthetic or captured content the same way things are represented in the real world.
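As a rough illustration of that fan of views, the sketch below computes the angles and camera positions a renderer might use – roughly one view per degree across the view cone. The 45° cone, the arc-of-cameras layout, and the function names are assumptions for illustration, not the device's actual calibration:

```python
import math

def view_angles(n_views=45, cone_deg=45.0):
    """Yaw angle in degrees for each virtual camera, fanned across the cone
    (with these defaults, about one view per degree)."""
    step = cone_deg / (n_views - 1)
    return [-cone_deg / 2 + i * step for i in range(n_views)]

def camera_positions(angles_deg, radius=1.0):
    """Place each camera on an arc, all aimed at the scene center,
    which acts as the focal plane of the display."""
    return [(radius * math.sin(math.radians(a)), 0.0,
             radius * math.cos(math.radians(a)))
            for a in angles_deg]
```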
SM: It's a hologram, right? So it's the material property of the display that makes sure you are presented with those views correctly: there is no tracking of the eyes or the face going on, is that correct?
SF: Yes, we make some custom optics – there's a lens array and some other optical elements for filtering and whatnot – and that lets us control where certain rays of light go out into the world. We can then turn rays of light on and off in software, which ends up letting us create this dynamically controlled synthetic light field produced by the display. Ultimately, the physical device itself is custom optics and electronics that allow control over where certain rays of light go out into the real world.
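For the curious, here is a toy model of the software half of that. Under a slanted lens array, each display subpixel is only visible from a particular direction, so the renderer interleaves the N views subpixel by subpixel. The lens parameters below (pitch, slope, center) are made-up placeholders – real units ship with per-device calibration – so treat this as a sketch of the general technique, not Looking Glass's actual shader:

```python
import numpy as np

def interleave(views, pitch=50.0, slope=0.2, center=0.0):
    """views: list of N (H, W, 3) images. Returns one (H, W, 3) image in
    which each subpixel samples the view its lenticule steers toward."""
    n = len(views)
    h, w, _ = views[0].shape
    out = np.empty((h, w, 3), dtype=views[0].dtype)
    for c in range(3):                                    # R, G, B subpixels
        x = (np.arange(w)[None, :] * 3 + c) / (3.0 * w)   # subpixel x in [0,1)
        y = np.arange(h)[:, None] / float(h)
        phase = (x + y * slope) * pitch + center          # position under lens
        view_idx = (np.floor(phase * n) % n).astype(int)  # which view shows here
        for i in range(n):
            mask = view_idx == i
            out[..., c][mask] = views[i][..., c][mask]
    return out
```

Because this mapping lives in software, updating which rays are "on" every frame is what makes the light field dynamic.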
SM: Amazing! As you know, a lot of listeners here are also developers. Is the solution you have come up with for developers? How can developers use what you are launching on December 2nd for their own applications, and what kinds of developers is this system intended for?
SF: It is intended both for folks who create content – videographers, photographers, artists, etc. – and for developers. The system has been designed to be very simple for dipping your toes into creating your own holograms for this holographic display: a lot of folks will start by taking a portrait photo, which has depth information, and that can be transformed into a hologram for the Looking Glass Portrait. But then we have a number of plugins so developers can do whatever they can imagine with the system: we have great Unity and Unreal plugins that are available for free to developers on those platforms, and for folks who want to go one level deeper – if someone's designing something special in OpenGL or whatever – there's our HoloPlay Core SDK.
SM: It's great that it's not only for developers – people with no programming experience can also enjoy this product; it is for consumers. If somebody buys this product, where do they get content for it? What kind of users would use it?
SF: We think of it as for prosumers – professional consumers, or producer-consumers: folks who are making some content of their own, especially if they touch 3D land in some way. There are a lot of 3D designers using things like Blender, Maya, or ZBrush; folks who know what depth cameras are, like the Azure Kinect; folks who bought one of the new iPhone 12 Pros because it has lidar and were curious about doing scanning with it – all of these folks touch 3D land in some way. We estimate there are tens of millions of people like this right now, and they can immediately start to create their own holograms for the Looking Glass Portrait, so this is primarily a user-generated-content system. An easy example, as I mentioned: if you take a single portrait-mode photograph with many of the newer phones, it captures depth – there's a depth map hiding behind the color photograph – and our software takes that depth map and extracts a number of different perspectives. We can apply advanced techniques, e.g. infilling some of the information that's missing from that single point-of-view capture, so the result is a really high-quality holographic representation of whatever you photographed in portrait mode. It's still one step away from the holographic display being in everybody's home, so this is definitely for folks on the leading edge, both on the prosumer and the developer side, but we've made it easy enough that you absolutely don't need to be a programmer to use this system. And if you are, you can push the system to its limits.
SM: The iPhone 12 is probably very big news for you – you were probably pleasantly surprised by it. So, somebody takes a picture with an iPhone 12 – is there a format in which that picture can be extracted so that it can be directly viewed on your new generation-two device, with all the depth and everything, so it looks like a hologram? Is that possible now?
SF: That's directly supported in the software suite we're releasing along with the Looking Glass Portrait, and the format is the photograph itself. That photograph has between two and six – or in some cases maybe up to eight – images hidden behind the primary image folks see when they take a portrait-mode photo with one of these newer phones. We use those hidden images, some of which contain depth, to generate a hologram, so there's no special export format or anything folks need: they just take the photo, process it with our software onto their Looking Glass Portrait, and it is transformed into a hologram. This works with iPhones going all the way back to the iPhone 7 Plus – the newer phones of course have better and better depth – and there are some Android phones that support it too. For things like lidar scans, if folks are capturing a point cloud in a format like PLY, we have direct importers for those point cloud captures as well.
SM: Interesting! But the images that come out of the iPhones could be JPEG images. Are you saying that the metadata – maybe in Exif – also stores the depth information, which enables you to use those pictures directly on your device?
SF: If anyone is hacking around with this stuff and curious to see for themselves: you can send yourself a portrait photo and then use ExifTool or something like that, and you'll find these magical images hidden behind the original photograph.
SM: With ExifTool, for example, how are these images encoded? How do I extract them out of the original JPEG file?
SF: I know a lot of folks are curious about this, because it's almost unbelievable that this information is there, so we're going to post a bunch of tutorials for folks to explore it. We'll share the links at some point, but folks can also find them on our website, lookingglassfactory.com – we'll post simple tutorials and tools that let folks do this easily.
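Until those tutorials are up, here is one way a curious reader might poke at a portrait photo with ExifTool driven from Python. It assumes ExifTool is installed and that the photo is a JPEG using the Multi-Picture Format (HEIC captures store their extra images differently); tag availability varies by phone and export path, so treat this as a starting point for exploration, not a guaranteed recipe:

```python
import subprocess

def list_embedded(path):
    """List every tag, including duplicates, grouped by where it lives
    (-a = allow duplicate tags, -G1 = show group names, -s = short names)."""
    result = subprocess.run(["exiftool", "-a", "-G1", "-s", path],
                            capture_output=True, text=True)
    print(result.stdout)

def extract_second_image(path, out="embedded.jpg"):
    """Dump the second Multi-Picture Format image, if the file has one."""
    data = subprocess.run(["exiftool", "-b", "-MPImage2", path],
                          capture_output=True).stdout
    if data:
        with open(out, "wb") as f:
            f.write(data)
        print(f"wrote {len(data)} bytes to {out}")
    else:
        print("no MPImage2 found in this file")
```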
SM: I remember when I was in grad school, people used TIFF a lot, and TIFF has this thumbnail feature. We wanted to obscure the faces of people for privacy reasons, and that was done on the original image, but the embedded thumbnail still had an untouched low-resolution version of the person. So you could sort of reconstruct the original picture from the thumbnail embedded inside the TIFF. This is scary – all the different kinds of things embedded in some of these pictures that we don't know about!
SF: Right! A lot of the new photos taken with the iPhone 12, for instance, use a format that is kind of similar to a video: all of those different frames are stored almost like video frames, which is how they get higher compression levels, and that allows a reasonably small file to contain this additional information. It's not only depth information – there's a depth map, but sometimes, if the phone detected a person's face, it'll also have the outline of where their mouth is, their eyes, and all sorts of crazy stuff like that.
SM: That's news to me! I have not looked at these things in a while, so I'm actually surprised – that's fascinating.
SM: So now let's turn to developers. Suppose I'm a developer – say I know Python, or maybe even C++. What is the minimum requirement for a developer to be able to do something with the device? Can they be just a Python programmer and get started with this thing? Can they create something with it, or do they need some other language to develop for this device?
SF: If someone's a Unity developer doing something in C#, or JavaScript, or what have you, they'll be able to get started in a few minutes. And if somebody wants to work at a different level, they can likely find the tools they need through the HoloPlay Core SDK that we've made available; there's a post on our site that describes all the advantages someone can get there. Someone in our community posted a cool hack the other day where he got an iPad or an iPhone directly controlling his first-generation Looking Glass, using a combination of several different tools and programming languages to unlock that. So in a lot of ways, folks can view the Looking Glass Portrait as an output device when they're programming – when they're creating a new application or tying into a certain 3D rendering context – and we've tried to make as many tools as possible available to the developer community so they can do almost anything they desire with the system. There may be some things we've missed, but so far we've tried to provide support, even niche support, wherever we can.
SM: Great! And you already have a developer community for your gen 1 device, so I’m guessing that you'll provide the same level of support for generation 2 as well.
SF: Well, even more support. Getting more units out there allows us – a small company of a few dozen people – to give even more support to the folks building those next-generation applications on top of this platform. Of course, there's stuff folks can do right out of the box – all the things we've been talking about with photographs, holographic messages, video messages, and what have you – but there are hundreds or thousands of additional holographic applications we imagine folks could build on top of this platform. So more units out there let us give more support, not less.
SM: How does it work? Do you plug it into your computer? Are there demands on the operating system – which operating systems does it work with?
SF: That's a great question! This is the first system – not just of ours, but the first holographic display of any sort – that operates in two modes. There's the standard desktop mode: you plug the Looking Glass Portrait into your PC or Mac over HDMI and USB-C and can treat it almost like a holographic second monitor; you can develop on it that way, make a build of an application, and send it to someone else to run on their own machine. The second mode is standalone, because there's a built-in computer – we're actually using a Raspberry Pi 4 inside each of these systems – and through some clever things the team has done, we're able to run holographic media at 60 frames per second fully standalone, just by plugging it into wall power. That's a big advantage: you can store holographic photos, videos, clips of characters, etc. – all the different types of holographic media you create as a user, or that someone sends you – load them onto your system, and run them without any resource drain on your Mac. And when you want to do development, or run something more computationally intensive, there's the immediate way to plug it into your computer over HDMI and USB-C.
SM: So, can it work completely detached from your computer?
SF: Absolutely!
SM: That's great, because it opens up this whole space – grandparents looking at pictures of their grandkids! They don't need to know anything. In fact, people listening to this who have parents and kids could buy this, load up all these images, even set up a slideshow, and just give it to their parents – a very nice 3D slideshow of their kids! It's amazing.
SF: It's one step away from the grandparents – and some grandparents are super hackers, so I don't want to make generalizations – but my dad, for instance, probably wouldn't buy one of these himself. I could get it for him, though, or my sister could, and we could load it up with holographic media and send it to him; he'd plug it in and be amazed to see a hologram of his family and grandkids. I'm going to be buying systems myself to do that for my family, and I bet a lot of folks who are exploring 3D in some way will do that for their family and friends too – at least that's what we hope. We want folks to get exposed to the real use cases that holographic displays can already handle.
SM: Yes! Regarding your comment about hackers who are grandparents: we have done a few competitions, through OpenCV and elsewhere, and I'm always surprised to find someone over 60 in the top three. The first time it happened, I interviewed that person expecting a cool kid in his 20s or something, and I was shocked that he was over 60! There are multiple people of that age, and they do it just for the love of it. It's so pleasing and inspiring to see people of that age who want to build stuff. I've stopped being surprised, but I'm still inspired.
SF: There are Looking Glass clubs and things like that, and we see folks of all ages and backgrounds hacking on the hologram together and sharing all sorts of knowledge. One thing we're really excited about with this launch is that there are a lot of folks in different silos of 3D land – stereo photographers, depth videographers, Unity developers, yada yada – but they can all come together and share information and ideas under the great big tent of the hologram, and that's what we're aiming for in a lot of ways with this new product line.
SM: For the Kickstarter campaign: what will be the retail price of the product, and what will be the Kickstarter price?
SF: The retail price is already radical. I know I sound like a sales guy right now, but we're really excited about this: every holographic display, including our own up until now, has cost many thousands or tens of thousands of dollars. This is the first system that will retail at 349 dollars, and for the first 48 hours of the Kickstarter, on December 2nd and 3rd, folks will be able to get it for 199 dollars. We hope it will be a lot of folks' first foray into holographic displays: for everything it can do, it's the most advanced system out there, but priced at a level where hopefully some folks can get one and start to explore it without needing their boss's approval, or their family's approval.
SM: I'm actually planning to get one for myself as well. It's one of those cool toys you cannot fully appreciate without having it in your hands. You were a little apologetic before about sounding like a sales pitch, and I'd like to say that an entrepreneur who cannot sell is a hobbyist – there should be no apology. When entrepreneurs build something nice and believe it is good for the people who buy it, they should sell without apology. That's what I tell people as well.
SF: Great! As I mentioned at the beginning of the conversation, my dream has always been to be an inventor, and in a lot of ways I think an inventor is a combination of a scientist, an engineer, and a salesperson. Hopefully other folks out there who are listening and thinking about doing this will take your advice on that as well, because I think a lot of times there is this shyness about selling something that you're proud of.
SM: I have a personal story about this: one of my friends pushed me to do courses. I told him I was doing the blog because I enjoyed it and was not going to ask people for money for a course, and he said that in that case I would never create one – you cannot sustain a course unless you charge money for it. And it was so uncomfortable for me! The very first thing we created, two or three years back, was called Computer Vision for Faces, and it was so uncomfortable – even more so when we did a pre-sale. The course was not ready; I clearly told people the course was not ready but that we would be able to create it if they paid us, and we ended up raising about 63,000 dollars. It was not even a Kickstarter campaign – it was on our own website. Then we got 37,000 dollars more before the course launched, so we got a hundred thousand dollars.
SF: That's awesome!
SM: That was the first thing. Then I realized that people are ready to back you if they believe you're going to produce a good product for them – people want to pay you. A lot of people bought the course and just set it aside, saying they bought it because they had benefited so much from the blog. I was blown away by that! I actually met one of my early backers in San Diego; he was visiting from Italy, and I thought he was looking for some free consulting advice or something, but he said he had bought my course and would like to meet me. So we met and talked for about two hours over coffee, and never once did he mention any work-related stuff – we talked about everything other than work. He said he just wanted to meet me because he had been following the blog. I was really grateful for that experience. Nowadays it has become very difficult – the schedule is tight – but at that time I could meet people.
SM: We are hitting the one-hour mark already, so I wanted to ask: what was your inspiration behind creating this company, and how did you go about funding it? Did you bootstrap it in the beginning? Did you raise VC money? I have been in both boats – my first company was VC-backed, but the companies I have created since have not raised any VC money. That doesn't mean I never will, but I have a very concrete philosophy about it. So, I want to hear how you started and when you decided to raise capital. Did you raise any other kinds of capital that were not venture-based?
SF: I've done this both ways. Looking Glass Factory, our company, is VC-backed. We do make money on what we sell, so I like to think the VC money augments what we earn by delivering products that we think a lot of folks love. But I've done it the other way as well: when I was an independent inventor, or working with a small team, we would license the technology – I've licensed intellectual property before and helped the big companies we licensed it to bring it to market – and in other cases we would just produce a new product entirely off savings and see how it stuck. Looking Glass Factory, which is now a little over six years old, started through bootstrapping, because at the time nobody believed the holographic display was possible.
SM: Six years back it was a different landscape.
SF: At the time, everything was AR/VR. Oculus had come out and was about to be sold to Facebook, and there was chatter about this company called Magic Leap – stuff like that was happening. There was a big brain drain away from the pursuit of the headset-free holographic display, which had been the main sci-fi dream of what would come after two-dimensional media and displays; everyone was going to headsets, basically, so nobody would really listen to a crazy guy like me talking about holographic displays. We needed to prove it in a few steps. First, we made a static version of our display – we called them volumetric prints – bringing a new technology to 3D printing; we thought it would be cool to have a way to print unprintable things. We sold thousands of those volumetric prints at a decent margin each. That fueled the next step: an interactive but very basic 3D LED cube, the lowest-resolution volumetric or holographic display you can imagine. We sold around a million dollars' worth of those, made in small batches of 100 or 200 units each. Then we secured some angel investment, and later our first institutional VC investment, off basically proving that we were scrappy enough to sell products with margins built in and use that to fuel additional R&D. We also had prototypes of what the next-generation system could look like; I remember pitching to somebody who ended up investing in our company, and I applaud those investors who had the vision for what was to come based on the pretty bad-looking prototypes we had.
My partner and co-founder Alex made a recording app for the first-generation Kinect camera, and I recorded my kids running around the house and then showed that in a tiny prototype system that a guy in our lab, Alvin, had figured out the key to making right before we needed to do the pitch. That prototype demo – with the idea that one day you could have your kids' holographic memories inside – combined with the fact that we were already selling some product, even though it was only a fraction of the dream we're aiming for, got folks over the believability hump: that the holographic display could come to pass, and that our little company might be able to beat out the giants to do it.
SM: That's great! You took money to scale, not to build the product: you had working prototypes when you approached the VCs, you had done a lot of the groundwork already, and you had sold some products.
SF: It fueled a good amount of R&D. We always put all of our profits into additional R&D, so the VC money lets us accelerate. We've raised around 20 million dollars in VC capital at this point, and that has let us push the limits of what's possible with a holographic display. A lot of folks told me when we started on this adventure six years ago that it would take hundreds of millions, maybe a billion dollars, to pull this off. But that's not usually how these things go: the Amazons, Microsofts, and Xeroxes of the world were a lot scrappier than folks remember, and they raised just enough to get the product to market. That certainly keeps us hungry for landing the product – we're not cruising on the fat of the VC money by any means.
SM: Right! People may not realize what you have accomplished with 20 million in VC funding. Look at companies like Magic Leap: their initial vision was not a head-mounted display; they started with the vision of a holographic kind of display that people could experience in the real world without any headgear. They raised more than a billion dollars in VC capital, from Google and other places, and in the end they settled for a head-mounted display – I think they really did settle – rather than going for the kind of display you guys came up with. So kudos to you! It's quite a technological accomplishment to bring this kind of product to market, and as for the vision – six years back, things were very different. Kudos to you for this journey and for bringing this product to life!
SF: Thank you very much! We feel a great responsibility to deliver fully on this dream in a way that benefits people. We're focused on doing that, and we're excited that the community is hopefully going to grow a lot with this new launch of the Looking Glass Portrait.
SM: Great. We will share all the links to the blog posts we talked about, and to the Kickstarter campaign, in the show notes. If you're watching this on YouTube, they'll be in the description; if you're listening on one of the podcast platforms, look at the show notes – all the information will be there. Any other messages for our users, or places where they can reach you?
SF: Yes, they can go to look.glass and find documentation there; all the information about this new product, the Looking Glass Portrait, is on the site, which will redirect you to the Kickstarter. We're excited to hear what folks want to do with these systems and what resonates with them. There's already a lot you can do out of the box, but I think there's going to be even more that the community develops on top of this. Folks can reach me directly at smf@lookingglassfactory.com – I'm happy to chat with people interested in getting into the hologram game.
SM: All the best for your Kickstarter campaign – we will be watching when this becomes a billion-dollar company! I have great hopes for your company; it can do wonders.
SF: Thanks a lot, I appreciate you having me on.