Hi all,
With the emergence of GenAI, the world is changing for both enterprises and consumers. The enterprise use cases are much more apparent, as LLMs enhance almost every industry, leading to cost savings or even new revenue models. The consumer side is less clear. What we know by now is that LLMs can help consumers tremendously with tasks like information gathering and education. Still, I think what lies on the horizon and will become a big industry is AI agents: AI agents acting as personal assistants not only in our professional lives but also in our private ones.
The path of consumer AI is agents
We have already seen real steps towards AI agents that can help us do all kinds of tasks. Just a few days ago, Anthropic launched an upgrade to its Claude 3.5 Sonnet model, which includes a new beta capability called »computer use«. This capability allows developers to direct Claude to use computers the way people do: by looking at a screen, typing text, and moving a cursor. Coming on top of OpenAI's o1 multi-step reasoning model, this is another step toward AI agents that can complete more complex tasks from a single, more generic instruction or prompt.
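For the technically curious, here is a minimal sketch of what calling the computer-use beta looked like at launch, based on Anthropic's published documentation at the time; the model name, tool type, and beta flag reflect the October 2024 beta and may have changed since:

```python
# Minimal sketch of Anthropic's "computer use" beta (October 2024 names;
# they may have changed since). Requires: pip install anthropic
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    tools=[{
        "type": "computer_20241022",   # the virtual screen/keyboard/mouse tool
        "name": "computer",
        "display_width_px": 1024,
        "display_height_px": 768,
    }],
    messages=[{"role": "user", "content": "Open the browser and search for flights to Lisbon."}],
    betas=["computer-use-2024-10-22"],  # opt-in flag for the beta
)

# The model replies with tool-use requests (take a screenshot, click, type)
# that your own agent loop executes and feeds back as tool results; that
# loop is what makes this agentic rather than a one-shot completion.
print(response.content)
```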
AI agents have big potential in professional use cases, as they can enhance and speed up some jobs or even replace others. The bigger picture is having AI agents with varying levels of knowledge (approximated, for example, by parameter count) managing and handing tasks to other AI agents with smaller knowledge bases that are cheaper to run. So, in a way, you have the most knowledgeable AI agent, running on a leading-edge LLM, giving tasks to and orchestrating smaller or older LLMs (a minimal sketch of this pattern follows below).
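To make this concrete, here is a minimal, illustrative sketch of that orchestration pattern; `call_llm` and the model names (»frontier-planner«, »small-worker«) are hypothetical placeholders, not a real API:

```python
# Illustrative sketch of hierarchical agent orchestration: a large "planner"
# model breaks a goal into subtasks and hands them to cheaper "worker" models.
# call_llm() and the model names are hypothetical placeholders for whatever
# LLM provider you actually use.

def call_llm(model: str, prompt: str) -> str:
    """Placeholder: send `prompt` to `model` and return its text response."""
    raise NotImplementedError("wire this up to your LLM provider")

def run_task(goal: str) -> str:
    # 1. The expensive, most knowledgeable model plans the work.
    plan = call_llm(
        model="frontier-planner",  # hypothetical leading-edge LLM
        prompt=f"Break this goal into short, independent subtasks, one per line:\n{goal}",
    )
    subtasks = [line.strip() for line in plan.splitlines() if line.strip()]

    # 2. Each subtask goes to a smaller, cheaper model.
    results = [
        call_llm(model="small-worker", prompt=f"Complete this subtask:\n{task}")
        for task in subtasks
    ]

    # 3. The planner reviews and merges the workers' output.
    return call_llm(
        model="frontier-planner",
        prompt="Combine these results into one answer:\n" + "\n".join(results),
    )
```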
But the AI agentic use cases do not end in the professional world. Given how busy and packed people's schedules are, AI agents can serve as personal assistants in our private lives. Even if we leave aside the sci-fi scenario of a humanoid robot doing your dishes and taking out your trash, basic functions like organizing your calendar, booking holidays, ordering groceries, giving real-time cooking tips, and automatically taking notes and setting reminders for chores are tasks most of us would love to hand to a personal assistant. And based on the pace of LLM development we are seeing today, most of this will become a reality in less than a year. This will open the floodgates for consumer LLM adoption, as most people will be able to have a personal assistant helping them out for the price of a Netflix subscription. With the »killer use case« in AI assistants established, the question becomes: what device will best deliver this experience?
Glasses vs. a smartphone
Apple and other smartphone makers are trying to integrate AI personal assistants, such as an AI-powered Siri, into their phones. This is an interesting excerpt from an interview with a former Apple employee who worked on Apple's AI/ML:
»I think the way that you look at it is the same way that Siri, in the past, what they had envisioned, is that it's actually a virtual assistant rather than a search engine. An example that comes to mind, let's say, I'm looking for a car that fits a bike inside of it. I can actually just say, "I'm looking for a car with a trunk that can fit this bike in it," and it's able to draw from either the Internet or what it was trained on to say, "Okay, this bike is the size. I'm looking for a car with this size trunk," and infer a lot of that knowledge from it. I think that's the killer app moving forward and the goal of Apple along was it wants you to communicate with Siri the way you would with a friend.«
source: library of interviews with former employees - AlphaSense
But the real question is: if you have a device that stays in your pocket most of the time, is it really the best device for the AI agent experience? My answer: not quite.
I think the idea many tech CEOs arrived at recently, especially after the Meta Connect event last month, is that smart glasses could attract a lot of attention and usage over the coming months and years precisely because of these AI agentic use cases.
There are a few reasons for this thinking. The phone's most significant benefit is that it can access all of your digital information: emails, calendars, photos, etc. But the best assistant would also have access to the things you do in your non-digital life. Glasses, on the other hand, can see what you see and hear what you hear. While this sounds like the Terminator nightmare come to life for some, an assistant that sees and hears what you do throughout the day and also has access to your digital life (emails, calendars, etc.) would be a big unlock for our busy days and schedules.
There is also the benefit of glasses being more immersive and usable while your hands remain free. The latest generation of Ray-Ban Meta smart glasses also proved that smart glasses can look like regular glasses and not be seen as weird. Like many of you, I have bought a pair of Ray-Ban Metas, and despite using them often, not once has somebody remarked that there is something weird about the glasses I am wearing. Most people just see them as cool, stylish-looking Ray-Bans.
It is no surprise that Ray-Ban Metas are the top-selling product not only in the U.S. but also in the EMEA region. Stefano Grassi, the CFO of EssilorLuxottica (the owner of Ray-Ban), said the following on their last earnings call:
»Ray-Ban Meta, very happy about the performance that we've seen. I mean, we, it's an overall success story that we see.
Just to give you an idea, it's not just a success in the US, where it's obvious. But it's also success here in Europe. Just to give you an idea, in 60% of the Ray-Ban stores in Europe, in EMEA, Ray-Ban Meta is the best-seller in those stores. So it's something that is extremely pleasing«
Diving even deeper into the numbers, here are probably the most accurate estimates you can find of how many Ray-Ban Meta smart glasses Meta has sold so far:
Given these numbers, it is safe to assume that Meta will sell close to, if not more than, 1 million units this year. That is a great number, given that the product is only in its second generation and that the first generation achieved a low sell-through.
The current generation of Ray-Ban Metas is useful for listening to music and podcasts, taking pictures, recording videos, taking calls, and more. So Meta's AI via the glasses is not the only reason someone would use them, but it does come in handy often. Still, it is important to understand that the glasses were designed before the ChatGPT moment and therefore without LLMs in mind; Meta added the AI functionality later on. So the real question is: how good will the next generation be, now that Meta has seen that LLMs are an excellent feature for smart glasses?
At its last Connect conference, Meta unveiled full AR glasses called Orion, which we will talk about later. Orion is still a prototype, and Meta hinted that it is 3-5 years away from being a product available for consumers to buy.
However, we got a big clue about the possible next generation of Ray-Ban Meta smart glasses from an interview Ben Thompson did with Meta's CTO Andrew Bosworth. The bit that caught my attention most was the following:
»So I’ll tell you another demo that we’ve been playing with internally, which is taking the Orion style glasses, actually super sensing glasses, even if they have no display, but they have always-on sensors. People are aware of it, and you go through your day and what’s cool is you can query your day. So you can say, “Hey, today in our design meeting, which color did we pick for the couch?”, and it tells you, and it’s like, “Hey, I was leaving work, I saw a poster on the wall”, it’s like, “Oh yeah, there’s a family barbecue happening this weekend at 4 PM”, your day becomes queryable. And then it’s not a big leap, we haven’t done this yet, but we can, to make it agentic, where it’s like, “Hey, I see you’re driving home, don’t forget to swing by the store, you said you’re going to pick up creamer”.
source: Stratechery
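As a toy illustration of what a »queryable day« could look like under the hood, here is a sketch that logs timestamped observations and retrieves the most relevant one by simple word overlap. A real system would transcribe audio and video with speech and vision models and use embedding-based retrieval; all names and data below are made up:

```python
# Toy sketch of a "queryable day": the glasses log timestamped observations
# as text, and a query retrieves the most relevant one. Plain word overlap
# stands in for real embedding-based retrieval so the example stays
# self-contained. All data here is invented for illustration.
import re
from dataclasses import dataclass

@dataclass
class Observation:
    time: str   # when the glasses captured it
    text: str   # what was seen or heard, transcribed to text

day_log = [
    Observation("10:15", "design meeting: the team picked green for the couch"),
    Observation("17:42", "poster in the hallway: family barbecue this weekend at 4 PM"),
]

def tokenize(s: str) -> set[str]:
    """Lowercase and split into alphanumeric word tokens."""
    return set(re.findall(r"[a-z0-9]+", s.lower()))

def query_day(question: str, log: list[Observation]) -> Observation:
    """Return the logged observation sharing the most words with the question."""
    q = tokenize(question)
    return max(log, key=lambda obs: len(q & tokenize(obs.text)))

hit = query_day("Which color did we pick for the couch today?", day_log)
print(f"[{hit.time}] {hit.text}")  # -> [10:15] design meeting: the team picked green...
```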
There are a few reasons why I am very excited about the idea of »super-sensing smart glasses«:
The product doesn't need any big technological leaps from the current generation of Ray-Ban Meta smart glasses.
Because it needs no technological leaps or complex displays, adding a few sensors to the existing Ray-Ban Metas seems inexpensive, which could make the product affordable enough for mainstream adoption (a price tag under $500).
The timing is right. If Meta shipped super-sensing glasses within the next 12 months, the timing would feel perfect. With technology innovations and new tech products, it is often not just about the product; the market timing has to be right, too. Consumers have to be ready for a product like this, and with smartphone makers pushing AI agents and the traction of the current Ray-Ban Meta generation, the timing feels right for mainstream adoption.
In addition to all of the above, both LLM providers and smartphone makers like Apple and Samsung are pushing more voice navigation for LLMs on phones. In my view, this can be a double-edged sword for smartphone makers. On the one hand, it improves the experience of using an LLM. On the other hand, the friction of talking to the LLM on your smart glasses shrinks, because you are already used to doing it on your smartphone. This is an additional bonus point for smart glasses adoption.
Interestingly, according to studies, 60% of the population in developed countries wears glasses. The Vision Council (a trade organization for the optical industry) reports that 75% of American adults use some form of vision correction, with glasses being the most common method. It seems like a small lift for these consumers to upgrade their »regular« glasses to smart glasses.
With all that said, I still see the smartphone as a necessity for many years, even for consumers who own smart glasses. It is not always convenient to navigate the glasses' LLM by voice; in some cases, text will remain the preferred method, and having a phone with a dedicated app for prompting and communication will continue to matter.
Sponsor
In my research process, I often use the AlphaSense library of over 300M documents from over 10,000 sources. I particularly focus on expert interviews with former employees, big customers, and competitors; a lot of really valuable insights can be found there. In partnership with them, you can use this link to get a 14-day free trial. With the free trial, you get access to their whole library of content and can even download expert interviews and other documents.
Highly recommend you try it out:
Full AR Glasses – Orion
Moving to the more distant future: at its Connect event, Meta unveiled Orion, its prototype for full AR smart glasses.
Orion aims to integrate holographic overlays with real-world experiences, providing functionality similar to smartphones without needing a physical screen.
Key features include:
Holographic displays: Users can interact with 2D and 3D content overlaid on the real world.
Neural interface: A wristband detects neuromotor signals, allowing users to control the glasses without physical gestures.
AI integration: Meta’s AI system enables hands-free experiences, including live video calling, messaging, and augmented reality gaming.
The glasses showed that a sci-fi future may not be as far away as we think. To be fair, though, Orion is still a prototype and is 3-5 years away from being a product consumers can buy. It also comes with a lot of limitations:
1. Price: The Information reported that it costs around $10,000 just to build one pair of Orion smart glasses.
2. Form factor: weight and thickness. While Meta achieved something very hard in the industry by getting the weight under 100 grams (98 grams), there is still room for improvement, as regular glasses typically weigh 20-40 grams. Orion's frames are also thicker than regular glasses.
3. Puck & wristband: The thing about Orion is that it doesn't have any wires, but it does come with an external puck and a wristband.
The puck is a computing puck that handles the processing and connectivity functions that the glasses need. The wristband serves as a neural interface from which it can read your subtle hand and finger gestures so that you can navigate it more easily.
Most of the limitations seem temporary, as we got some more answers from Thompson's interview with Meta CTO Boz: Boz said they already have their next several prototypes of full AR glasses, and they are THINNER and DRAMATICALLY CHEAPER.
On the expense/price point, Thompson hinted in the interview, »Just got to make it cost a thousand bucks,« and Boz responded, »And we have a path to it.«
I am not worried about the cost point for AR smart glasses, because all breakthrough technology is costly to build at the start, when you don't have mass volume and supply chains have not formed yet. History has taught us that once those supply chains start forming, costs drop drastically and quickly. Meta needs to target a $1k-$2k price range for the full AR smart glasses, and having Boz confirm at this early stage that they have a path to a $1k device is very encouraging.
The thickness and the weight will be a harder ask, in my view, as they are a technological problem, but it is too early to assess where the limitation will be. Then we have the limitation of the external puck and wristband. This one will stay at least for the short to mid term, as AR and LLM usage is very compute-heavy, and fitting it into a small glasses frame along with the necessary cooling seems like a challenging problem.

The neural wristband, on the other hand, enhances the user experience. Meta even went so far as to say that at some point you won't have to gesture with your fingers at all; the wristband will pick up your intent just from you thinking about the movement, which is really cool. Meta has been working on this »mind-reading« wristband since 2019, when it acquired CTRL-Labs, the startup that initially developed it, in a deal rumored to be worth between $500M and $1B. Many who have tried the technology, including former Meta employees, describe it as »magical«. So the wristband can be a limitation, but it can also be one of the more remarkable features that makes someone want to buy the whole smart glasses set.
But there is one question no technologist can avoid when it comes to consumer technology: what about Apple?
The fight: Apple vs. Meta
It is no secret that Apple and Meta do not like each other; there are multiple reports on their deteriorating relationship. Things escalated in 2021 when Apple implemented its iOS privacy changes, which caused signal loss for advertising companies like Meta and, in turn, revenue loss. In a recent podcast, Zuckerberg mentioned that Meta calculated it would be twice as profitable if it weren't for Apple and Google limiting it with their App Store rules and taxes.
All of this history has made Meta highly motivated to own the next computing platform. Zuckerberg has said multiple times that one of Meta's biggest competitors is Apple, and it looks like this competition will only become more critical in the future.
Since smart glasses will still be used alongside smartphones for some time, Apple would have some significant advantages if it entered the AR smart glasses race with Meta.
The most obvious one is that Apple could offer glasses that offload compute to your iPhone instead of an external puck, in the same way an Apple Watch could probably replace the wristband. This is perhaps Apple's biggest dream, as it means a user must buy a whole set of its devices.
This is a concern that Meta knows well, as Boz mentioned:
»so if I have a concern about Apple, it’s not the competitiveness or non-competitiveness of their headsets, it’s that they’re going to bundle into their ecosystem in a way that really makes it hard for us to compete.«
source: Stratechery
For Meta, the playbook here, as I see it, is relatively straightforward. Its edge over Apple has to come from technology: a better product with features that Apple constantly trails. On top of that, Meta's relationship with EssilorLuxottica seems highly important, as form factor is a key battleground for glasses. Meta's brand in consumer products is not established yet, so leaning on a brand like Ray-Ban can be very helpful when you go up against Apple, whose brand is like a cult to many people.
For Apple, this also feels like the first moment in the post-Jobs era where it will have to show significant technological innovation and advancement at a bigger scale. This time, it is going up against a highly motivated competitor that is not afraid to spend money to achieve breakthrough technology. Just look at Meta's Reality Labs spend over the last 5 years; the figures speak loudly:
Meta has spent over $70B on its Reality Labs unit (which is devoted to AR, VR, and the metaverse concept), and it doesn't seem like that spend will stop any time soon. Matching it is a big pill to swallow, even for a juggernaut like Apple.
For Meta, on the other hand, it is going to be a hard battle: it has to be the technological first mover, innovating for an extended period before its consumer brand is forged deeply enough into consumer habits to let it take its foot off the gas. The task is far from easy, but it's something Mark has probably wanted for years now: a head-on race with Apple for the next computing platform.
Summary
The reason I wanted to share some findings on this topic is that, while it seems like less of a focus right now, with the technology world's attention still on the pace of LLM development, I can't shake the feeling that besides LLMs disrupting Search, the killer consumer use case with a huge TAM for LLMs and SLMs is personal assistants. And while these will start on your smartphone, the addition of smart glasses to that experience might be a strong trend for the next 5-10 years, and one of the most important battles for companies that want a strong foothold in consumer tech and the platform business.
For any forward-looking investor, understanding this shift is the first step to finding opportunities and benefiting from them.
Until next time,
Disclaimer:
I own Meta (META) stock.
Nothing contained in this website and newsletter should be understood as investment or financial advice. All investment strategies and investments involve the risk of loss. Past performance does not guarantee future results. Everything written and expressed in this newsletter is only the writer's opinion and should not be considered investment advice. Before investing in anything, know your risk profile and if needed, consult a professional. Nothing on this site should ever be considered advice, research, or an invitation to buy or sell any securities.