S3 E7: Sensemaking AI - 3: the Future of AI

Released Thursday, 21st March 2024

Episode Transcript

0:00

Monica H. Kang: My curiosity with AI has continued, and one thing that has stayed on my mind is: are we perhaps over-worrying about some things and under-worrying about others that deserve more attention? And what are the ways we can actually enjoy the good parts of AI, celebrate them, and learn how to upskill? Fortunately, we have some guests today who are able to shed some perspective, light, and maybe some encouragement on how we can use AI, especially thinking about the future and today, in a more meaningful, impactful way. Two guests who have been in AI and technology for quite some time and are excited about the world's attention today.

0:45

Alexander Fred-Ojala: Meet Alexander Fred-Ojala in Stockholm, Sweden.

0:49

Monica H. Kang: And Stephanie Wong in San Francisco, Silicon Valley. Alexander used to be in San Francisco as well and moved back to Sweden, so he's very familiar with both scenes. The opportunity I had in having both of them was really having an honest conversation: how do we make sense of AI, and what do we do? So let's get a chance to meet them. Our first guest is Alexander, who is a global expert on AI, data, and blockchain applications. He has been the research director of UC Berkeley's AI and Data Lab, and a co-founder and technical director at the Berkeley Blockchain Accelerator, where alumni companies raised 500 million U.S. dollars from 2016 to 2020. He is also the founder and CEO of Predley, an emerging technology advisory firm, and the co-founder and CEO of Master Exchange, a trading platform for music rights.

1:47

Monica H. Kang: He is an AI and blockchain venture partner at the growth investment fund Nine Yard Equity in Stockholm, and he was awarded the Amazon Alexa Innovation Fellowship in 2018 and 2019 as one of ten faculty members globally. He has a successful track record of consulting for Fortune 500 companies, running startups, and hosting executive education programs for business leaders all over the world. And when I say all over the world, I mean it, because he's about to share what's going on globally and why this hype and interest in AI is truly a global phenomenon. As he looks back at his many experiences building companies and supporting companies with AI and technology, he's really excited about where AI is going today. So without further ado, let's go to Sweden and meet Alexander.

2:43

Monica H. Kang: Very excited to have Alexander here all the way from Stockholm, Sweden. Thank you so much for tuning in today. I guess the first question is, I think you will be one of my only guests in the AI series coming from Europe who actually understands the big picture through a global lens. Is AI only really an exciting trend in the States, or is it happening globally as well? What's really going on? Because you travel a lot, and you've been based in Europe and in the States. Tell us a little bit more about the big picture of the AI trend right now.

3:11

Alexander Fred-Ojala: Only in the States? No, I would say that artificial intelligence is definitely a trend that is global. Just like you have a startup scene in the Bay Area, you have it in Silicon Beach down in Los Angeles, you have startup scenes in New York and maybe even in DC, I'm not too familiar, but all over the US. The same thing is true in Europe, in Asia, everywhere I go: companies, industries, academia are trying to grapple with this new technology. And there is a lot of hype, there's a lot of buzz. But it's something that can be felt almost wherever you go, at least if you go into tech circles.

3:52

Monica H. Kang: Are there things that you're worried about as well as excited about where AI is heading in general, because you've been in this space for a while?

4:00

Alexander Fred-Ojala: One reflection that I've had quite a lot over the past years is that it is so difficult to project or predict exponential outcomes. What I mean by that is that this technology has been rapidly evolving over the years. Even when I was in conversations with some of the best AI professors in the world when I worked at UC Berkeley, they would say that AI is never going to be able to solve the board game of Go. Well, they didn't say never, but maybe they said it would take 30 years, 40 years, 50 years. And then Google subsidiary DeepMind, actually based here in Europe, in London, was able to do it the year after.

4:43

Alexander Fred-Ojala: And just take the progress with ChatGPT and how that revolution took the world by storm. I have done both research and a lot of applied work with natural language processing over the years, and back in 2015, 2016, 2017, there was a lot of hype around chatbots, that companies were going to be able to answer customer support tickets automatically on their website with a chat interface. Those systems were extremely difficult to build. It would take six months. There was a lot of data collection of support tickets that had already been answered, and labeling systems where you would have to manually put in how specific things should be answered.

5:30

Alexander Fred-Ojala: Now it's very easy to build extremely sophisticated customer support systems with these large language models and foundation models, where one example is ChatGPT and GPT-4, the model behind it. So the progress has been a lot quicker than what I thought, and I also know that many of the global experts in this area could never have predicted these advancements. That creates excitement for everything good we can do with the technology, but it also comes with a lot of risks. Just consider how easy it is today to spread misinformation or create deepfakes, basically to trick people with generative content at scale, because it's not costly to produce that content anymore.

6:21

Alexander Fred-Ojala: And also that you can clone someone's voice; you can have an AI agent speak in the same way, with the same terminology, as these people. There was a news article that came out only a couple of weeks back where a CFO of a Hong Kong company transferred $25 million to an account. And he did that because he had jumped into a meeting with other people from his management team. There was a Zoom meeting with several other team members, but all of them were deepfakes, and they instructed him to do the transfer of 25 million. Those kinds of things are already happening today. You only need seven seconds of someone's voice in order to be able to clone it.

7:07

Monica H. Kang: Wow. Alexander Fred-Ojala: And only a couple of pictures of someone. If I show my angle here and here, you can now do an almost perfect deepfake of me giving an interview in this setting.

7:19

Monica H. Kang: Well, I can tell our guests we do have the real person here today on the show, in case you're wondering whether I used technology to fake him joining from afar. I guess I need to cross-check and go to Sweden to make sure I'm talking to the real person. Thank you for bringing us into reality, with both the risk and the opportunity. I'm curious, if we fast forward, to your point about how exponential this speed of growth has been. I mean, even just the awareness: I feel like it wasn't until ChatGPT came around that the public caught on. Of course, AI experts have been seeing, as you pointed out, the changes and transformations and where it could head, but now there's much more public awareness, which has caused all of this.

8:01

Monica H. Kang: How much more exponential is it going to be? What can we expect, let's say, maybe even next year or in the next five to ten years?

8:08

Alexander Fred-Ojala: If I were to place a bet, then I would say that this progress is not going to slow down. It's only going to pick up speed. In its essence, if you break down what artificial intelligence is, what we're creating is external cognitive power. We have had our own brain power, where we make decisions, we do predictions, we analyze the world, we analyze data. And we humans have had maybe the greatest intellect on the planet for thousands or maybe even millions of years, back when we were other types of species. Now we're creating something that is external to our own cognitive power, that can make decisions for us and analyze information at scale. It can do research for us.

8:57

Alexander Fred-Ojala: These technologies are already doing fundamental research today. I read an article about discovering antibiotics that were effective at treating bacteria that are resistant to other types of antibiotics. Finding new candidates in that area of research has been very difficult for us humans, but by applying AI to this problem, you could find relevant candidates for drugs that were effective against these bacteria, and three of them were found in two weeks. That's remarkable, and a revolution. DeepMind solved one of the fundamental problems in biology with their AlphaFold 2 model, which can predict how proteins fold into 3D structures by only looking at the sequence of amino acids. And there are thousands of these examples. If you really look into AI news on a weekly basis, you will see that there are revolutions happening.

9:59

Alexander Fred-Ojala: Most of them are good and positive for the world: discoveries in healthcare, how we can combat climate change, how we can solve the housing crisis, how we can automate work that is tedious for us humans. I'm fairly certain that in five to ten years we won't need truck drivers like we have today. And maybe people aren't born to sit in a vehicle all day long and deliver goods. The promise of self-driving cars has been around for a long time, and I think that self-driving cars, in most situations, already drive better than we humans do today. But the transformation from human driving to self-driving or autonomous vehicles takes some time, because society needs to adjust, and there are still risks with autonomous vehicles today.

10:54

Alexander Fred-Ojala: And I think they need to be ten times safer or 100 times safer before we see broad scale adoption. But progress is definitely not going to slow down. It's only going to pick up speed, because AI will help us make progress when it comes to artificial intelligence, which is sort of crazy.

11:11

Monica H. Kang: That's amazing and crazy at the same time. No, thank you for bringing us perspectives and insights before we continue thinking about the future and what we do now, I want to bring us back to the past. I mean, Alexander, yourself, you've been in this space for a while, but I'm very curious if you remember the very first time, how you got into the space of AI in this world of technology. Where were you? Why were you interested? Bring us back to that moment, because I'm curious, how did you even get started in this whole space?

11:42

Alexander Fred-Ojala: That all depends on how far back you want to go. One thing is, in order to do something in this space, I think it's a combination of knowing a little bit of software engineering or computer science together with mathematics and statistics. Those are sort of the core fundamentals going into the concept of AI. I remember when I got my first computer, when I was maybe five years old. That was one of my first loves. I truly cherished that computer and the fact that I could instruct it to do things, for example with programming, even at a very young age. I created websites in the early days of the Internet, and just the fact that you had a tool where you could build things, just like you can build things with Lego in the physical world, or with sticks in the forest.

12:36

Alexander Fred-Ojala: All of a sudden you could build digital artifacts and become good at that. When I started university, I enrolled in engineering physics; that's what I did for my bachelor's. But all of the courses I had on applied math and applied statistics were my favorite courses by far. So for my master's, which I completed at UC Berkeley, I chose mathematical statistics. And this was just before data science became a popular term. I would never have thought that my geeky area of mathematics would become such a globally hyped, sexy term like artificial intelligence. That was not on my roadmap if you go a little bit more than ten years back. That's kind of how it started.

13:25

Monica H. Kang: I guess, cheers to all the folks who are studying math, and, jeez, what a nice time to be living in, to be excited about math and know that there is so much potential, as you have pointed out. And it's a really good bridge, because one of the things I was really excited to explore is how you delved, career-wise, into all these different paths and, in turn, grew your expertise. So I want to visit some of those chapters, because I think it's super fun to learn about your multiple founder and co-founder experiences: Predley, the emerging technology advisory firm, and Master Exchange, the trading platform for music rights.

14:01

Monica H. Kang: I'm curious, how did you first think of these ideas, and how did you know those were the problems you wanted to solve?

14:08

Alexander Fred-Ojala: In my professional journey, I've mostly been involved with startups, and I've come in very early. I think it's been seven or eight companies that I've started from scratch together with other people by now. The first one was when I was still studying for my bachelor's down in southern Sweden. It was a jobtech company where I came in as the founding employee, and I learned a lot about raising venture funding. I was also part of a PR agency down in southern Sweden that spun out a company that got into Y Combinator. Then I was at UC Berkeley for many years. There I was a guest professor; I co-created the applied data science course and co-taught it with the professor who was the supervisor for my thesis.

14:57

Alexander Fred-Ojala: Being research director for the data lab was also something that I did, and I worked extensively at the Center for Entrepreneurship and Technology, which is a department in the College of Engineering. They are trying to foster entrepreneurs from among the engineering students, and I think that Venn diagram, that overlap of people who have an entrepreneurial urge and also have some technical engineering skills, those are people very similar to myself. It was one of the best environments I could ever have worked in, and I learned a lot. But then, in early 2020, I moved back to Sweden. My wife is also from Sweden, we were going to have our first kid, and we decided that it was going to be easier to do that in Sweden.

15:45

Alexander Fred-Ojala: For me, it was a no-brainer to start my own company. So I started Predley. It's an emerging tech consultancy, we call it an AI consultancy, and it's still around, with many engineers and many big clients today. In that company I've done a lot of executive education, and I've helped investors look at deals both in the AI space and in the cryptocurrency and blockchain space. And from that company we have spun out a couple of other companies, like a venture studio model. One of them was Master Exchange. When we got together with the other co-founders of Master Exchange, to me it was such a beautiful idea to be able to empower musicians so that they can turn to their fans instead and say, you can be part of the upside of the songs that I've created.

16:35

Alexander Fred-Ojala: You can co-own them together with me if you invest a little bit. Instead of these artists going to, for example, major record labels and signing away their full entitlement to royalties and income in the music industry, they can share that with their fans, creating something like a Robinhood for music investments. Because I do believe that people have a much greater relationship to music than they have to stocks and shares that you would buy on a stock trading platform.

17:10

Monica H. Kang: I love that. Thank you for sharing and bringing us back down memory lane. One of the things I'm hearing as you reflect is that you had that drive to create, to build. Where does that come from?

17:26

Alexander Fred-Ojala: That's a very good question. I've always had that type of drive. I want to do something that has a positive impact in the world, and there are many ways of doing that. And I constantly want to challenge myself. I want to learn. When you're forced to create something yourself and manifest it into existence, I feel that I'm fulfilled at all levels: where I want to be challenged, how I want to operate, and what I want to do. With that said, I have many days where I feel that everything is upside down, when I question and wonder why I've taken on these challenges. They can be, or feel, extremely big at times, while other times I'm dancing around and feel like the luckiest person on the planet to be doing what I'm doing.

18:21

Alexander Fred-Ojala: But I've had an urge to sort of create movements and manifest things that are ideas into reality and that those things can have an impact.

18:33

Monica H. Kang: Thank you for sharing that. To build on it, I want to revisit your really thoughtful insight: hey, we can all be excited about AI, but at the core, if you want to build AI, you do need these technical skills, just to make that clear. And I think that's a really key component to address. As we're all learning about this space, whether it's learning how to be a better entrepreneur, what it means to be a leader, how to be an expert in AI, or how to navigate all of that: if someone wants to be a better AI expert, AI leader, and contributor, what should they learn, in addition to the technical skills that you just listed?

19:10

Alexander Fred-Ojala: Everyone has the opportunity to become really good at utilizing these tools and techniques, the software that is being created today, and the models and algorithms, because this area is very nascent. It's not like it's been around for decades. People are still figuring out what we can do with these very powerful models and algorithms when it comes to applications and how it's all going to work. So it's almost like when the Internet came out. I talked about that before, but when we had the global rollout of the Internet in the 90s, one of my best friends' older brothers was only 16, but he made a lot more money than most of my peers and my friends' moms or dads, because he was creating websites for other companies, and every company back then wanted to be on the Internet.

20:05

Alexander Fred-Ojala: We could also see that with people who started app development companies; they were doing app development for the iPhone when that came out. We're going to see the same type of revolution today. Companies are going to try to figure out what they can do with this technology and how to implement it, and it's accessible to everyone to be part of that journey. It used to be the case that you had to be able to program, for example in Python or in C or in Java, in order to create software. I know maybe three or four programming languages fairly well. Now I can program in almost any language by utilizing tools like GitHub Copilot and ChatGPT, asking questions like, okay, how do I write this syntax in the language Go?

20:52

Alexander Fred-Ojala: Or how can I do it in Scala, or translate this code from Python to Scala? You can utilize AI to become really good at AI, the same way I said that AI is self-improving on top of itself. Today, if you want to build a retrieval-augmented generation system, RAG systems are state of the art for being able to fetch specific company data, let's say from an ERP system at a company, where the answers cannot be probabilistic: the model cannot guess what your revenue numbers were for the past quarter, or get that wrong. You can complement GPT-4, the large language model, with a RAG system, and then any type of information that needs to be some sort of ground truth, it's able to fetch in a reliable way.

21:44

Alexander Fred-Ojala: This is something that could be relevant, or that I would say is relevant, for most companies on the planet. But to build a system like that, you need to dedicate maybe a week of time to learn these technologies, and maybe have long sessions where you ask questions with ChatGPT. But if you are curious, and if you're willing to pick up at least basic technical skills, terminology, and understanding, I would say that the barrier to entry into the field of AI today is a lot lower than the barrier to entry has been for all other fields of technology. Because now you can work with these tools to become very effective and valuable in the outputs that you create.
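
To make that concrete, here is a minimal sketch of the RAG pattern Alexander describes: retrieve ground-truth company data first, then hand it to the language model together with the question so the model does not have to guess. The document store, the keyword-overlap retrieval, and the prompt wording below are illustrative assumptions; real systems typically use vector embeddings and a provider SDK rather than this toy setup.

```python
# Minimal RAG sketch, assuming a toy in-memory document store.
from typing import List, Tuple

# Stand-in for company data you would normally pull from an ERP system or database.
COMPANY_DOCS = {
    "q3_revenue": "Q3 revenue was 4.2 million USD, up 8% quarter over quarter.",
    "q3_headcount": "Headcount at the end of Q3 was 46 full-time employees.",
    "refund_policy": "Customers may request a full refund within 30 days of purchase.",
}

def retrieve(question: str, k: int = 2) -> List[Tuple[str, str]]:
    """Score documents by naive keyword overlap and return the top-k matches.
    Real systems use vector embeddings, but the idea is the same."""
    q_words = set(question.lower().split())
    scored = [
        (len(q_words & set(text.lower().split())), doc_id, text)
        for doc_id, text in COMPANY_DOCS.items()
    ]
    scored.sort(reverse=True)
    return [(doc_id, text) for score, doc_id, text in scored[:k] if score > 0]

def build_prompt(question: str) -> str:
    """Augment the user question with retrieved ground-truth snippets so the
    model answers from provided facts instead of guessing."""
    context = "\n".join(f"- {text}" for _, text in retrieve(question))
    return (
        "Answer using only the context below. If the answer is not in the "
        f"context, say you don't know.\n\nContext:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

if __name__ == "__main__":
    prompt = build_prompt("What was our revenue last quarter?")
    print(prompt)  # This augmented prompt would then be sent to a large language model.
```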

22:23

Monica H. Kang: Create to build on that. I wonder then almost then, for AI experts who's been around the corner, feel like, wait, now there's more people around here, is there anything they need to upskill even more because they should project and share their expertise.

22:40

Alexander Fred-Ojala: Everyone is getting disrupted, right? To some extent. But traditional machine learning systems and the way that you deploy them, and all of the terminology and concepts that you need to know in order to build systems like this at scale, and to build secure systems, that's still not accessible to everyone right away. I mean, we are lowering the thresholds. One reflection I have from running my own companies today: we have made it mandatory for everyone who is doing anything related to software engineering or technical development, and even our lawyers and our designers, to use AI tools. All of them are using AI tools today. It's not a question of, okay, some want to do it and others don't. It's mandatory to use it.

23:28

Alexander Fred-Ojala: And one signal, or a pattern, that I've seen is that the senior engineers are the ones who are most reluctant to change their processes, habits, and behaviors. They are usually the last ones to buy a ChatGPT Pro license or Gemini Advanced or GitHub Copilot, and actually install it and use it in VS Code. But even the lawyers that I work with, everyone can see that there is so much value in utilizing these tools, and especially for everyone who is in a junior position. If you're just entering the job market, if you land your first job, all of a sudden you have an oracle that you can ask questions, that can help you produce output.

24:22

Alexander Fred-Ojala: No matter if you're doing marketing, design, software engineering, writing essays, or copywriting, everyone has these tools to become more effective and more knowledgeable. Now, with that said, in areas where I have a bit of deep expertise, I can see that many times these models will give me answers or outputs that are a little bit faulty. Maybe the junior engineers, for example, cannot do that quality assurance assessment, but that is also something that you get better and better at. It's remarkable how we're democratizing knowledge and also the ability to create something valuable. I love that.

25:07

Alexander Fred-Ojala: Sam Altman, the CEO of OpenAI, said a couple of weeks back that we will see the first one-person billion-dollar companies, because it's possible to run a company as a solo entrepreneur and do it in the same way that would have required, I don't know, 2,000 people back in the day, or maybe 50 people just three years back. Now, together with the AI tools, you can actually be by yourself and handle everything that is needed to run a unicorn company.

25:42

Monica H. Kang: Wow, that's a real game changer and perspective shift in how we position ourselves and show up. The other thing I'm curious about is, for those who want to be better entrepreneurs, and you have done that several times as a serial entrepreneur, what advice would you share, and what skills are going to be even more important because of all these changes and shifts going on?

26:04

Alexander Fred-Ojala: The first piece of advice is just tackle the challenge that you're passionate about. Don't question, should I be an entrepreneur or not? If you have any type of burning desire to actually do this, then you should definitely jump at the opportunity. I have so many friends who have said for many years that they would love to be entrepreneurs, but now they're satisfied: they have job security, they might have started families, et cetera. There's never a perfect time to jump into entrepreneurship instead of pursuing a more traditional career. So just do it right away and don't think about it too much. If you're a first-time founder, you're going to encounter a lot of problems. You're going to learn a lot.

26:53

Alexander Fred-Ojala: You might actually be successful with your first venture, but the statistics say that most founders fail. And it's okay to fail; you should see it as a learning journey, and then tackle a new opportunity. You will meet people along the way during these adventures who might become your best friends, or at least colleagues who will contribute vast amounts of value on future journeys. And everyone, no matter if you're at a traditional company, if you want to be an entrepreneur, or whatever you do in life: experiment with AI tools. I talked about the habit of using AI tools.

27:37

Alexander Fred-Ojala: I have to remind myself that I should go into Perplexity, or open up ChatGPT, or go to Gemini or use GitHub Copilot on a daily basis, more than I use Google, et cetera. Because I really want to get into the habit of utilizing these very powerful algorithms, because, as we said before, progress is not going to stop here, so it's going to vastly affect how we live our lives and how we interact with technology, products, and platforms. If you get used to that early on, it pays off. I know that I benefited greatly from being able to use search engines when I was a kid, because it was very easy to look up facts about Shakespeare. Other people went to the library, and I could do it on AltaVista or Yahoo or early Google.

28:25

Alexander Fred-Ojala: And we have the same type of benefit today, that if you're using these AI algorithms, tools and models, then you will be able to be more productive, more effective, and more knowledgeable.

28:37

Monica H. Kang: Wow. Thank you for sharing those. I can't believe how fast time has flown by; we've been enjoying the conversation so much. Thank you for sharing all your wisdom and insights. As we wrap up, a few final rapid questions. As I'm listening to all this, the number one question in the back of my mind is, how in the world is Alexander managing all of this with his time? And he has a life, mind you, as you heard, with a family. So, any tips you can share on how you balance your personal and professional life when you are so involved in your professional life?

29:10

Alexander Fred-Ojala: I think it's very important to know where your anchors are in life. For me, I wind down with my family, I wind down with friends, I wind down through meditation, and I wind down through exercise, by running or going to the gym. And I need to do several of those every day. And also to take care of myself by trying to sleep well. That's very difficult right now, since I'm by myself with both of my kids, so, crazy nights. But as long as you have some stability and you feel that you're energized, and you work, if possible, together with people who give you energy instead of draining you of energy. I'm very fortunate to be in a professional setting where that is the case for me.

30:02

Monica H. Kang: Love that. Alexander Fred-Ojala: But yeah, some days I can also be exhausted.

30:08

Monica H. Kang: Second to last question. What's a piece of final wisdom that you want to share with our listeners no matter where they are in their journey?

30:15

Alexander Fred-Ojala: Last or parting words? I would like to say to everyone listening to this that we have the opportunity to create utopia with this technology. We can be so much more productive, we can be so much more knowledgeable, we can learn so many new things. We can create wisdom with this technology that wasn't attainable before. And that is only if it is applied in the right way, by people who have their heart in the right place, and also by people who are ambitious, who set lofty goals. But we should know that we can truly create global abundance with this technology. We could have better health prospects, we could live happier lives, we could tackle the global challenges that we have, like climate change, housing crises, diseases, et cetera.

31:14

Alexander Fred-Ojala: AI will be able to help us with all of that, but it's about utilizing it and someone needs to work on it. So I would say that everyone who is excited about it should jump into the pool and start swimming in the right direction as a global species or as humanity.

31:33

Monica H. Kang: And last but not least, what is the best way our fans and friends can follow up with you to stay in touch with you?

31:40

Alexander Fred-Ojala: You can connect with me over LinkedIn, or you can follow me on X, formerly known as Twitter, even though I don't post that much there nowadays. And if you really want to have a deeper conversation with me, then ask me for my email once you have connected with me over LinkedIn.

31:59

Monica H. Kang: Perfect. Well, thank you so much, Alexander, for joining us. And thank you all for taking a moment to tune into our conversation at Dear Workplace as we untangle and navigate what in the world is going on with AI and how we upskill and learn what's going on. Thank you, Alexander, for sharing your wisdom with us again. Folks, you know the drill: we share our show notes and links in our blog, so come find them. If you can't, send an email to info@innovatorsbox.com and I will see you soon.

32:27

Monica H. Kang: Thank you. Thanks again, Alexander, for that wisdom and insight. You could really hear the teacher voice in him, because he's really passionate about wanting to make sure AI and its innovation are more accessible and democratized among both technical and non-technical audiences, which was perfect for our episode and conversation. Thank you again for stopping by and sharing your wisdom with us. But we haven't finished learning; we want to continue on. So I have invited my other friend, Stephanie Wong in Silicon Valley, at Google to be precise, who can share a little bit more about the type of AI work she does and how she even got to the work of Google AI.

33:11

Monica H. Kang: Now, of course, because of the type of work they do, she can't talk specifically about some of the Google AI work, but there are still a lot of things she can share to help us better understand what is going on with AI. As for Stephanie, there are a lot of things you'll want to learn about her. She is a seven-time award-winning speaker, AI product leader, investor, and creator with over ten years of experience at major high-technology firms. As a leader at Google with a mission to build the future of AI while blending storytelling and technology to create remarkable online content, she brings a unique combination of technical expertise, world-class public speaking skills, sales and marketing expertise, and the ability to lead organizational change.

33:59

Monica H. Kang: She's building the first generative AI developer products at Google, and being there during the rapid dawn of gen AI across the industry has given her rare insights from the ground floor. She has deep experience leading developer go-to-market and product strategy as the first head of developer engagement, acting as the voice of developers for cloud computing. She has also created over 2,000 videos, blogs, courses, tutorials, podcasts, and posts that have helped developers learn fundamentals, solve their toughest challenges, and pass certifications all around the world. You'll find her YouTube channel and her shows where she talks about a lot of these themes in her role as an AI product leader. So without further ado, I'm very excited to have Stephanie, because the other bonus you'll want to catch at the end is how she got into this space, which was very special. So meet Stephanie.

35:04

Monica H. Kang: Very excited to have Stephanie Wong here all the way from Silicon Valley, San Francisco. Thank you so much for joining us, Stephanie. First question for you as we are making sense of this whole AI world: are you worried? Excited? I'm curious about your gut instinct on what is going on, because as somebody who's working in AI, I wonder if it feels weird, like, excuse me, we've been doing this for a while and you all are just now picking up on it.

35:28

Stephanie Wong: I think it's easy for a lot of people to have natural concerns over AI, but having worked in AI specifically over the last year, and really it's been around my career for over ten years, I have never seen this type of velocity targeted at one space so much at one time. So I personally am very excited about the possibilities of AI. As many of you know, there's immense potential to improve productivity across all industries of work, from application developers to content creators. And those are two areas that I'm invested in, as someone who's in tech and also a creator myself.

36:04

Stephanie Wong: And I've already seen spikes in productivity for myself, especially in content creation, with all of the intersection of media and AI to help with scripting, media creation, images and B-roll, and transcription services, and you're going to start to see that across all industries and sectors. So I'm very excited about AI. Maybe one area I'm worried about is that I think a lot of people don't realize that the quality of AI models really comes down to the quality of the data. So it's really important for people and companies to have a healthy amount of skepticism about model outputs. And many leaders and ICs at companies need to realize they should be very involved in model consumption as well as model creation.

36:53

Stephanie Wong: And so there needs to be specific attention towards the data quality that goes into feeding models, especially for fine tuning your own models or adding in your own company data sources.

37:05

Monica H. Kang: Well, I want to piggyback on that last point, because as a non-technical person I'm still making sense of it when we talk about the importance of this data. The two things I consistently hear, building off of what you just shared, are that there has to be a lot of it, and that it has to be very thought out and, as you pointed out, very intentional. And maybe, excuse my ignorance, but I want to speak on behalf of others as well: how do you know what is ripe, good data? Because it's not just about quantity, as you said; it's finding the right quality. How do you know it's the right quality?

37:38

Monica H. Kang: Because I think one of the things we're seeing more of, with all technologies, is plain misinformation for the sake of getting more things out there. So help us make sense of it. How do we know it's the right data?

37:51

Stephanie Wong: Well, one of the most powerful things about LLMs and foundation models is that they encompass the Internet's vast amount of data. And for companies... what's an LLM? Oh, a large language model. So it's essentially a foundation model, or a base model, that is trained on all sorts of data from the Internet's vast wealth. But it can also be made up of multimodal sources, too. So you can essentially translate media or audio or video data, and that can go toward a foundation model. And essentially, when the model makes predictions, like, what's the next word in this sentence, or, help me create a story about my sister and me in Hawaii, it is doing next-word or next-token prediction based on what it's learned from this vast amount of data.

38:42

Stephanie Wong: So you can think of it like it's consuming information from thousands of books and learning from the environment around it to then make predictions on how to create what's next. And creation can be in the form of multimedia or text, as you've seen. One of the things I'll say is, models consume so much information, and many companies think, well, how do I gear the model towards a specific task? Let's say you're a medical company or a legal company and you want to make sure it's highly accurate in your domain. Well, you can't expect that if you train a puppy, it's going to understand how to sit and speak, or, if it's a parrot, say the words that you want. Training a model is often like training a puppy.

39:31

Stephanie Wong: You have to be very specific in what you're feeding it, the inputs that you're feeding it, so that it can perform the way that you want with good behavior, according to human values.

39:39

Monica H. Kang: Right? Stephanie Wong: And so I think for many people out there, when they're thinking about model quality and data quality, think about the gaps in performance. It's important to test the model against a set of instructions or tasks that are important to you, that you want it to perform well in, measure, and understand where the gaps exist, so that when you think of data quality, you can really think of targeted improvements. And then, in terms of the format of the data, it's often giving it a whole set of data to do additional training on the model.

40:09

Stephanie Wong: Or you can do something called instruction tuning, where, for example, you give the model very specific question-and-answer pairs for a task, like: help me format this patient data, or this patient billing data should look like this. That way the model can understand, okay, this is the style, the format, and the structure that the answer should have. And you can again start to gear the model towards more accurate responses in the correct formats, according to human values, and reduce inaccuracy, or hallucinations, as we call it in the industry, when it makes up answers that don't exist. And there are other techniques coming out currently to help improve model quality, like something called retrieval-augmented generation, which I can talk about later, but lots is happening in the space. It's moving quickly.
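
As a rough illustration of the question-and-answer pairs Stephanie mentions, here is a minimal sketch of what instruction-tuning data for a billing-format task might look like. The field names and the JSONL layout are common conventions used for illustration, not a requirement of any particular training framework, and the records themselves are made up.

```python
# Minimal sketch of supervised instruction-tuning examples (hypothetical data).
import json

examples = [
    {
        "instruction": "Format this patient billing record.",
        "input": "jane doe, visit 03/04/2024, x-ray, $240",
        "output": "Patient: Jane Doe | Date: 2024-03-04 | Service: X-ray | Amount: $240.00",
    },
    {
        "instruction": "Format this patient billing record.",
        "input": "r. smith 2024-02-11 blood panel 85 dollars",
        "output": "Patient: R. Smith | Date: 2024-02-11 | Service: Blood panel | Amount: $85.00",
    },
]

# Write one JSON object per line (JSONL), the shape many fine-tuning
# pipelines expect for supervised examples.
with open("billing_format_examples.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```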

40:54

Monica H. Kang: I'm curious, what does that mean?

40:56

Stephanie Wong: So the acronym for it is RAG, and it's an area that I'm quite excited about right now, because it is a lower-cost and really helpful way to make models more accurate with more recent information. So, as you can imagine, models are trained, they're fine-tuned, they're instruction-tuned. This all takes vast amounts of compute resources. But let's say you want it to have access to more recent information that's found in recent articles, or access to company resources or data sources that change very frequently. Retrieval-augmented generation is a way for the model, when a prompt is submitted, to go and retrieve these external sources, or this more recent information, from other places, like a different database. So this is a way to augment a model's capabilities and do it with fewer resources.

41:49

Stephanie Wong: And so this is not a replacement for training, but it's another technique that companies can use to give the model access to their code bases or their data sources, or access to more recent information.

42:01

Monica H. Kang: As a non-technical person digesting it, I think one visual... I love your puppy analogy earlier, by the way, and I keep thinking, correct me if I'm wrong, but is this the right way to say it? The first example you shared was like training the puppy in one or two consistent actions and behaviors. This model, the RAG you're describing, is like, now that the puppy is trained, even if the leash is off and it's running around, it will know how to come back and find you. You don't have to tell it every street, okay, walk this street, don't walk that street. It's not that. It's gaining the agility to have that flexibility. Is that kind of how I can understand it?

42:37

Stephanie Wong: Right. And you can imagine, as young children and infants, the way humans learn about the world is through exposure to various circumstances and scenarios. It's a similar concept with models: how much exposure have they had to these scenarios? For example, have they had exposure to performing complex mathematical calculations? Have they had exposure not just to creative tasks but to highly logical tasks beyond calculations? Maybe it's a deep physics problem, or you want it to be able to consume an image of someone's math homework, transcribe it into text and numbers, understand where that student got it wrong, and come up with correct answers. So it's really about providing examples for it to learn from.

43:24

Stephanie Wong: And as we target it towards all these specific use cases and tasks, it's about, again, tweaking the model ever so slightly to make sure it's doing it the way that humans expect.

43:35

Monica H. Kang: One thing that's coming to mind as you're sharing is just how many skills and how much knowledge the creators on AI teams have to have. I'm curious, for you, having been in the industry and also being in that environment, what skills do you think are going to be even more in demand to continue to be good at AI, whether people are creating the whole models or needing to put in the data? What would you envision becoming even more important?

44:00

Stephanie Wong: Yeah, I mean, it's interesting, because AI and its tools will be so pervasive across all industries and types of roles. And so, as you said, there are folks who are helping to create AI tools within companies for the first time, and there are consumers of AI tools. From my perspective, those who are helping to create AI tools, or even just prepare data quality for AI tools, are having to learn the same concepts that we're talking about today, which is what actually goes into supporting an AI model's outputs and what matters, and how can I adjust my workflow, whether I'm a software developer or a web designer, to make sure it's providing the correct outputs.

44:43

Stephanie Wong: And so that can involve preparing data sets, knowing when to use instruction-tuning data versus using RAG, and how to perform prompt engineering to adjust the model's outputs. And we didn't even talk about that. Prompt engineering is a technique that many folks are learning about today: how can I give instructions in the prompt to the model to get it to do what I want, essentially breaking down issues into multiple parts? So there are lots of techniques that folks need to learn about to make sure they really understand what's happening behind the scenes with models and not just take things at face value. I also think, for consumers of AI tools, it's about staying on top of what's out there, because many of you might have heard this already: it's not AI that's going to replace your job or take your job.
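
A small sketch of the prompt-engineering idea Stephanie describes: spell out the instructions and break the task into steps instead of asking one vague question. The exact wording below is illustrative, not a prescribed template, and the homework example is made up.

```python
# Two prompts for the same task; the structured one decomposes the problem.
vague_prompt = "Check this math homework: 12 x 7 = 94"

structured_prompt = """You are grading a student's math homework.
Work through these steps:
1. Restate the problem in your own words.
2. Compute the correct answer, showing your work.
3. Compare it with the student's answer and say whether it is right or wrong.
4. If it is wrong, explain the likely mistake in one sentence.

Student's work: 12 x 7 = 94
"""

# Both strings would be sent to the same model; the structured version
# typically yields a more reliable, checkable answer.
print(vague_prompt)
print(structured_prompt)
```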

45:30

Stephanie Wong: It's those who learn how to use AI effectively who are going to upskill, become more productive, and maybe accelerate faster in their roles. So when you're thinking about utilizing sales tools, digital marketing tools, content creation tools, social apps, multimedia productivity apps, or using AI to help with text-to-video or other multimodal work, whatever aspects of your job you think can be accelerated by tools like that, go ahead and explore them with a healthy amount of skepticism, but also a healthy amount of curiosity about how they might facilitate your workflows.

46:08

Monica H. Kang: I want to piggyback on your wonderful comment about upskilling. Tell us a little bit more: for both technical folks and non-technical folks, what would upskilling in AI look like?

46:19

Stephanie Wong: Upskilling in AI for technical folks, as I mentioned, is going to involve, again, data quality and things like prompt engineering. So say you are a software engineer who is now tasked with helping to build an internal tool that manages company HR data: you're building a chatbot that answers questions about HR data, like how many days off you have left, and performs calculations for employees. You are still responsible for understanding the business logic, right? The model is new to this, so that part of your job likely doesn't change, nor does figuring out the best way to perform that kind of calculation and make it the most useful for your users. Now, the new part of upskilling in the AI world is: how do I understand the various aspects of building an AI tool?

47:09

Stephanie Wong: Between the instruction tuning, the prompt engineering, et cetera, that as an engineer I can utilize effectively. One of the things that I've seen is just the immense amount of experimentation it takes. I think there is a misconception that models can do anything, that you can just throw a task at them and they will perform fairly effectively. But for many tasks requiring very specific domain knowledge, like a company's HR system, the model is going to have to learn, and that's going to take a lot of experimentation. So get into the rhythm of evaluating the model and defining metrics for it: what constitutes accurate and okay versus not okay, what are the risks when it is inaccurate, and what level of risk can we handle as a team for this particular use case? If it's medical data, the tolerance is probably very low.

47:56

Stephanie Wong: For HR, maybe it's a little bit higher tolerance. Really getting together with the team and defining the core metrics that matter. And then from there, performing experimentation, constantly measuring the model's outputs, adjusting as needed over time, and then just continuing to iterate from there. So it's going to require a whole level of dedicated practice around this.

48:19

Monica H. Kang: Thank you so much for breaking that down. And again, I'm just thinking about how fast technology is changing. I mean, I still remember, way back in elementary school, it was a cool thing to know how to type, a cool thing to know how to text. Nowadays, it's like, what? Especially for those who are leaders, they've seen so much change even before that. I was actually at the Computer History Museum when I was in San Francisco last time, and it was a humbling reminder that, yeah, back in the day, computers were new, the Internet was new. And as we think about AI and everything that you're sharing, I'm reminded of how important it is to keep things in perspective.

49:03

Monica H. Kang: As you have pointed out, as somebody who has been in this space, there have been a lot of changes, but also, perspective-wise, hey, we still have a long way to go. And so I'm curious, for folks who want to get ready and be on top of it this year, what would you point out? Like, okay, you might want to look out for these things. If you want to be on top of AI, what are the trends you would point out?

49:24

Stephanie Wong: Well, first of all, I was just speaking to my fiance last night about what we think the world will look like in 100 years. And we were both discussing that 100 years ago was just the advent of automobiles, for example, and other modes of transportation. And then all of a sudden, in the last couple of decades, we have accelerated innovation to a point that most people back then would never have imagined. So you can imagine that in 100 years, it's going to accelerate at an exponential rate. It's hard to say exactly how you can prepare for that future. But I can tell you, starting now, I think people who are interested in the technology sector, or who just want to be consumers of it, should stay on top of tooling.

50:05

Stephanie Wong: I mean, I think a lot of people are using the standard tools, Google Docs, productivity tools, so how can you augment that further, by understanding in your current day-to-day workflow how you can utilize AI? For example, I have friends who blog often, and instead of going to ChatGPT and typing in a prompt and asking it to help draft their blogs, they use an audio transcription service on their phone that allows them to just babble into their phone whatever is on the top of their mind, and it helps format that messy thought process into social media posts, a blog post, a press release, whatever. So it's just about being on top of the tools that are out there.

50:49

Stephanie Wong: And one of the ways I do that is by staying on top of the groups I'm a part of on LinkedIn, on X, on social media, just keeping an ear or an eye out for the latest tools by following people or groups that are following the latest in generative AI or AI. So be a part of the community, join communities, talk about it with friends, and just generally stay up to date with the latest that's happening.

51:14

Monica H. Kang: Could you shout out to some groups that we should follow and check out?

51:19

Stephanie Wong: Oh, well, there's actually a LinkedIn group that has, I think, over two or three million followers now. It's just called Generative AI, and you'll see the latest innovations and interesting developments in the space. So check that out, and then, shameless plug, if you follow me on LinkedIn, I often share my learnings about AI and technology news, including AI risks and what AI hackers are doing these days, and how you can prepare to remain more vigilant out there.

51:53

Monica H. Kang: So building on that, one of the things that's exciting, and that makes me feel hopeful, is that it's never too late to learn and upskill. One of the reasons I was really excited to have you was also your journey into tech and AI, because if I remember correctly, it didn't necessarily start that way, even though your interest had been piqued for a long time. So could you walk us through how you got into tech?

52:16

Stephanie Wong: Absolutely. So I didn't have a concerted plan to enter tech. I actually majored in communication studies, and my minor was in digital humanities, so there was a tech element there. But my first job was at Oracle as a sales engineer, where I was able to work with customers and also be a technical consultant, and I learned a lot on the job in terms of getting that first exposure to enterprise technology. I continued to gear myself towards cloud computing, platform as a service, and infrastructure as a service, and that led to me getting recruited by Google to work also as a sales engineer. So I started from a different kind of CS degree, communication studies, into a world of folks who are the most brilliant engineers with real, actual CS degrees, computer science.

53:03

Stephanie Wong: And so it's just been a long journey of me continuing to place myself in uncomfortable situations, knowing that I can build strengths in helping to communicate amazing technology stories, and it eventually led to the AI product world. That transition consisted of creating content about technology and gaining more visibility within Google to create content for all of our cloud products for developers. And then, over the span of six years, opportunities kept landing in my lap, including the opportunity to work on generative AI products at Google. So if I can make it, I think lots of people can. I would say, stay encouraged: as long as you say yes to opportunities that both challenge and excite you and continue to place yourself in uncomfortable situations, you will magnetize all of the amazing opportunities coming towards you.

54:01

Stephanie Wong: And so I would just say keep learning, keep putting yourself out there, connect with networks that you have online and in person, and ask for those stretch goals.

54:11

Monica H. Kang: Love that. Thank you so much for reminding us that it's okay. Would you also be willing to share your fun experience doing pageants?

54:18

Stephanie Wong: Oh, yeah, absolutely. So that also started unexpectedly. I was part of the San Francisco community, born and raised, and Miss Chinatown was always a part of the Chinese New Year parade, and I've always looked up to it. So I decided I'd just give it a shot, and unexpectedly, I actually won that pageant and was introduced to this amazing community of very successful, talented women across technology, food, their own startups, and fashion. I mean, it's amazing. Through that, I've been able to get more involved in the local community here, and then I did a few other pageants as well.

54:53

Stephanie Wong: And what I would say is that it put me out there once again, putting me into situations that stretched me into areas of discomfort and growth, and I was able to really hone my skills: presenting on stage about causes that I cared about, connecting with people I normally wouldn't, and speaking about technology and what I'm doing to a wider audience. So again, if there are any opportunities for people out there to get involved in communities or meetup groups, or to present on a topic you just learned about to a small group or online, you would be surprised at what it might lead to. And one of the reasons why this experience was so pivotal for me was that I had been trying out for the Warriors dance team and other dance-related things and got rejected from all of them.

55:38

Stephanie Wong: And so I was pretty sad about that. But then it opened the door for me to do these pageants and it totally changed the trajectory of my personal and community oriented life outside of work. So I would say just take a chance.

55:51

Monica H. Kang: Thank you so much, Stephanie, for reminding us that there's really no one path, whether it's learning about AI or finding the voice to advocate for the things that you're learning. We're so grateful you're here. Thank you so much for dropping so much wisdom and insight on us. Two final questions: one, any final words of wisdom you want to share with innovators out there, no matter where they are in their chapter of learning about AI or innovation? And two, what's the best way we can stay in touch with you?

56:16

Stephanie Wong: I would say one of the most interesting lessons I've learned from my manager and my team over the years is this concept of the first pancake principle, which means that when you are putting things out into the world, and this is related to personal branding and your work in general, you are often worried that it's not good enough when you're learning a new skill set like AI or you're creating content or whatever it may be. And I've faced many bouts of imposter syndrome as I've pivoted multiple times in my career. But the first pancake principle teaches you that when you first create a pancake, or you cook a pancake, it's usually burnt, it's too hot, it doesn't end up great, but you just keep adding in more and you flip them and you eventually get good at it.

56:56

Stephanie Wong: And so it's this idea of constant iteration and not worrying about the results of the first try at something. So whether it's working with a new group, like, for me, entering engineering and product, or putting content out there for the first time in my role, just remember that people's attention spans are too short to remember your first try, or your first dozen tries. So just get yourself out there. Put together that application you've always been thinking about building, or a website where you can host your most exciting work, or join a new team that challenges you and is related to AI, and see what happens. I mentioned that I am really active on LinkedIn, so follow me there.

57:37

Stephanie Wong: My username is just stephrwong, and that's my primary place, and you can find my website, stephrwong.com, and from there you can reach out to me as well.

57:50

Monica H. Kang: And folks, you know the drill. We always put all of these links in the show notes, so we'll make sure we do a little shout-out. Definitely follow and check them out. That's actually how I connected with Stephanie; she graciously said yes to being on the show. And so, Stephanie, thank you so much for sharing all the wisdom. I will make sure I include all the other links you mentioned, and the Generative AI group on LinkedIn too, so we know where to follow. But thanks again for tuning in. We will be back again with another story, but thank you, Stephanie.

58:19

Stephanie Wong: Thank you. Monica H. Kang: Thank you, Stephanie, for inspiring us and reminding us that there are many ways to get into technology and AI. And it's not too late; you can do it too. This is your host, Monica Kang, and you've been listening to Dear Workplace. I appreciate you so much for joining us as we continue on the journey of learning AI and how to make sense of it, whether you're technical or not. And next week, I have a very special episode coming up. It's actually going to be my first, I guess, kind of mini live virtual blogging, if that's a way to talk about a podcast, because I am going to be hosting an event. By the time you listen to this, it will be, I guess, a few days before the event.

59:04

Monica H. Kang: Because on March 25, I am bringing 60-plus leaders and innovators in the community together to talk about AI and technology and how, as women, we can thrive. So this next episode, next Thursday, means you're going to get a sound bite and insight into what we learned, what we noticed, and what happened at the event. Curious to learn more? Go subscribe today and don't forget to join us next week at Dear Workplace. Thank you again, Alexander and Stephanie, for joining us, and thank you all for tuning in to another conversation. Have a wonderful day and I'll see you next week. Hey, thanks so much for tuning in to another episode of Dear Workplace by InnovatorsBox, with your host, Monica Kang, me. I hope you enjoyed today's conversation.

1:00:05

Monica H. Kang: Today's episode is possible thanks to a wonderful team who have dedicated their time to making sure you hear the quality research you heard today. I want to shout out audio engineering and production led by Sam Lemart, audio engineering assistance by Ravi Lad, website and marketing support by Kree Pandey, graphic support by Leah Orsini and Christine Eribal, original music by InnovatorsBox Studios, and executive producing, directing, writing, researching, and hosting by me, Monica Kang, founder and CEO of InnovatorsBox. Thank you so much. Your love, support, and sharing mean the world to us. Please send us any questions and thoughts you have, and what you want to learn more about next, and we'll dive right into it. Thank you and have a wonderful day. See you soon.
