
Demo Derby - How startups are disrupting the status quo with innovative data analytics, AI and modern app development

Startups need to move quickly and focus limited resources on areas where they can differentiate. In this fast-paced session, learn from startups and Google experts how you can leverage Google technologies to serve customers better and get to market more quickly. In a series of short demos, see how innovative startups and Google experts have used Google compute, storage, networking and AI technologies to ‘disrupt’ the status quo.

Video Transcript

[MUSIC PLAYING]

- Hey, folks, I'm Dave Elliott, and welcome to the TechCrunch Disrupt session, sponsored by Google Cloud. Super excited here to show you some really cool demos. So instead of just going through a bunch of slides, what we're going to do today is walk through some actual real live demos from startups like yourself, and from some of our AI engineers.

So as we think about what we wanted to show here, what a startup really cares about, what a founder really cares about, is focusing on how they can differentiate themselves-- how you can differentiate yourself-- and only really focus your development efforts in areas that you really are going to be different. So we're going to have a few demos here that kind of show you how you can get to focus on what matters to you, where you really can be differentiated.

So we're going to start with Andrea Le Vot from BlueZoo. They're a startup-- an insurtech startup-- that is building telematics for buildings. So, the ability to understand and measure people in buildings.

Then we're going to move on to Zack Akil and Dale Markowitz, who are two applied AI engineers at Google, who are going to show you how they can use-- how you can implement AI in your applications without having a PhD in machine learning, without being an expert. How you can quickly get to actual value using AI and ML in your apps. Then we're going to move on to Vidya-- I knew I was going to mess that name up-- who's going to show how you can improve developer productivity using serverless. And each one of these is going to be a quick, five to seven minute demo that is really going to be focused on showing you what was built and how it was built. So with that, let's go ahead and jump to Andrea with BlueZoo.

ANDREA LE VOT: I'm Andrea from BlueZoo, and I would like to share with you how partnering with Google helps us in our business. BlueZoo's objective is to revolutionize the world of liability insurance for commercial properties. In the same way as telematics has transformed the market for automotive insurance, BlueZoo is working to transform the market for commercial property insurance. We help insurers to characterize risk based on the usage of the building.

The concept of usage-based insurance first emerged in automotive insurance. Insurers wanted to distinguish between the more risky drivers and the less risky drivers, to set policy premiums accordingly. Collecting data about driving behavior, such as how many miles people drive and how fast they drive, gives insurers greater insight into driver risk than general information like age of owner or years of driving experience. Insurers want to pursue the same strategy for commercial property insurance.

An important factor for risk is occupancy. Risk engineers want to know if fire marshal limits are respected or venues are chronically overcrowded to evaluate risk. And this is where BlueZoo steps in.

Our technology records how people move about the space by measuring Wi-Fi signals. The solution begins with small hardware sensors that listen for Wi-Fi probes spontaneously emitted by our mobile phones. Our sensors compress and encrypt these probes and forward them to our cloud servers, to deliver analytics to risk analysts, underwriters, and actuaries. The sensors accurately measure occupancy and related metrics like visits, dwell time, and unique visitors, covering areas up to 300,000 square feet.

All our products respect consumer privacy and are certified GDPR compliant by a German auditor. BlueZoo's technology has been proven in other industries, including advertising, hospitality, and retail. Let's now have a look at our hotel case study.

In the hotel, the more people you have in common areas, the more risk you're underwriting. So BlueZoo puts sensors in the common areas of hotels, including restaurants, ballrooms, meeting rooms, fitness centers, and the lobby. By collecting data both over time and across multiple properties, insurers can get a very good idea of what normal occupancy looks like and identify outliers with a higher risk.

Let's have a look at the data in our analytics dashboard. This dashboard is designed for analysts and risk engineers who search for patterns and anomalies. Actuaries process historical data to create pricing models that are the starting point for underwriters, who use the data to set pricing for individual policyholders.

The web dashboard, built with Angular JavaScript, pulls data from our BigQuery data warehouse. On the left, you can see a pane with three sections vertically. For this demo, we'll focus on the first section.

Home summarizes current user status and recent sensor calibrations and campaigns. My user account, Janet Smith, is one of four users and her account has 43 sensors. Spaces are the physical locations where sensors are placed.

Spaces contain any combination of sensors and spaces. Each sensor is located in a single space. A group is a collection of sensors. One sensor can be in several groups.

Let's imagine we are analysts and we are looking for patterns of risk. Let's start with Marriott Downtown. Marriott Downtown has three floors.

As you can see, floor one has four spaces. Each has one sensor. These spaces consist of one Starbucks, two bars, and one front desk or lobby area.

Let's look at the historic counts of visitors to Bill's West Side Bar. For this bar, it seems that occupancy is pretty steady throughout the week, peaking at about 80 people on weekends, and lower during the weekdays. Now, let's compare this to other bars.

Let's go to groups, and then to the group bar and lounge, which is composed of bars of similar size and capacity across multiple Marriott hotels. Let's compare bar occupancy. Occupancy looks pretty steady for most bars during the week, but wow, look at this huge peak at Ditmars Bar & Grill. It's really busy there on Friday night. No other venue is such an outlier.

The question is, is this single day just an anomaly? Let's see. So in the week of August 1, the bar's very busy on Friday night. And when we have a look at the previous weeks in comparison, we can see that the occupancy is in fact very high on every Friday.

Apparently, Ditmars has a recurring problem of overcrowding, well beyond the fire marshal's imposed limit of 100 occupants. High occupancy correlates to good revenues, but also correlates to increased risk of slip, trip, and fall, and other liabilities. It may be time for the risk manager to speak to the hotel manager about Ditmars' crowding on Friday evenings.

With BlueZoo, it would be easy to keep track of progress to remediate the problem. By collecting new data for occupancy, insurers can better measure risk and distinguish more risky customers from less risky customers. Area and revenues of the business are no longer the primary criteria for estimating the risk posed by a customer and for setting premiums.

Just as for automobile telematics, insurers have new, objective data based on usage. This new source of data enables insurers to establish a baseline and identify outliers, as we just did. In my last slide, I would like to show you the Google technologies that make our products possible.

BlueZoo uses different types of sensors to measure risk. Our upcoming privacy-centric optical sensors will employ TPUs to run our machine learning models at the edge. BlueZoo uses BigQuery as our data warehouse to maintain high performance as we scale. BlueZoo uses Looker to provide analytics to our clients.

[MUSIC PLAYING]

DALE MARKOWITZ: Hey, my name is Dale, and I'm a machine learning engineer at Google Cloud, where it's my job to build applications using ML as quickly and efficiently as possible. So today, me and my colleague Zack are going to show you our favorite Google Cloud ML tools and how we use them to build apps really quickly. And we're going to start with my favorite tool, called the Video Intelligence API, which uses AI to analyze videos. And this is such a neat tool, because it combines lots of different technologies, from computer vision, to transcription, and a whole lot more. So let me start off by handing it over to Zack, who will show you what the tool can do.

ZACK AKIL: Thanks, Dale. So let me show you this demo that I built that uses the Video Intelligence API. What this is, this is a visualizer that shows off all of the public features of the API. I'm going to go over a couple of my favorite features, and then we can look at some practical examples.

So the first feature is label detection. So what this does, this tells you what's in the video and over what time segments it appears in the video. So let's look at this.

So I've got a breakdown of all the things that label detection finds. And let me click on one that's pretty unique. Here is a horse harness. And as we can see here, it's jumped to that part in the video where the horse harness is being shown. So this is a really nice, simple way of extracting information that's in a video, and it's done with label detection.

Something that's similar, but also really cool, is a feature built into the API called object tracking. Now this does something similar to label detection, but it will also tell you where specifically in the frame those things that it detects are. So this is really cool and useful if you want to, say, count the things that are appearing in the video. So for example, in traffic auditing systems, if you want to count all the cars and pedestrians that pass through the video, object tracking can do exactly that.

Another feature that I want to highlight is the speech transcription that's built right into the Video Intelligence API. So this will simply transcribe the video, and it will give you a breakdown of when each individual word is said, right down to the millisecond. So you can imagine some pretty cool sort of subtitle applications that can be built using this. And in fact, Dale will talk more about this feature later on.
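A minimal sketch of the kind of request Zack is describing, using the Node.js client library, might look like the following. The bucket path, file name, and language code are placeholders, and this is an illustration rather than the actual code behind the visualizer demo.

```javascript
// Hedged sketch: run label detection and speech transcription on a video in
// Cloud Storage, then print when each label and each spoken word occurs.
const video = require('@google-cloud/video-intelligence').v1;

async function annotateVideo() {
  const client = new video.VideoIntelligenceServiceClient();

  // annotateVideo starts a long-running operation; we wait for it to finish.
  const [operation] = await client.annotateVideo({
    inputUri: 'gs://my-demo-bucket/horse-video.mp4', // placeholder URI
    features: ['LABEL_DETECTION', 'SPEECH_TRANSCRIPTION'],
    videoContext: {
      speechTranscriptionConfig: {languageCode: 'en-US'},
    },
  });
  const [result] = await operation.promise();
  const annotations = result.annotationResults[0];

  // Each label comes with the time segments where it appears in the video.
  for (const label of annotations.segmentLabelAnnotations || []) {
    for (const segment of label.segments) {
      const start = (segment.segment.startTimeOffset || {}).seconds || 0;
      const end = (segment.segment.endTimeOffset || {}).seconds || 0;
      console.log(`${label.entity.description}: ${start}s - ${end}s`);
    }
  }

  // Each transcribed word comes with its own start time.
  for (const transcription of annotations.speechTranscriptions || []) {
    for (const alternative of transcription.alternatives || []) {
      for (const word of alternative.words || []) {
        const start = (word.startTime || {}).seconds || 0;
        console.log(`"${word.word}" spoken at ${start}s`);
      }
    }
  }
}

annotateVideo().catch(console.error);
```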

DALE MARKOWITZ: So Zack just showed you all the different things that the Video Intelligence API can do out of the box, but how do you actually combine these into something useful? Well, I'll show you one example. Recently, I realized that my family had collected hours and hours of family videos that were sitting in Google Drive somewhere that probably contain lots of precious moments of me, like, eating my first birthday cake or something.

But there was just no good way to sort through all of these videos to find those interesting moments. And this is a really perfect use for the Video Intelligence API. Take a data type that's difficult to work with, like video, this cumbersome, sort of unindexed thing, apply machine learning to it, tag what's going on in the videos using vision, transcribe what people are saying, and then you can take this metadata and build a searchable video archive. So let me show you mine.

OK. So here's the archive I built. I uploaded all of my videos to a Google Cloud Storage bucket. And then I connected that storage bucket to a Google Cloud function that calls the Video Intelligence API every time I upload a video, then analyzes it and dumps a bunch of tags into a search engine so that I can search my videos. So if I want to search for my first birthday cake, I can actually search first birthday cake.

And well, there it is. Not just my first birthday cake, but my brother's too. And this works because it's using computer vision to identify cake-- but also, in these videos, people are talking and narrating, and the transcripts of what they said have also been extracted, so that also makes it searchable. I can find my birthday cake, I can find-- let's say play set. Really, lots of different things from my youth I've been able to discover, thanks to the power of machine learning.
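The architecture Dale describes-- a storage bucket wired to a Cloud Function that annotates each upload and pushes tags into a search engine-- could be sketched roughly like this. The search-index call is a hypothetical stub, since the transcript doesn't say which search engine her archive actually uses.

```javascript
// Rough sketch of a Cloud Storage-triggered function that annotates each
// uploaded video and hands the labels and transcript to a search index.
const video = require('@google-cloud/video-intelligence').v1;
const client = new video.VideoIntelligenceServiceClient();

// Hypothetical placeholder: swap in Algolia, Elasticsearch, Firestore, etc.
async function saveToSearchIndex(doc) {
  console.log('Would index:', JSON.stringify(doc));
}

// Background function triggered when a video is finalized in the bucket.
exports.indexVideo = async (file) => {
  const gcsUri = `gs://${file.bucket}/${file.name}`;

  const [operation] = await client.annotateVideo({
    inputUri: gcsUri,
    features: ['LABEL_DETECTION', 'SPEECH_TRANSCRIPTION'],
    videoContext: {speechTranscriptionConfig: {languageCode: 'en-US'}},
  });
  const [result] = await operation.promise();
  const annotations = result.annotationResults[0];

  // Collect the visual tags ("birthday cake", "play set", ...) ...
  const labels = (annotations.segmentLabelAnnotations || []).map(
      (label) => label.entity.description);

  // ... and everything people said in the video, so both become searchable.
  const transcript = (annotations.speechTranscriptions || [])
      .map((t) => (t.alternatives && t.alternatives[0] ?
          t.alternatives[0].transcript : ''))
      .join(' ');

  await saveToSearchIndex({video: gcsUri, labels, transcript});
};
```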

So, one great use for Video Intelligence is making a searchable archive. But what about if you want to not just search videos, but actually edit them and massage them and change them using machine learning? So now I'm going to hand it back to Zack, who's going to talk about video editing with ML.

ZACK AKIL: Yes. Let's look at a demo that I've built that will do some automated video editing using the Video Intelligence API. And the story behind this demo is, I have a friend who's very afraid of snakes. And the idea is, if we could build an automatic video editing pipeline that would automatically remove, say, pictures of snakes that appear throughout a movie, so that they didn't have to turn around quickly or get frightened. So let's look at it.

What we're looking at here is the Google Cloud Platform, and more specifically, we're looking at Cloud Storage in Google Cloud Platform. And I've created a place to upload videos. So what I'm going to do is I'm going to upload a video. And for this example, rather than it being a video including snakes, it's going to be including swans.

So here is a video. And let's say I have a fear of swans while I'm watching this video. I think OK, this is cool, nice and relaxing plane. Cool banjo.

And then, oh my god, look at that. Scary swan. Don't want that to happen.

So what's going to happen with this pipeline that I built? Well, the first thing that's going to happen is this Cloud Function is going to fire. So any time a new file is added to this Bucket, it will automatically trigger this Cloud Function.

And within this Cloud Function, it has a very small amount of code that simply calls the Video Intelligence API and runs label detection. This is the same feature we showed earlier. It's going to run label detection, and then it's going to output a JSON file with all the labels it detects in that video, into another bucket. So let's look and see if it's already done that.

So we're going to go back to our Cloud storage and we're going to look at the next bucket. And here it is. So this is a JSON file with all the time segments of where it detects all the different things that it can detect in the API. So swans are included, snakes, along with thousands of other labels. So it's a full JSON file for the entire video.
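The "very small amount of code" in that first function isn't shown on screen, but a plausible sketch is a background function that kicks off label detection and uses the API's outputUri option so the annotation JSON lands in the second bucket, exactly as described. Bucket names here are made up for illustration.

```javascript
// Sketch of the first pipeline step: triggered by an upload, it asks the
// Video Intelligence API for label detection and has the API write the
// resulting JSON into a second bucket, which triggers the next function.
const video = require('@google-cloud/video-intelligence').v1;
const client = new video.VideoIntelligenceServiceClient();

exports.labelUploadedVideo = async (file) => {
  const inputUri = `gs://${file.bucket}/${file.name}`;
  const outputUri = `gs://video-label-results/${file.name}.json`; // placeholder bucket

  // The long-running annotation continues on Google's side after the
  // function returns; the JSON is written to outputUri when it finishes.
  await client.annotateVideo({
    inputUri,
    outputUri,
    features: ['LABEL_DETECTION'],
  });
};
```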

And then, because it's dropped into another bucket, I can trigger another Cloud Function. And this one uses another API that Google has, and it's called the Transcoder API. And what this API is, it is a super powerful video transcoding service that allows you to do things like crop videos, inject overlays, add watermarks, change audio. So anything you could potentially do with the sort of post-processing of a video, you can do with this API at scale.

So I won't look at the code for this, because it's slightly more complicated than the previous one. But what this is going to do is it's going to pick up that JSON file with all of the labels. It's going to scan through, and try to find the time segments where a swan appears. And then, it's going to inject a full screen overlay that's going to hide the scary swans from our video. And then, it's going to generate a brand new video that's safe to watch.
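Zack skips the code for this second function, but the first half of it-- reading the annotation JSON and collecting the time segments where a swan appears-- could be sketched like this. The Transcoder API job that injects the overlay is only noted in a comment, since its exact configuration isn't described in the talk, and the field names assume the camelCase JSON the API writes.

```javascript
// Sketch: triggered when the annotation JSON lands in the bucket, find the
// time segments labeled "swan" that the overlay will need to cover.
const {Storage} = require('@google-cloud/storage');
const storage = new Storage();

// Durations in the JSON look like "12.500s"; parseFloat drops the trailing "s".
const toSeconds = (offset) => parseFloat(offset || '0');

exports.planSwanOverlays = async (file) => {
  const [contents] = await storage.bucket(file.bucket).file(file.name).download();
  const annotations = JSON.parse(contents.toString()).annotationResults[0];

  const swanSegments = [];
  for (const label of annotations.segmentLabelAnnotations || []) {
    if (label.entity.description.toLowerCase() !== 'swan') continue;
    for (const s of label.segments) {
      swanSegments.push({
        start: toSeconds(s.segment.startTimeOffset),
        end: toSeconds(s.segment.endTimeOffset),
      });
    }
  }

  console.log('Segments to hide with an overlay:', swanSegments);
  // Next (not shown): create a Transcoder API job for the original video with
  // a full-screen image overlay animated to appear over each of these segments.
};
```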

So let's look at the final Bucket where it should output our new, safe-to-watch video. And hopefully we should see-- yes, we have a new video that was just created. We'll have a look at this.

So I'm watching this video again, with my fear of swans. I see a nice, relaxing plane. OK, all seems normal. And then, where there was a swan, it's now been completely hidden from view. And this was all done in real time, automatically, using the Video Intelligence API and the transcoder API, all tied together with Cloud Functions and Storage.

So this could all operate at scale. I can upload as many videos as I want to this, and it will do all of this in parallel. So yeah, that is the application. And I'll hand it over to Dale to show you what she's made. Thank you.

DALE MARKOWITZ: So Zack just showed you an example of how you can not just search through videos, but actually use the Video Intelligence API feature that tells you at what point in time things are going on in the video to do this sort of automated editing-- like if I see a swan at this point, I'm going to somehow jump over it in the video.

We can begin to build even more sophisticated apps if we combine Google Cloud machine learning tools. So what happens if we combine the transcription feature of the Video Intelligence API with the Google Translation API and the Google Text-to-Speech API, which is a tool that takes in a string of text and produces human-like speech?

If you think about these tools together in your mind, imagine taking a YouTube video, using the Video Intelligence API to extract the transcripts of what people are saying and when, then translate the output, and then use text-to-speech to speak the translated output in a different language. You can think about this altogether as being sort of like automatic-- automated subbing-- subtitles. And then, automatic dubbing.
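As a concrete illustration of the second half of that idea, a hedged sketch that translates an already-extracted transcript and synthesizes speech in another language might look like this. The target language, voice settings, and output file are placeholders, and lining the audio up with the original word timings is the part Dale's real pipeline adds on top.

```javascript
// Sketch: translate a transcript string, then turn the translation into speech.
const {Translate} = require('@google-cloud/translate').v2;
const textToSpeech = require('@google-cloud/text-to-speech');
const fs = require('fs');

async function dubTranscript(transcript) {
  // 1. Translate the transcript (Spanish here, as an example target).
  const translateClient = new Translate();
  const [translated] = await translateClient.translate(transcript, 'es');

  // 2. Synthesize the translated text as audio.
  const ttsClient = new textToSpeech.TextToSpeechClient();
  const [response] = await ttsClient.synthesizeSpeech({
    input: {text: translated},
    voice: {languageCode: 'es-ES', ssmlGender: 'FEMALE'},
    audioConfig: {audioEncoding: 'MP3'},
  });

  // 3. Save the audio; a real dubbing pipeline would align it with the
  //    per-word timestamps from the Video Intelligence transcription.
  fs.writeFileSync('dubbed-es.mp3', response.audioContent, 'binary');
}

dubTranscript("I have to make a New Year's resolution...").catch(console.error);
```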

I decided to tie these APIs together, using an architecture that was really similar to Zack's actually, to translate some of my own YouTube videos into languages I didn't speak. I'll let you decide whether you think that was successful.

[VIDEO PLAYBACK]

[SPEAKING FINNISH]

- I have to make a New Year's resolution to eat less treats and play more sports.

[SPEAKING SPANISH]

- There have been companies, non-profit organizations, and educational institutes--

- So I always thought it would make sense to stick something simpler, like [INAUDIBLE] engineering.

[SPEAKING RUSSIAN]

[END PLAYBACK]

DALE MARKOWITZ: So there you go. Here's a bunch of ways that you can tie together machine learning tools. By the way, no data science or ML expertise or background required. This stuff is all super developer friendly. I hope you enjoyed these demos, and I'll leave it to our next speaker.

VIDYA NAGARAJAN: I'm Vidya Nagarajan, and I'm the Product Lead for serverless on Google Cloud Platform. As an early stage company, your most precious resource is time. You probably need to focus your development efforts on where you are most differentiated, not your infrastructure. You need to get to market fast.

Google Cloud Functions, one of our serverless compute offerings, can help you do both. Today, I'll walk you through a demo of Google Cloud Functions, where you can develop quickly and securely by writing and running small code snippets that respond to events. You get pay-per-use, auto scaling, and Google manages your infrastructure so that you can focus on your core business.

Cloud Functions is the glue connecting Cloud Services. On the left side, you can see various events that happen in the world. And on the right side, you can see the services you can call into.

Cloud Functions isn't just about connecting GCP services. You can post to a function endpoint from basically anywhere, as you will see in this demo, and respond by calling another endpoint outside Google Cloud. The demo will walk you through a Cloud Function that performs lightweight ETL on a customer's checkout operation via Stripe and updates a dashboard on Google Sheets for your team's business analyst. And you can use Google Cloud Secrets Manager to securely store your secrets, such as your Stripe API key and your endpoint secret.

In this demo, Rob is the business analyst at a merchandising company selling different types of products, and he has access to a revenue dashboard that tracks sales revenue and costs over time. The company has recently expanded sales from the west coast of the United States to the rest of the country. He is looking to spot trends and activities, and gather actionable insights in real time.

Let's look at the building blocks for this operation. First, let's create a Cloud Function from the Google console. We'll call our function, Process Stripe Checkout. Although we're using an unauthenticated function, you'll see later how we can still verify that Stripe initiated the request.

Minimum instances is a new feature that can dramatically improve the performance of your application, minimizing your cold starts. By specifying a minimum number of instances of your application to keep online during periods of low demand, you can eliminate startup latency. For this demo, let's keep one instance warm.

We'll use a runtime environment variable for the Google Sheets ID, so that we don't have to hard-code it in our source code. We'll also use Secrets Manager for storing secrets. We have two secrets for this demo, the Stripe API key, and the webhook secret.

We're going to choose latest for the webhook secret. This means that once we get the secret from Stripe, we can update it in Secrets Manager without having to redeploy our function. Next, you can specify the runtime. For this demo, we'll use Node.js 14.

And let's update the entry point for the name of the event handler that will execute whenever this Cloud Function is invoked. These first few lines are where we pull in the Stripe Keys from Secrets Manager and the Google Sheets ID from the runtime environment variable. Because our Cloud Function is unauthenticated, we'll verify that the request came from Stripe.

Since our custom dashboard requires more data than what Stripe provides by default, or as a part of Getting-- of the Getting Started example, we'll make an additional call out to Stripe. And now we can perform some lightweight ETL. For example, grab just the salient info from the checkout object, and format it as appropriate. And lastly, insert the new row into our Google Sheet.
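The function's source is only shown briefly in the video, so the following is a hedged reconstruction of the steps being described rather than the demo's actual code: read the secrets, verify the Stripe signature, fetch the line items, do the lightweight ETL, and append a row to the sheet. The project ID, secret names, and sheet range are placeholders, and the sheet is assumed to be shared with the function's service account.

```javascript
// Sketch of an HTTP Cloud Function (Node.js 14) handling a Stripe checkout webhook.
const {SecretManagerServiceClient} = require('@google-cloud/secret-manager');
const {google} = require('googleapis');
const Stripe = require('stripe');

const secrets = new SecretManagerServiceClient();
const SHEET_ID = process.env.SHEET_ID; // runtime environment variable

async function getSecret(name) {
  const [version] = await secrets.accessSecretVersion({name});
  return version.payload.data.toString('utf8');
}

exports.processStripeCheckout = async (req, res) => {
  // Placeholder secret paths; "latest" lets us rotate without redeploying.
  const stripeKey = await getSecret(
      'projects/my-project/secrets/stripe-api-key/versions/latest');
  const webhookSecret = await getSecret(
      'projects/my-project/secrets/stripe-webhook-secret/versions/latest');
  const stripe = Stripe(stripeKey);

  // The function is unauthenticated, so verify the request really came from Stripe.
  let event;
  try {
    event = stripe.webhooks.constructEvent(
        req.rawBody, req.headers['stripe-signature'], webhookSecret);
  } catch (err) {
    res.status(400).send(`Signature verification failed: ${err.message}`);
    return;
  }

  if (event.type === 'checkout.session.completed') {
    const session = event.data.object;

    // The webhook payload alone isn't enough for the dashboard, so make an
    // additional call out to Stripe for the line items.
    const lineItems = await stripe.checkout.sessions.listLineItems(session.id);

    // Lightweight ETL: keep just the salient fields, formatted for the sheet.
    const row = [
      new Date(session.created * 1000).toISOString(),
      session.customer_details ? session.customer_details.email : '',
      lineItems.data.map((item) => item.description).join(', '),
      session.amount_total / 100,
      session.currency,
    ];

    // Append the row to the Google Sheet the dashboard reads from.
    const auth = await google.auth.getClient({
      scopes: ['https://www.googleapis.com/auth/spreadsheets'],
    });
    const sheets = google.sheets({version: 'v4', auth});
    await sheets.spreadsheets.values.append({
      spreadsheetId: SHEET_ID,
      range: 'Sales!A1', // placeholder sheet and range
      valueInputOption: 'USER_ENTERED',
      requestBody: {values: [row]},
    });
  }

  res.status(200).send('ok');
};
```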

Now let's deploy our function. Now that the function is ready, let's copy the endpoint URL for Stripe to call. Now, we'll switch over to the Stripe dashboard to create the webhook that will invoke our Cloud Function whenever a checkout occurs. Let's add our function's endpoint and configure this webhook to fire when a successful checkout completed event occurs.

Lastly, we'll copy the secret to verify that the request came from Stripe. Back in Secrets Manager in Google Cloud, we will create a new version for the webhook secret. We'll use Latest, so we do not have to redeploy our function or make any code changes.

Now, let's see this in action. A customer, Jane in Tennessee, is excited that the company has opened sales in her area. She visits the website. She makes a quick cart purchase, transacting online using her credit card.

Rob, the business analyst, can now see the Google Sheet update in real time, because minimum instances in Cloud Functions was set to 1. There is already a warm instance ready to process the request. Rob sees a small blue dot in Tennessee to indicate the company's first sale there. And he also sees the revenue per product versus units change. And lastly, there is also a spike in revenue by date.

As more sales from across the United States come in, Rob can see the Google Sheet dashboard update in real time. He confidently reports to his leadership that the planned expansion is a success. And indeed, it is.

Thank you. I hope you found this demo helpful. And I look forward to having you try out the new features we released with Google Cloud Functions to accelerate the time to value from applications you create and manage. Just write your code snippets, and let Google Cloud take care of the rest for you. Thank you.

DAVE ELLIOT: OK, very cool. Very cool set of demos, demonstrating how you can quickly get to market-- build some-- build some demos, build some products. Really, really neat.

So, we have the speakers here today. So thank you, Andrea, Vidya, Zack, and Dale. I'm just going to go ahead.

We have about 20 minutes left, I want to jump in and ask questions so you can share some insights. We are live, by the way, for the Disrupt audience. So if we flub things, you know that-- that it's real. It's real, real action here.

All right, so let me start with you, Andrea. So BlueZoo-- very cool, very neat idea. My question as I look at this-- and I know I've worked with your company a little bit-- is why now? Why telematics for buildings, why occupancy measurement, why is that-- why is that possible now?

ANDREA LE VOT: Well, thank you, Dave. Well, there are principally three things that I would like to highlight. So first of all, there is telematics for automotive. It has proven that people want data, they want to-- to base their premiums accordingly-- according to the data.

So on the one hand, the insurers want to really know the [INAUDIBLE]. And on the other hand, you also have the people who are insured who want to pay for what they are-- what they are really using, and what they are really doing. So that's one thing that all comes together with the-- I think, with the will of a lot of people to make informed decisions. So to really base everything on the usage, and it also goes together with the-- hand-in-hand with the digitalization of the society. That's the first one.

So then there is a second point. I would say that when we talk to insurers, well, they told us, we tried a lot, we tried spyware, we tried this. But this is not giving us the good information that we are really looking for.

Already, spyware will-- you know, these little hidden things that are in the apps that know where you live, that know where you work, and everything. So there is less and less acceptance for this. And it's also going-- just going down, because well, Android and iOS, they are killing it with the new pop-ups that come. Are you really OK that we share your data with everybody? So less and less people are OK with it, and also less and less companies are OK with that.

DAVE ELLIOT: That's right. And I think I forgot when I introduced you, you are actually the Chief Data Protection Officer for BlueZoo.

ANDREA LE VOT: Yes, exactly. Exactly. A lot of our clients-- and the people at the insurers we are talking to-- for them, it's very important to keep this privacy, to really protect consumer privacy. And they don't need to know all this information; they need to know it at the location, at the building they are insuring.

DAVE ELLIOT: Yep, OK.

ANDREA LE VOT: So that's the second point. And then there is a third point, that is, what technology makes it possible now. So before, we did not have these TPUs, this really high [INAUDIBLE] TPUs, that allow us to process the data actually, really on the device and to employ these machine learning models that we are now using for our upcoming optical sensors here.

DAVE ELLIOT: Sure, sure. So market and market need and then this push on privacy, and then the technology to be able to do things like inference at the edge. That's actually a good segue to Zack and Dale.

Let me ask you-- the next question-- to you, Dale. So, when do you need, if you're doing this-- we talked about inference at the edge-- when do you need a data science team? Or, when can you just use your own software development team? Like, what's that-- where's that fine line?

DALE MARKOWITZ: Yeah, that's a great question. As someone who does a little bit of both data science and software development, I really feel like most of the time when I'm building these apps, I really have my software developer hat on. And to me, it's sort of a lot like the way that you would decide, as a software engineer, whether to build something yourself or to use a software library.

Like for example, let's say I wanted to implement a server. That would be very complicated and require a lot of knowledge about networking. But of course,

[INTERPOSING VOICES]

DAVE ELLIOT: Yeah exactly.

DALE MARKOWITZ: You'd just use something off the shelf. So I think, when you're trying to do an application that's, like, so common that it's commoditized-- something like translation-- it always makes sense to-- there's probably someone that implements it better than you ever could, unless you had a lot of budget and a lot of TPUs. So in that case, I would say, use an off-the-shelf service, and when you want to do something really niche and custom, then maybe you look into the data scientist path.

DAVE ELLIOT: That makes sense. That makes sense. So, don't reinvent the wheel.

So, I guess that's a good question to maybe throw over to Zack. So Zack, when you built these demos-- the really cool demos-- really loved the snake, you know, metaphor with the swans. There might be some people who are terrified of birds, so that actually could just work right there.

When you built that, what's the thing that's most overlooked by software developers when you build these types of demos? Now, what's the really complicated or difficult or maybe the easiest thing that's most overlooked?

ZACK AKIL: It's funny you ask the question, talking about that demo, because I see myself-- my experience is in full stack development, and then I got into machine learning and started building these end to end demos. And always-- what's kind of fascinating with machine learning being the hot thing to do now, it's cutting edge technology, it still comes down to: front ends just take so long to build. And that's why the swan demo-- that has no front end to it.

So building that end to end actually only takes, like, 20 minutes. And if you have the code to copy and paste in, it takes like 5 or 10 minutes to set up the services. But the Visualizer app, that was like a week or two of development to get the front end. So it still comes down to the user interaction element and the front end element. And that's why I've started to build more demos that kind of have a prebuilt front end, using things like Apps Script that combine with, like, G Suite, where you can basically use Excel sheets and spreadsheets to be the front end for you. So it all comes down to front end taking the longest in these demos.

DALE MARKOWITZ: You can give a nod to Vidya for making the back end easy for you with Cloud Functions.

DAVE ELLIOT: That's actually a great segue to you, Vidya. So let's talk about Cloud Functions. I mean, what are some of the top use cases for Functions? I mean, I can think of a lot, but what are some of the actual real-world use cases where customers are seeing value here?

VIDYA NAGARAJAN: Yeah, absolutely. So we hear from developers and customers who leverage Cloud Functions for a variety of use cases. And since I love to talk about top 5 and top 10 lists, these are my top 5 popular use cases that I keep hearing about.

One is, as you saw in the demo, connecting and extending the cloud using an event driven architecture is one of the most popular use cases. So you can extend this not just for glue code use cases, with third party services and webhooks, but also with GCP services or your own custom services. The second popular use case we keep hearing about is asynchronous large data processing pipelines or inline data transformation. An example is a roadside camera captures the images, and then those images are processed to extract the license plate numbers. That's a popular example.

The third example, or use case, is related to CloudOps automation. So here it is about programmatically provisioning, modifying, deleting cloud resources. And these operations can either be scheduled or they can be event driven.

And the fourth use case that comes up is related to business workflows. So any kind of managed execution of business processes. An example is a field technician who uploads a code for a job from his mobile device, and that in turn triggers a set of warehouse inventory checks.

And the last, but not the least popular use case is about implementing APIs serving front end and back end systems. And this is, like, very common, right? A visitor goes to a website, you have a chat bot, and the chat bot then responds, right? And so a lot of these are actually powered by serverless applications. And, of course, I'm hoping that as our viewers continue to see these cool demos, I will also learn from them about more and more popular use cases of how they leverage Cloud Functions in building their own business applications as well.

DAVE ELLIOT: Yeah, that's cool. That's cool. You almost can't show a demo now on Google Cloud without ending up using it in something, just because it can do so much. So it's so flexible.

So, let's see. That's a good question. A good transition, maybe over-- back to BlueZoo, because Vidya mentioned the ability to create a pipeline to look at license plates of cars, for instance, as an example.

What do we think-- with your innovations in and around computer vision and edge inference and anonymously capturing and measuring people, what does BlueZoo see as a future in and around this space? What's coming 2 to 5, 7, 10 years down the road? People love hearing about the future, especially at a conference like this.

ANDREA LE VOT: So, I mean, what are the future-- In fact, what are the future or what's the future of BlueZoo in that?

DAVE ELLIOT: Yeah, yeah.

ANDREA LE VOT: So, yeah. So for us it's-- well, first of all, we are growing fast. And the Google Cloud tools allow us to scale. So for us that's very important.

DAVE ELLIOT: Oh, I can hear that.

ANDREA LE VOT: Yes. The future is bright. So we want to scale, and for that we need powerful tools. And also we have these new machine learning tools, we have these new TPUs that we are using for our upcoming optical sensors.

So that was just not possible before. And this allows us to create better products. In fact, so we started with Wi-Fi, but now we are going on with our new optical sensors that use machine learning for two different features. So for one, we use Vertex AI to train the models.

And another thing is that it also enables us to really process at the edge. And that comes back, you know, to data privacy, to data protection also. So there it comes back to, we don't have to upload any private data or any private information. So we can really process everything on the edge, in full respect of privacy. So that's very important.

And apart from this sensor path-- so when we look a little bit further into the future-- at the moment we use it more and more for the processing and really the execution of our models. And in the future, we want to go further and also use machine learning to help our clients in analytics. We have these dashboards that we saw before, based on Looker. It really allows us to give risk analysts and other clients the possibility to visualize and to extract important data, really in a few clicks.

And we believe that in the future, we can do this just as [INAUDIBLE]. So we can take information, and we can do part of this analytics automatically. And like this, better serve our clients. And that'll be really fun.

DAVE ELLIOT: I really like the messaging on your website where you say-- you measure, which is really kind of the first step, which is what you're doing today. But you measure, then you can predict, and then prevent--

ANDREA LE VOT: Prevents, yes. Exactly.

DAVE ELLIOT: --risks. So measure, predict, prevent. And that's something we've seen in a lot of areas. If you can't measure, you're obviously not going to ultimately be able to take the action to do whatever it is, prevent injuries or overcrowding or whatever it is. Measure, predict, and prevent. And I think that's a really neat-- a really neat future.

To transition it a little bit, so if you're a developer out there watching this and you think this is great, what's some advice on how to get started? Like, what's the-- let's go to Vidya. Let's go to you. What are some best practices for a developer out there who's thinking, OK, this sounds great. Where do I go? What do I do? How do I get started?

VIDYA NAGARAJAN: Yeah, absolutely. And, again, going by my tradition of maintaining top 5 or top 10 lists. So I have--

DAVE ELLIOT: We gotta get your top five movies after this.

VIDYA NAGARAJAN: So here are my 10 recommended best practices that developers might like to consider while creating, as well as managing, your business workloads-- serverless workloads. Number one is focus on the application business logic. So functions scale seamlessly with incoming traffic. You don't have to worry about managing infrastructure; that's taken care of for you. All logs go into one central location in Cloud Logging, and from there you can take care of cloud monitoring, as well as error reporting.

Number two, keep your functions small. Really follow the principle of single responsibility for functions. A serverless function should perform a specific function. And functions are really suited for glue code use cases, as we saw in the demo. And think about small snippets of code that process a single file or a Pub/Sub message, or even perform a CloudOps activity like labeling a virtual machine.

Number three, use event streams to manage your interactions between microservices so that you can actually reduce complexity. So products like Eventarc on Google Cloud Platform actually provide that central hub of events, which take care of asynchronous and reliable delivery of events. Number four, use workflows on Google Cloud Platform, which actually combine the power of Google Cloud APIs, Cloud Functions, and more so that you can actually control the order in which you combine these different services and create very powerful, flexible applications that are stateful, durable, observable, and long running.

Number five, leverage GCP's client libraries to focus on what you are building, instead of hooking up services. For example, BigQuery, GCS, Pub/Sub-- they all come with their own native client libraries. So you don't have to use REST APIs to go and hook up and build all this yourself.

The next one that is very dear to my heart is about using the principle of least privilege. So by providing fine-grained access on who can invoke or edit a function, or what type of resources your function has access to, you can truly control the execution, as well as development, of your function. And next, set up a streamlined development model with your CI/CD practices or pipeline. So start with developing Cloud Functions in frameworks like the Functions Framework locally before you deploy to the cloud. We provide integrations with Terraform, Firebase, serverless.com.

You can also take advantage of the Cloud Functions API so that you can build your own CI/CD pipelines, keeping it all in mind. And lastly, I want to talk about focusing on monitoring your services and not servers. So focus on your application errors and latency. We have a lot of third party integrations with other industry leading monitoring vendors such as New Relic, Splunk, Datadog. So you can actually leverage your existing monitoring services to manage serverless as well.

And lastly, take advantage of a lot of the new functions-- features that have been released with [INAUDIBLE] and integrations with Secrets Manager, because this can truly help you manage your latency-sensitive workloads as you desire. So that was a long list, Dave, but I thought those were my top items.

DAVE ELLIOT: Obviously you've prepared for that. That's one of those questions that clearly you were prepared for. So that was a lot-- that was a lot of information. Have you done a blog on this, or is there a how-to guide for people who are watching this frantically? If it were me, I'd be frantically taking notes. Is there a location, maybe, where we can deliver that content?

VIDYA NAGARAJAN: Yes. We do have our Help Center, where we actually talk about best practices more at the engineering level, at the developer level, which goes deep into what specific practices you need to follow at the code level. But I think it's a good idea. We're working on figuring out how we can actually package all this together so that we can actually share this at large as best practices. And that's probably what I'll work with you on, Dave.

DAVE ELLIOT: Yeah, that sounds good. You should work with some folks in Developer Relations. That'd be awesome. So actually-- so I'm gonna turn this to Dale and ask for her top 10 list. Um, I'm making this up. No. Yeah.

DALE MARKOWITZ: [INAUDIBLE] have a list prepared.

DAVE ELLIOT: Yeah. What's your top 10 movies of the year? No. Dale, what should-- related to that, what should someone look out for when you're using machine learning that you might not think of if you're used to building traditional software?

DALE MARKOWITZ: I think there's sort of, like, a nice symmetry to this question, because when I think about what I've been considering and paying attention to as a software developer, a lot of the things I used to think about-- thanks to Vidya and serverless architectures-- I don't think about anymore. Like, how many servers do I have, and what's the cost of scaling them up and down? So serverless takes away a lot of the things I used to be concerned about, but now machine learning introduces a whole new set of problems that developers have to think through slightly differently.

And the main way is that when you write typical code, as a developer, if you haven't made a mistake and introduced any bugs, it should always work 100% of the time. But even if you have an extremely accurate machine learning model, it's just not going to be perfect, fundamentally. For example, translations are never going to be 100% accurate; neither are transcriptions. If you talk to your Google Home, you know it's not 100% accurate. So you as the developer have to understand that that's gonna happen and have to design your applications so that's OK. So allow users to fix errors in transcription or translation, and then iteratively improve.

DAVE ELLIOT: Makes sense. That makes sense. So that's number one, let's hear your other nine. No, I'm joking. Maybe-- maybe not.

OK, good. Well, we are running out of time. Let's see. I'll direct one more back to you, Zack.

Zack, so when people are learning these APIs, these machine learning services, where should they get started? This is a similar question to what I asked Vidya before. Where should they get started? How do they get started? What's one of the first steps?

ZACK AKIL: First step is-- because I like to specialize in video and vision applications. So whichever area of the machine learning domain you might be into, whether it's natural language or video and vision or tabular data, I would say go out and discover the tools and spend 10 minutes just figuring out what they're capable of. I think we're reaching this stage now where you don't necessarily need to know fully how they work. We can trust our developer engineer minds that if we want to spend a day trying to get something to work, we can get it to work.

But just knowing what it can do is more important than knowing how to use it. That's my kind of approach now. Like, once you know what the tool is capable of, that's the most useful bit of knowledge you can have as a developer, especially because there's so many very specific and unique tools that do different things. Learning how to use all of them is going to be impossible. Just learning what they're capable of is the most useful bit of knowledge.

So hunting out the documentation, like, the one pager of different APIs, I would highly recommend people look at the Google Cloud Vision API documentation because built into that documentation there's a live demo that you can just drag and drop any image into, and you can see all the data you get back from it. And in fact, I use that live demo constantly for prototyping. I'll say, could this thing solve my application problem? I'll drag an image into it and just see what it spits out.

For example, recently I was building something to do with art in galleries, gallery analysis, so given a photo of a gallery discover what type of art is in it. And I built a model and it looked pretty cool, and I was, like, oh, I wonder if the Vision API can do anything for me in this? And I dragged and dropped the image of a gallery into it, and it was able to do object detection and detect all the pieces of art just with the Vision API out of the box. So I was, like, oh, that could be-- that's insanely useful. I could train any model.

I can extract out the art nice and cleanly out of the image just using this API. And I wouldn't have known it if I hadn't just thrown my data into it.
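For anyone who wants to reproduce that experiment, a small sketch with the Node.js Vision client looks roughly like this; the image path is a placeholder, and whether the detected objects are actually artworks will of course depend on the photo.

```javascript
// Sketch: run object localization on a gallery photo and print each detected
// object with its normalized bounding box, which is what lets you crop it out.
const vision = require('@google-cloud/vision');

async function findObjects(imagePath) {
  const client = new vision.ImageAnnotatorClient();
  const [result] = await client.objectLocalization(imagePath);

  for (const object of result.localizedObjectAnnotations || []) {
    const box = object.boundingPoly.normalizedVertices
        .map((v) => `(${(v.x || 0).toFixed(2)}, ${(v.y || 0).toFixed(2)})`)
        .join(' ');
    console.log(`${object.name} (score ${object.score.toFixed(2)}): ${box}`);
  }
}

findObjects('./gallery-photo.jpg').catch(console.error);
```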

DAVE ELLIOT: It's funny because that's a little bit of a twist on-- what I always say is, you know, put fingers on keyboard, because you could read about things for a long period of time, but until you actually go and use it, put fingers on keyboard, then you can start to really understand what it can and can't do. So it's a little bit of a twist on what I normally say in that you're saying, hey, make sure you understand the capabilities. There's an analyst I used to work with years ago who said, you know, it's a simple matter of programming. And everything is a simple matter of programming.

You could recreate, you could do anything. It's just a matter of how much time you have and how many resources you put against it. Simple matter of programming. That's good. And we are just about out of time. I'm gonna give everybody about 30 seconds to wrap up. So why don't you go ahead and go first, Vidya.

VIDYA NAGARAJAN: No, I think-- thank you so much. This was an amazing session to actually connect with our viewers. And thank you, Dave, for a really wonderful moderate-- being a great moderator.

DAVE ELLIOT: That's fine.

VIDYA NAGARAJAN: And I'm really hoping our viewers will get to take advantage of some of the tools that you actually got to see in the session today. And we'd love to hear from you. And thank you.

DAVE ELLIOT: Awesome, awesome. How about you, Andrea? How would you like to close it out? Any last thoughts for folks who were watching?

ANDREA LE VOT: Well, I'm just amazed about all the new possibilities that we have. And I think that we are just at the beginning, and we will have a lot more fun with all these new tools. And I'm really looking forward to this. As a data lover, it's just perfect.

DAVE ELLIOT: Cool. Cool. How about you, Zack? Anything for the viewers?

ZACK AKIL: Yeah. Be sure to check out the Vision API docs to get a live demo. And also, I've got loads of demos, including the one I showed in my video on my GitHub.

DAVE ELLIOT: Yeah, what's your GitHub?

ZACK AKIL: /ZackAkil.

DAVE ELLIOT: Cool. Cool. And then, Dale, on AI. Do you have any closing thoughts?

DALE MARKOWITZ: Like and subscribe. Just kidding. This isn't YouTube. My feedback is that all of these tools, as a developer, they make building this stuff a lot more fun. So if you had a bad experience in the past, a lot of the friction has been removed. It's gonna be a lot more fun to develop.

DAVE ELLIOT: Very neat. Very neat. Hence, the applied portion of your title, Applied AI. Very neat. All right, great.

Well, thanks so much for attending. Hopefully you learned a little bit of something and maybe had a little bit of fun seeing some neat demos, and we look forward to hearing back from folks. I did want to direct people to our virtual booth, where we will have folks staffing it, answering questions and interacting directly. Thanks, everyone.