Today on the podcast, Jon Foust is back with Mark Mirchandani as we talk about SLOs and the importance of measuring service reliability with Alex Bramley. Alex and his coworkers on the Google SRE team help customers run their services optimally on Google Cloud. They collaborate with each client, weighing business and user needs to develop a plan that is affordable, efficient, and delivers the highest reliability for users. Recently, they’ve been working to automate functions such as outage detection, so that Google and the customer can work together quickly to get everything running smoothly again.
Later, Alex describes the steps developers go through at his workshop, The Art of SLOs, which was designed to help companies measure and improve reliability. At this workshop, attendees are encouraged to set SLO targets and error budgets. They are given theoretical reliability problems to solve, allowing them to practice without the added pressure of messy, real-world problems. The Art of SLOs helps developers understand which measurements are beneficial and why, and the best way to implement projects that can take those measurements accurately. Alex has made the workshop materials freely available online!
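As a back-of-the-envelope illustration of how an SLO target implies an error budget (the 99.9% target and 30-day window below are hypothetical examples, not figures from the episode), a short sketch:

```python
# Hypothetical error-budget arithmetic for a 30-day rolling window.
SLO_TARGET = 0.999   # e.g. 99.9% of requests should succeed
WINDOW_DAYS = 30

error_budget = 1 - SLO_TARGET                          # fraction of requests allowed to fail
budget_minutes = error_budget * WINDOW_DAYS * 24 * 60  # equivalent full-outage minutes

def budget_remaining(total_requests: int, failed_requests: int) -> float:
    """Fraction of the error budget still unspent; negative means the SLO is blown."""
    allowed_failures = total_requests * error_budget
    return 1 - failed_requests / allowed_failures

print(round(budget_minutes, 1))                    # 43.2 minutes of downtime allowed
print(round(budget_remaining(1_000_000, 500), 3))  # 0.5 -- half the budget left
```

Once a target is set, this kind of arithmetic tells you how much unreliability you can spend on releases and experiments before you have to slow down, which is exactly the trade-off the workshop asks attendees to practice.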
It’s all about data management this week on the podcast as Brian Dorsey and Mark Mirchandani talk to Google Cloud Product Marketing Manager, Amy Krishnamohan. Amy starts the show by explaining that Cloud SQL is a fully managed relational database service that recently added Microsoft SQL Server to its repertoire. We talk about SQL Server’s migration from 2008R2 to a newer version, the process involved, and how it’s affecting customers. Luckily, Cloud SQL for SQL Server is highly backward compatible, making the process easy for Google Cloud customers! Cloud SQL also offers other tools to make using Microsoft SQL Server easier with Google Cloud, including shortcuts to set up the high availability function.
Amy talks later in the show about which companies are a good fit for Microsoft SQL Server on Google Cloud. She explains the steps to set up and tear down, how licensing works, and what the best use cases are for Microsoft SQL Server on Google Cloud. In the future, Cloud SQL will have a managed AD service available.
A multi-cloud strategy is important, according to Amy. It is up to each company to research cloud services and pick the best vendors and products for themselves and their clients. Cloud SQL for SQL Server is a way to bring two great products together for the benefit of consumers.
Priyanka Vergadia joins Mark Mirchandani today to talk shop with Travis DePuy about all things digital services. Travis is a product evangelist for xMatters, a company that provides digital services for clients in a way that makes it easy for them to “limit the blast radius” as they build and use their projects. At xMatters, customers can build an incident management workflow for their custom services and integrate the tools of their choice. Travis talks about service degradation and how xMatters helps clients optimize and manage their services to control instances of degradation. With programs like Google Stackdriver, xMatters can set limits and get alerts when thresholds are met, then use that information to fix performance issues.
Later in the show, Travis talks about moving a large enterprise like xMatters to the cloud.
Emily Cai of Google is on the podcast today with hosts Brian Dorsey and Mark Mirchandani to talk about Kubernetes Config Connector, which went GA last month. The program helps users manage their Google Cloud resources in a way that is familiar for Kubernetes developers. Emily explains that it’s a great tool for Kubernetes developers looking to easily manage their infrastructure in one place. A platform team managing other teams is a perfect example of large-scale companies who could benefit from this tool, Emily explains.
Walking listeners through the development cycle before and after Kubernetes Config Connector, Emily shines some light on specific instances when this powerful tool could streamline the process of building your project, making it faster and more efficient. She elaborates on the ways Config Connector and Anthos can work together as well.
In the future, the Config Connector team hopes to cover all GCP resources, to create a more clear end-to-end experience for Kubernetes developers, and to allow Config Connector to be enabled straight onto a cluster.
Jon Foust and Mark Mirchandani are joined today by Domile Janenaite and Chris Stephenson of Humanitec. Humanitec, a German startup, helps developers run their code easily and smoothly in various environments. Chris and Domile start off by explaining why Humanitec was founded and what sets it apart from competitors, especially in the way it streamlines devops integration.
Later, we learn how Humanitec is helping developers get the most out of cloud development by not only easily running deployments but also aiding in environment management. Developers can spend more time writing code and less time worrying about how they’ll get it to run. Chris also expands on how they built Humanitec, the reasoning behind their development decisions, and the challenges they faced. Domile goes on to describe the types of teams and companies that Humanitec is best suited for and why.
Aja Hammerly and Brian Dorsey are here this week to start off a new year of podcasts! In an interview with Google Developer Advocate Katie McLaughlin, we talk about the advantages of Python 3 and why version 2 has been retired, as well as the cool things you can do with Django.
Later, Katie discusses the complexities of deployment and how she makes it work smoothly with GCP, and we have some fun with emojis!
Hosts new and old gather together for this special episode of the podcast! We’ll talk about our favorite episodes of the year, the coolest things from 2019, and wrap up another great year together doing what we love! Happy Holidays to all of our listeners, and we’ll see you in the new year!
Gabi Ferrara and Jon Foust are joined today by fellow Googler Zack Akil to discuss machine learning and AI advances at Google. First up, Zack explains some of the ways AutoML Vision and Video can be used to make life easier. One example is how photos in Google Photos are automatically tagged, making them searchable thanks to AutoML. Developers can also train their own AutoML models to detect specific scenarios, such as laughing in a video.
We also talk Cloud Next 2019 and learn how Zack comes up with ideas for his cool demos. His goal is to inspire people to incorporate machine learning into their projects, so he combines hardware and exciting technology to come up with fun, creative ways developers can use ML. Recently, he made a smart AI bicycle that alerts riders to possible danger behind them through a system of lights, as well as a project that tracks and photographs balls as they fly through the air after being kicked.
To wrap it all up, Zack tells us about some cool projects he’s heard people use AutoML for (like bleeping out tv show spoilers in online videos!) and the future of the software.
Happy Thanksgiving! This week, Aja and Brian are talking DevOps with Nathen Harvey and Jez Humble. Our guests thoroughly explain what DevOps is and why it’s important. DevOps purposely has no official definition but can be thought of as a community of practice that aims to make large-scale systems reliable and secure. It’s also a way to get developers and operations to work together to focus on the needs of the customer.
Nathen later tells us all about DevOpsDays, a series of locally organized conferences occurring in cities around the world. The main goal is to bring a cross-functional group of people together to talk about how they can improve IT, DevOps, business strategy, and consider cultural changes the organization might benefit from. DevOpsDays supports this by only planning content for half the conference, then turning over the other half to attendees via Open Spaces. At this time, conference-goers are welcome to propose a topic and start a conversation.
Jez then describes the Accelerate State of DevOps Report, how it came to be, and why it’s so useful. It includes items like building security into the software, testing continuously, ideal management practices, product development practices, and more. With the help of the DevOps Quick Check, you can discover the places your company could use some help and then refer back to the report for suggestions of improvements in those areas.
Mark Mirchandani hosts solo today but is later joined by fellow Googler and Developer Advocate Ray Tsang to talk Java! Ray tells us what’s new with Java 11, including more memory and fewer restrictions for developers. One of the greatest things for Ray is using Java 11 in App Engine because of the management support that it provides.
Later, we talk about Spring Boot on GCP. Ray explains the many benefits of using this framework. Developers can get their projects started much more quickly, for example, and with Spring Cloud GCP, it’s easy to integrate GCP services like Spanner and run your project in the cloud. For users looking to containerize their Java projects, JIB can help you do this without having to write a Dockerfile.
At the end of the show, Ray and Mark pull it all together by explaining how Spring Boot, Cloud Code, Skaffold, and proper dev-ops can work together for a seamless Java project.
Jon and Aja host our guest Donna Malayeri this week to learn all about Cloud Run and Anthos! Designed to provide serverless containers, Cloud Run has two versions: fully managed and Cloud Run for Anthos.
Donna’s passion for serverless projects and containers shows as we discuss how these options benefit developers and customers. With containers, developers are able to go serverless without a lot of the typical restrictions, and because they are a standard format, containers are fairly easy to learn to use. Tools such as Ko can even do the work of generating docker containers for you. One of Cloud Run’s most unique features is that it allows developers to bring existing applications. You don’t have to rewrite your entire app to make it serverless! Developers can also reuse instances, making the process more efficient and cost effective.
Cloud Run for Anthos allows projects to stay on-prem while still enjoying the benefits of containers and the Cloud Run platform.
Later in the show, Donna tells us about Knative, which is the API Cloud Run is based on that helps create portability between Cloud Run versions, as well as portability to other vendors. We also get to hear the weirdest things she’s seen put in a container and run in Cloud Run!
This week, Mark and Jon bring us a fascinating interview with Kami May of Supersolid, a gaming company in London. With the help of Kami May, Supersolid recently launched their first multiplayer game, Snake Rivals. This session-based game puts players in an arena where they can choose from three modes: endless, gold rush, or battle royale.
To produce the game, Supersolid makes use of many GCP products. Snake Rivals is powered by Kubernetes and Agones, which Kami chose because it offers functionality that works well with gaming: it provides server allocation that lets players keep playing even during an update, it can scale, it supports labeling, it allows for different game modes, and more. To reduce latency, Supersolid operates in nine regions. Supersolid uses BigQuery and continuously gathers data so they can make adjustments to ensure gameplay is efficient, fun, and functional. Kami explains that navigating the world of multiplayer gaming for the first time was tricky, but the Google support team has been very helpful!
Happy Halloween! Today, Jon Foust and Brian Dorsey chat with Maria Laura Scuri of FACEIT about ways they are reducing toxicity in gaming. FACEIT is a competitive gaming platform that helps connect gamers and game competition and tournament organizers. In order to do this well, FACEIT has put a lot of energy into finding ways to keep the experience positive for everyone.
Because gaming toxicity can involve anything from verbal jabs to throwing a game, FACEIT uses a combination of data-collection programs and input from players to help identify toxic behavior. In identifying this behavior, FACEIT has to consider not only the literal words spoken or actions taken, but the context around them. Is that player being rude to strangers, or are they egging on a friend? The answer could change the behavior from unacceptable to friendly banter. Using their own machine learning model, interactions are then given a score to determine how toxic the player was in that match.
The toxicity scores, along with their program Minerva, determine whether any bans should be put on a player. FACEIT focuses on punishing the behavior, rather than the player themselves, in an effort to help players learn from the experience and change the way they interact with others in the future.
Maria’s advice to other companies looking to help reduce toxicity on their platforms is to know the context of the toxic event. Know how toxicity can express itself on your platform and find ways to deal with all of them. She also suggests tackling the issues of toxicity in small portions and celebrating the small wins! Her final piece of advice is to focus on criticizing the behavior of the user rather than attacking them personally.
We’re sad to say goodbye to Mark Mandel this week but excited to bring you an interview he and guest host Robert Martin did with Björn Lindberg of Massive Entertainment. The gaming studio is located in Sweden and owned by Ubisoft. Their most recent game, The Division 2, is a “looter shooter” game that was released in March. It can be played solo or users can be matched up to play with or against others.
To keep the game running smoothly, Massive employs a micro-service architecture to divide and conquer the trials of creating and running such a large, intense game. The Division 2 was launched with Google Cloud, a process Björn says was a bit easier than launching on physical hardware. Autoscaling in the cloud has created a simpler, more trustworthy gaming process as well, and by connecting to data centers in multiple regions, they’re able to decrease latency.
Gabi Ferrara and Jon Foust are back today and joined by fellow Googler Manuel Lima. In this episode, Manuel tells us all about data visualization, what it means, why it’s important, and the best ways to do it effectively.
For Google and its mission, data visualization is especially necessary in facilitating the accessibility of information. It “makes the invisible visible” because of the way it can decode meaningful data patterns. Working across multiple GCP products, Manuel and his team build advanced visualization models that go beyond graphs and bar charts to things like sophisticated timelines that aid in the progression from data to usable knowledge. They have also created guidelines for things like what kind of graphical language to use, what type of charts users might need, and more. These guidelines, originally used only internally, have now been adjusted and released for use by developers outside Google with the help of the Material.io team.
The guidelines are based around the six data visualization principles that help users get started. They can be employed to plan and inspire an entire project or to evaluate a specific data visualization chart. Some of the most important principles are to be honest and to lend a helping hand. You can read more in their Medium article, Six Principles for Designing Any Chart.
Today on the podcast, Gabi Ferrara and Jon Foust share a great interview with Laura Ham, Community Solution Engineer at SeMI Technologies. At SeMI Technologies, Laura works with their project Weaviate, an open-source knowledge graph program that allows users to do a contextualized search based on inputted data. However, unlike traditional databases, Weaviate attaches meanings and links within the data.
Laura details what knowledge graphs are and how they can be useful for both small and large projects. Explaining that ontology is the meaning of words, she tells us how Weaviate is able to use this concept to make more specific data entries and links, allowing users to perform better and more informative searches. Weaviate is able to do this with the help of Kubernetes. Later, Laura tells Gabi and Jon the ways Weaviate helps developers and users with thorough documentation, assistance with troubleshooting, and support from solution engineers.
Our guests Matthew Tamsett and Ravi Upreti join Gabi Ferrara and Aja Hammerly to talk about data science and their project, Qubit. Qubit helps web companies by measuring different user experiences, analyzing that information, and using it to improve the website. They also use the collected data along with ML to predict things, such as which products users will prefer, in order to provide a customized website experience.
Matthew talks a little about his time at CERN and his transition from working in academia to industry. It’s actually fairly common for physicists to branch out into data science and high-performance computing, Matthew explains. Later, Ravi and Matthew talk GCP shop with us, explaining how they moved Qubit to GCP and why. Using Pub/Sub, BigQuery, and BigQuery ML, they can provide their customers with real-time solutions, which allows for more reactive personalization. Data can be analyzed and updates can be created and pushed much faster with GCP. Autoscaling and cloud management services provided by GCP have given the data scientists at Qubit back their sleep!
Mark Mandel and Jon Foust return this week to host Jesse Houston, CEO of Phoenix Labs. Jesse goes into detail about their online, multiplayer game Dauntless, a hunting action game that brings friends together from every platform to fight giant monsters. Users can even switch platforms, say from Xbox to PlayStation, and pick up right where they left off.
Later in the show, Jesse describes the hurdles of building such a huge game and how Phoenix Labs overcame them. Late nights and holiday hours helped them create “no downtime deploys”, so users can continue to play even as the game updates. Because big projects sometimes come with big problems, Jesse also emphasized the importance of developing crisis management skills to help get through tough times. We talk more specifically about what it takes to build and run Dauntless, from GCP products such as GKE, Bigtable, and BigQuery, to tricks with scaling and management. In the future, Dauntless will be available on the Switch, new expansions will be released, and more.
Conversational AI is our topic this week as your hosts Mark Mirchandani and Priyanka Vergadia are joined by Cathy Pearl and Jessica Dene Earley-Cha. Cathy explains what conversational AI is, describing it as people teaching computers to communicate the way humans do, rather than forcing humans to communicate like computers.
Later, we talk best practices in design and development, including how a good conversation design and sample dialogues before building can create a better product. This prep work helps anticipate the ways different users could respond to the same question and how the program should react. In multi-modal programming, planning is also important. Our guests suggest starting with the spoken portions of the design and then planning visual components that would augment the experience. Working together as a team is one of the most important parts of the planning process.
We also talk best use cases for conversational AI. Does performing this task via voice make the experience better? Does it make the task easier or more accessible? If so, that could be a great application. In the future, conversation may even become silent communication with the help of MIT’s AlterEgo.
On the podcast this week, we have a great interview with Google Developer Advocate, Dale Markowitz. Aja Hammerly and Jon Foust are your hosts, as we talk about machine learning, its best use cases, and how developers can break into machine learning and data science. Dale talks about natural language processing as well, explaining that it’s basically the intersection of machine learning and text processing. It can be used for anything from aggregating and sorting Twitter posts about your company to sentiment analysis.
For developers looking to enter the machine learning space, Dale suggests starting with non-life-threatening applications, such as labeling pictures. Next, consider ahead of time the possible mistakes the application can make, to help mitigate issues. To help prevent the introduction of bias into the model, Dale suggests exposing it to as many different types of project-appropriate data sets as possible. It’s also important to continually monitor your model.
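Dale’s advice about bias and monitoring can be sketched as a per-slice accuracy check: evaluate the model separately on each kind of data it will see, and watch for gaps between slices. The slice names and results below are made up purely for illustration:

```python
from collections import defaultdict

def accuracy_by_slice(examples):
    """Compute accuracy separately for each data slice.

    examples: list of (slice_name, predicted_label, true_label) tuples.
    A large gap between slices can hint at bias or holes in the training data.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for slice_name, predicted, actual in examples:
        total[slice_name] += 1
        if predicted == actual:
            correct[slice_name] += 1
    return {s: correct[s] / total[s] for s in total}

# Hypothetical image-labeling results, grouped by lighting condition.
results = [
    ("daylight", "cat", "cat"), ("daylight", "dog", "dog"),
    ("daylight", "cat", "cat"), ("daylight", "dog", "cat"),
    ("low_light", "dog", "cat"), ("low_light", "cat", "cat"),
]
print(accuracy_by_slice(results))  # {'daylight': 0.75, 'low_light': 0.5}
```

Running a check like this regularly, not just at launch, is one simple way to act on the “continually monitor your model” advice.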
Later in the show, we talk Google shop, learning about all the new features in Google Translate and AutoML.
Michelle Casbon is back in the host seat with Mark Mirchandani this week as we talk data science with Devoted Health Director of Data Science, Chris Albon. Chris talks with us about what it takes to be a data scientist at Devoted Health and how Devoted Health and machine learning are advancing the healthcare field. Later, Chris talks about the future of Devoted Health and how they plan to grow. They’re hiring!
At Devoted Health, they emphasize knowledge, supporting a culture of not just machine learning but people learning as well. Questions are encouraged and assumptions are discouraged in a field where a tiny mistake can change the care a person receives. Because of this, their team members not only have a strong data science background, they also learn the specific nuances of the healthcare system in America, combined with knowledge of the legal and privacy regulations in that space.
How did Chris go from Political Science Ph.D. to non-profit data science wizard? Listen in to find out his storied past.
Google’s own Billy Jacobson joins hosts Mark Mandel and Mark Mirchandani this week to dive deeper into Cloud Bigtable. Bigtable is Google’s petabyte-scale, fully managed, NoSQL database. Billy elaborates on which projects Bigtable works best with, like time-series data and user analytics, and why it’s such a great tool. It offers huge scalability with the benefits of a managed system, and it’s flexible and easily customized, so users can turn on and off the pieces they need.
Later, we learn about other programs that are compatible with Bigtable, such as JanusGraph, OpenTSDB, and GeoMesa. Bigtable also supports the API of HBase, an open-source project similar to Bigtable. Because of this, it’s easy for HBase users to move to Bigtable, and the Bigtable community has access to many open-source libraries. Billy also talks more about the nine clients available, and when customers might want to use Bigtable instead of, or in conjunction with, other Google services such as Spanner and BigQuery.
Mark Mirchandani is back this week with guest host Gabe Weiss to learn about HerdX. Our guests, Ron Hicks and Austin Adams, describe how this idea came about, the mechanics of the system, and how it could change the world of livestock.
HerdX is an environmentally friendly, humane way to improve the system of livestock management and sales. It uses monitoring systems to follow animals as they move about the field, then employs algorithms to identify any problems that may need attention. This allows for treatment of specific animals, rather than mass treatment of both healthy and unhealthy livestock. When pitted against humans, HerdX’s AI system could pinpoint problem livestock much faster and more accurately than people could. Once problem livestock are found, the rancher can use that information to devise and implement a treatment plan. Consumers benefit from HerdX as well, through better-quality meat and better transparency of rancher practices. The players in the supply chain are recorded and meat is monitored through the entire process, from farm, to feedlot, to the dinner table. Because sick animals can be removed or treated and the supply chain runs much more efficiently, meat spoilage and food poisoning can be mitigated.
On the show today, we speak with Developer Advocate and fellow Googler Sherol Chen about machine learning and AI. Jon Foust and Aja Hammerly learn about the history and impact of AI and ML on technology and gaming. What does it mean to be human? What can machines do better than humans, and what can humans do better than machines? These are the large questions we aim to answer in order to understand and use AI. Sherol goes on to explain the types of models machine learning can employ, from neural networks to decision trees.
Sherol also goes into depth about the potential social impact of AI as it assists doctors in parsing medical records and helps plan agricultural endeavors to maximize food production and safety. She elaborates as well on the ethical responsibilities we must recognize when developing AI projects.
For developers looking to build a new AI project, Sherol outlines the pros and cons of using existing tools like Cloud Speech-to-Text, AutoML and AutoML Tables.
Jon Foust joins Mark Mirchandani this week as we meet up with Alim Karim of NetApp and Dean Hildebrand, a Technical Director in Google’s Office of the CTO (OCTO). NetApp has been in data management for 20 years, focusing on providing on-prem, high-performance storage solutions for large industry clients. Their recent partnership with Google Cloud has allowed them to expand their services, offering the same great data management and storage in the cloud.
Dean and Alim elaborate on the best uses for NetApp, explaining that lifting and shifting an existing project to the cloud is only one way NetApp can be useful. New projects can be built right in Google Cloud with NetApp as well. Our guests discuss the other pros of the NetApp service, including faster data retrieval, better monitoring, and predictability. We also talk about how NetApp takes customer feedback into consideration to make sure their service is the best it can be for every client. What’s in store for the future of NetApp? Listen in to find out!
The podcast today is all about conversational AI and Dialogflow with our Google guest, Priyanka Vergadia. Priyanka explains to Mark Mirchandani and Brian Dorsey that conversational AI includes anything with a conversational component, such as chatbots, in anything from apps, to websites, to messenger programs. If it uses natural language understanding and processing to help humans and machines communicate, it can be classified as conversational AI. These programs work as translators so humans and computers can chat seamlessly.
We discuss how people interact with conversational AI, maybe without even realizing it. From asking Google Home to set your alarm to getting customer service support at your favorite online store, AI is probably working behind the scenes to help. Priyanka also tells us all about Google’s natural language understanding and processing program, Dialogflow. Designed to simplify the process, Dialogflow allows you to input a simple idea like asking for coffee, and watch as the program automatically includes many of the different ways people would naturally ask for coffee. Coffee would be great right now!
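To make the coffee example concrete, here is a toy intent matcher that scores user text against training phrases by word overlap. This is purely illustrative; Dialogflow’s natural language understanding is far more sophisticated, and the intent names and phrases below are invented:

```python
import string

# Invented intents with a few training phrases each.
INTENTS = {
    "order_coffee": ["I want a coffee", "get me a coffee", "can I have a coffee please"],
    "check_weather": ["what is the weather", "will it rain today"],
}

def _words(text: str) -> set:
    """Lowercase, strip punctuation, and split into a set of words."""
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return set(cleaned.split())

def match_intent(text: str) -> str:
    """Return the intent whose training phrases share the most words with the text."""
    best_intent, best_score = "fallback", 0
    for intent, phrases in INTENTS.items():
        for phrase in phrases:
            score = len(_words(text) & _words(phrase))
            if score > best_score:
                best_intent, best_score = intent, score
    return best_intent

print(match_intent("Could I get a coffee?"))  # order_coffee
```

Even this crude matcher shows why listing many phrasings per intent matters: the more varied the training phrases, the more natural ways of asking will resolve to the right intent.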
Listen in to find out the best (and worst) use cases and practices for this powerful tool!
Jon Foust and Mark Mirchandani are joined by Adé Mochtar to discuss the IT learning platform Instruqt and how its team creates and manages the platform with the help of Google Cloud. Sandeep of Google stops in with info on the Instruqt arcade games we saw at Google Next ‘19.
Instruqt’s main philosophy is that people learn best by doing, and their courses encourage immersion right off the bat. Developers are asked coding questions and allowed to work in sandbox environments to fully expose them to the subject. Instruqt checks the student’s work as they continue through the program to ensure the material is being properly learned.
But learning should be fun, too! By putting developer challenges on old-style arcade machines, developers can test their coding skills, learn new things, and have fun at the same time. At conferences, this has been a great way to engage their target audience. Google Cloud games were run on the Instruqt platform at Next ‘19, and conference attendees came back day after day to try to get on the high score leaderboard. It was a super fun way to get people using Google Cloud technologies!
Blockchain takes the spotlight as new host Carter Morgan joins veteran Mark Mandel in a fascinating interview with Allen Day. Allen is a developer advocate with Google, specializing in streaming analytics for blockchain, biomedical, and agricultural applications.
This week Allen reveals how blockchain and cryptocurrencies can be applied to a variety of applications, like distributed file storage and video services. We also discuss the hype and merits of blockchain, as well as projects Allen has worked on to analyze cryptocurrency transactions using Google Cloud’s big data platforms. The results may just surprise you.
This week on the podcast, Yuri Litvinovich of Scotiabank was able to join Mark Mirchandani and Michelle Casbon to talk about migration from on-prem and their partnership with Google Cloud. Mark Mandel stops in with some cool things of the week and the question of the week, too!
With Yuri’s help, Scotiabank is working to become a modern financial services technology company. Their transition from working mostly on-prem to working in the cloud was exciting for him as he discovered how much cheaper, faster, and more secure large enterprise projects can be in the public cloud. Three years ago, Scotiabank’s CEO began encouraging this shift to keep the company up-to-date, with funds allocated to moving all their thousands of applications and products to a more efficient system.
To accomplish this, Yuri turned to Kubernetes to make use of containers. Because containers are lightweight and behave consistently across different environments, the modernization at Scotiabank went much more smoothly with Kubernetes and GKE. They also use a mix of managed services like BigQuery, Dataflow, and Pub/Sub, as well as made-from-scratch applications that make the Google products compatible with Scotiabank’s existing software. Yuri believes this was a key to their success in the migration from on-prem to the cloud.
In the process of migration, Yuri experienced some pushback from developers who were concerned about the move. He encouraged them not to “lift and shift” their projects, but to completely rebuild them with cloud DevOps principles in mind. Yuri’s goal was to convince developers that doing this would result in projects that were much easier, cheaper, and more secure to run in the long term. By outlining the benefits and goals of migration and sharing success stories of other businesses that have moved to Kubernetes and the cloud, Scotiabank was able to convince developers of its importance. Yuri also encourages trust and cooperation between teams.
Happy Independence Day to our American listeners! Mark Mandel is back today as he and Gabi Ferrara interview Bill Creekbaum of Informatica to learn how they work with Google Cloud for a better big data user experience. Mark Mirchandani is hanging around the studio as well, bringing some cool things of the week and helping with the question of the week!
Informatica provides data managing products that offer complete solutions focusing on metadata management, integration, governance, security, data quality, and discoverability. Bill’s job at Informatica is to ensure these products really take advantage of the strengths of Google Cloud Platform. One such example is a product that allows customers to design in Informatica and push their projects to Cloud Dataproc. Informatica also offers similar capabilities in BigQuery. When moving data from on-prem to the cloud, customers can use Informatica and Google Cloud together for a seamless transition, cost savings, and easier data control.
Together, Informatica and Google Cloud can also facilitate the acquisition of high quality data. To produce better, more trustworthy output, the input data needs to be safe to access, have few or no duplicates and null values, and be complete. To achieve this, developers usually use a combination of the Informatica tools Intelligent Cloud Services, Enterprise Data Catalog, and Big Data Management, and the Google tools BigQuery, Cloud Storage, Analytics, Dataproc, and Pub/Sub.
Bill’s closing advice for companies comes in three parts: take stock of the data you’ve got, set goals, and develop a well-rounded team.
On this episode, our hosts Mark Mirchandani and Gabi Ferrara dive into Google Cloud Platform UX with guest and Google Product Designer Michael Kleinerman. Michael’s path to Product Designer started with “ancient” tech designing with Flash and 3D motion graphics and progressed from there through interaction designer to his place now with Google. His experience has helped him appreciate the many different kinds of designers needed for projects and how they have to work together for a good product.
At Google, Michael’s team builds design systems that create a balance between what Google uses and what the products built on Google use. He adapted Material Design, which offers guidelines for design patterns and components, for Google Cloud. Material Design spans multiple devices and screen sizes, helping to simplify design everywhere. When Cloud reached the enterprise space, where components can be more complex, Michael’s team worked to adjust Cloud using Material Design so that features like tables would work correctly.
Accessibility is also a top priority for Cloud and the design team. To begin the process of designing for accessibility, the team finds the top three or so reasons that a user would come to their product and ensures those are accessible to all. The next step is to improve usability in the second-tier features of the product, and then in all features beyond. Using a screen reader, they go through the product to see if it’s usable and work to make the experience better. The team also maintains plenty of guidance pages.
The goal in product design is to make things simple and consistent for everyone.
This week, Jon Foust and Michelle Casbon bring you another fascinating interview from our time at Next! Michelle and special guest Amanda were able to catch up with Paco Nathan of Derwen AI to talk about his experience at Next and learn what Derwen is doing to advance AI.
Paco and Derwen have been working extensively on ways developer relations can be enhanced by machine learning. Along with O’Reilly Media, Derwen just completed three surveys, called ABC (AI, Big Data, and Cloud), to look at the adoption of AI and the cloud around the world. Of particular interest in these studies is a comparison between countries that have been using AI, Big Data, and Cloud for years and countries that are just beginning to get involved. One of the most interesting things they learned is how much budget companies are allocating to machine learning projects. They also noticed that more and more large enterprises are moving, at least partially, to the cloud.
One of the challenges Paco noticed was the difference between machine learning projects in testing versus how they act once they go live. Here, developers come across issues of bias, ethics, and safety. Good data governance policies can help minimize these problems. Developing good data governance policies is complex, especially with security issues, but it’s an important conversation to have. In the process of compiling the survey data, Paco discovered that many big companies spend a lot of time on this issue and even employ checklists of requirements before projects can go live.
In his research, Paco also discovered that about 54% of companies are non-starters. Usually, their problems stem from tech debt and issues with company personnel who do not recognize the need for machine learning. The companies working toward integrating machine learning tend to have trouble finding good staff. Berkeley is working to solve this problem by requiring data science classes for all students. But as Paco says, data science is a team sport that works best with a team of people from different disciplines. Paco is an advocate of mentoring, to help the next generation of data scientists learn and grow, and of unbundling corporate decision making to help advance AI.
Amanda, Michelle, and Paco wrap up their discussion with a look toward how to change ML biases. People tend to blame ML for biased outcomes, but models reflect the data we feed them. Humans have to work around that by looking at things from a different perspective and taking steps to avoid as much bias as we can. ML and humans can work together to find these biases and help remove them.
Your favorite Marks Mirchandani and Mandel are back hosting this week to touch base with Angela Yu about recent updates in Google Maps. As Angela describes Google Maps at a high level, it is your window into the real world, with coverage of Earth’s land and oceans. Google works hard to keep that information updated with satellite pictures, street view Google vehicles, and even backpacks for hikers to record hard to reach areas.
The Google Maps API makes it easy for developers to use Maps data in their own projects. It can be used for something as simple as showing a location, or for something more complicated, like showing users specific things around them to help them make decisions. Game developers can create rich experiences by building real-world gaming situations with Maps and augmented reality. The Places API can surface parks, government buildings, and other points of interest beyond streets. And the Routes API can expand the user experience by providing directions, tracking drivers in real time, and more. Maps also works well with Google Cloud, pairing with BigQuery to query huge amounts of data and visualize the results on a map.
In the future, Angela is particularly excited about how ridesharing apps will continue to use Maps and Routes to optimize their businesses. She also looks forward to more augmented reality projects beyond gaming, where data, directions, and more are overlaid on the physical world.
Google Developer Advocate Jen Person talks with Mark Mandel and Mark Mirchandani today about developments in Firebase. Firebase is a suite of products that helps developers build apps. According to Jen, it’s equivalent to the client-side of Google Cloud. Firebase works across platforms, including Android, the web, and iOS, and offers many growth features, setting it apart from other Google products. It helps site and app owners interact with and reach customers with services like notifications, remote configurations to optimize the app, testing, and more.
Cloud Firestore has come out of beta, and it is available both through Firebase and Google Cloud Platform, making it easy for developers to move from one to the other if their needs change.
Recently, the Firebase team has been working to refine their products based on user feedback. Firebase Authentication has been upgraded with the additions of phone authentication, email link authentication, and multiple email actions. They’ve also added a generic authentication option so developers can use any provider they choose.
ML Kit makes machine learning much easier for client apps or on the server. With on-device ML features, users can continue using the app without internet service. Features like face recognition can still run quickly without a network connection. ML Kit is adding new features all the time, including smart reply, translation, image labeling, facial feature detection, and more.
Cloud Functions for Firebase is also out of beta. It includes new features such as a Crashlytics trigger, which can notify you if your site or app crashes, and scheduled functions. An emulator is new as well, so you can test without touching your live code.
Jon Foust is back this week, joining Mark Mirchandani for an in-depth look at Stackdriver with fellow Googler, Rory Petty. To start, Product Manager Rory explains that Stackdriver is a full observability solution for Google Cloud (as well as other clouds). We touch on how monitoring, logging, and APM tools allow developers and operators to fully understand how a website is performing. In addition to Monitoring and Logging, the suite of Stackdriver tools also includes Debugger, Trace, and Profiler to help users not only monitor their sites, but to solve problems that occur.
Stackdriver Monitoring and Logging support Google Cloud services out of the box. With Monitoring, users can set up alerts so that if something goes awry, they are notified immediately and can address the problem. Alerts can also be custom designed to inform developers of things like the number of checkouts on your e-commerce site, the amount of time between checkouts, and more. Stackdriver Monitoring allows blackbox monitoring, too, to make sure your service is healthy. The Monitoring dashboard makes it really easy to get started, with a resources section of pre-made dashboards, so developers don’t have to do a lot of configuration up front. However, if you need a more customized dashboard, that is also possible in Stackdriver Monitoring.
At Cloud Next earlier this year, Stackdriver announced Service Monitoring in alpha, which shows users a map of their microservices architecture. Public beta will hopefully be later this year. Stackdriver Sandbox, another recent project currently in the alpha stage, gives people an easy way to configure a test Stackdriver environment. This way, developers can play with Stackdriver tools without affecting their websites. Stackdriver Profiler, a great tool to understand the performance of your system, went GA at Cloud Next as well.
Stackdriver’s tools are all meant to work together to help you maintain and perfect development projects on many different cloud services and on-prem.
Today on the podcast, we’re speaking with Chris Aniszczyk about the Linux Foundation and the important work they do to further the advancement of technology through open source initiatives. Mark and Mark are your hosts this week, and they begin by speaking with Chris about what the Linux Foundation is and how it’s unique.
The Linux Foundation, while seeking to support open source projects, sets itself apart by also providing professional services such as marketing, technical writing, legal help, and running events. It acts as a parent foundation for smaller open source foundations like Cloud Native Computing Foundation, Node.js Foundation, and the Automotive Linux Foundation, which strives to bring open source to the automotive industry.
Though companies can typically be leery of working with competitors, The Linux Foundation has been successful in bringing companies together to create useful software that benefits everyone. Collaboration can be easier when done through the foundation. Chris also actively reaches out to companies in industries that don’t typically engage in open source practices and encourages them to consider working together to make their industry better. Specifically, Chris works with companies within CNCF and the Open Container Initiative.
Michelle and Mark are together again this week to talk with John Bohannon about AI startup, Primer. His goal is to build systems that continuously read documents and write about what they discover. He discusses his recent work building a self-updating knowledge base and the research his team just published.
Perhaps most interesting is the circuitous path he took to get to Primer. Hear about his adventures along the way to becoming a data scientist specializing in natural language processing. How does a microbiologist who developed a pregnancy test for fish get distracted by Python? What does contemporary dance have to do with establishing AI policy? Join us as he weaves a common thread along his career path: encountering interesting problems and discovering creative ways to solve them.
Mark Mirchandani and Michelle Casbon take over the show this week to discuss AI and the PAIR Guidebook to Human-Centered AI. Mark Mandel pops in on the interview, and Di Dang, Design Advocate at Google, talks about her role in designing and building the guidebook with the intent of helping others create quality AI projects.
Di describes human-centered AI as a practice of not only being conscious of the project being built, but also considering how this AI project will impact us as humans at the end of the day. We influence machine learning so much, both intentionally and unintentionally, and it’s our job to look at the project and results as a whole.
In the guidebook, topics like data bias in machine learning, what design patterns work, how to establish trust with the user, and more are addressed. Di explains that the guidebook is a work in progress that will develop with input from users and advances in technology.
On the podcast today we have a fascinating interview from our time at Cloud Next ‘19! Mark and Jon went in-depth with Andrew Davidson about MongoDB to find out what they do and how they do it.
MongoDB is a document database that stores JSON natively, making it super easy for developers to work with data in a way that’s similar to how they think about building applications. The database is scalable, highly available by default with built-in replication, has an intuitive query language, and can be run anywhere.
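To picture the document model described above, JSON-style documents map naturally onto the nested dicts and lists developers already use when building applications. The sketch below is plain Python, not MongoDB's driver API, and the collection and field names are invented for illustration:

```python
# JSON-style documents: nested dicts and lists, no rows/columns translation.
orders = [
    {"customer": "ada", "items": [{"sku": "kb-1", "qty": 2}], "total": 90},
    {"customer": "grace", "items": [{"sku": "ms-7", "qty": 1}], "total": 25},
]

# A MongoDB-like query filter such as {"total": {"$gt": 50}} corresponds to
# matching a field directly against each document:
big_orders = [o for o in orders if o["total"] > 50]
print([o["customer"] for o in big_orders])  # → ['ada']
```

In a real driver the filter is passed to the database, which uses indexes rather than scanning every document, but the mental model of "query shapes that look like the documents themselves" is the same.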
MongoDB Atlas is a global database service that runs on Google Cloud; it automates deployment and provisioning, as well as ongoing operations such as maintenance, upgrades, and scaling with no downtime. Atlas uses a declarative model to manage your databases, is easy to migrate to, and offers advanced features such as global clusters for low-latency read and write access anywhere in the world.
In the future, Andrew sees a world where we think in terms of JSON-style documents instead of just tables. MongoDB can help make that happen.
Ann Wallace and Michael Wallman are here today to teach Aja and Mark about Professional Services Organization (PSO) at Google Cloud. PSO is the “post sales” department, helping clients come up with solutions for security, data migration, AI, ML, and more. Listen in to this episode to learn more about the specifics of the PSO!
Mark Mirchandani is our Mark this week, joining new host Michelle Casbon in a recap of their favorite things at Next! The main story this episode is Cloud Run, and Gabi and Mark met up with Steren Giannini and Ryan Gregg at Cloud Next to learn more about it.
Announced at Next, Cloud Run brings serverless to containers! It offers great options and security, and the client only pays for what they use. With containers, developers can use any language, any library, any software, anything!
Two versions of Cloud Run were released last week. Cloud Run is the fully managed, hosted service for running serverless containers. The second version, Cloud Run on GKE, provides a lot of the same benefits but runs the compute inside your own Kubernetes cluster. It’s easy to move between the two if your needs change.
Welcome to day three of Next! More awesome interviews await in this episode, as hosts Mark Mirchandani, Aja Hammerly, Mark Mandel, Jon Foust and their guests explore more of Next.
To start, Dan of Viacom joins Mark and Jon to talk about his job in the TV business and why he loves Istio.
Host-turned-guest Aja and Lauren of the Developer Relations team sat in the booth to talk with the Marks about the developer keynote at Next. Aja and Lauren elaborate on how they work to promote Next and put together content inclusive of all aspects of Google Cloud.
Mark and Mark hear how Yuri from Scotiabank is using Kubernetes to help advance Scotiabank’s latest projects. Anthony from Google joins the conversation, too.
And lastly, we tease you with a short interview with Andrew of MongoDB to speak more on the partnership between MongoDB Atlas and Google Cloud. Andrew will be joining us for a full interview on the podcast later this year!
The podcast celebrates day two of Next as our hosts speak with some more conference attendees. Andre came by to talk with Aja and Jon about his work with Stackdriver IRM and their mission for fewer, shorter, and smaller outages.
We had three hosts in the booth with guest, Anne, who works for the GCP Trust and Security Product Team. Brian, Mark, and Aja find out exactly what Anne does at GCP and how she’s enjoying Next!
Brian and Mark also met up with Mario who came all the way from Munich, Germany. Mario runs the Cloud Community in his hometown, and he shared his thoughts on Anthos and what he’s excited about at Next.
Last but not least, Valentin stopped by to talk with Mark and Jon about Go and the presentation he’s giving at Next on site performance.
We’re at Cloud Next this week with special guests, special hosts, and more! On day one, Gabi and new host Mark Mirchandani were able to speak with Jonathan Cham, Customer Engineer at Google Cloud, about his experiences with Google Next. Ori of the Cloud SQL team shared exciting news about Cloud SQL Server.
Later, Aja was joined by co-host Brian Dorsey who elaborated on his Next talk, as well as his favorite things at Next. They were able to get a quick interview with Matt and Nate about Skuid and what they’re looking forward to at Cloud Next. Jose and Bryan of Onix stopped by as well to talk about their company and their experiences in comedy!
Gabi is back with Mark this week in an interview with Connor Gilbert of StackRox, a Kubernetes security company. StackRox uses Kubernetes and containers to maximize security for customers across the container lifecycle. Connor explains how they monitor your containers through building, deploying, and finally the running of the application, and keep your project secure through all stages. StackRox identifies risks and weak areas, then responds in real time.
Connor’s advice for our listeners is to understand what’s going on with your containers and your application. Look at the data, the specs, and your options and then, if needed, adjust the defaults to optimize the security of your app.
Today on the podcast, we speak with Ian Buck and Kari Briski of NVIDIA about new updates and achievements in deep learning. Ian begins by telling hosts Jon and Mark about his first project at NVIDIA, CUDA, and how it has helped expand and pave the way for future projects in supercomputing, AI, and gaming. CUDA is used extensively in computer vision, speech and audio applications, and machine comprehension, Kari elaborates.
NVIDIA recently announced their new Tensor Cores, which maximize their GPUs and make it easier for users to achieve peak performance. Built on Tensor Cores, TensorFlow AMP is an acceleration feature integrated into the TensorFlow framework. It automatically makes the right choices for neural networks and maximizes performance, while still maintaining accuracy, with only a two-line change to a TensorFlow script.
Just last year, NVIDIA announced their T4 GPU with Google Cloud Platform. This product is designed for inference, the other side of AI. Because AI is becoming so advanced, complicated, and fast, the GPUs on the inference side have to be able to handle the workload and produce results just as quickly. T4 and Google Cloud accomplish this together. Along with T4, NVIDIA has introduced TensorRT, a software framework for AI inference that’s integrated into TensorFlow.
World Pi Day is behind us, but our guest today, Emma Iwao, joins hosts Gabi and Mark to teach us all about pi. Pi is the ratio of a circle’s circumference to its diameter. Anytime you see a circle on a computer, pi has been used. It’s vital for everything from gaming to calculating rocket trajectories!
Emma crushed the world record for calculating digits of pi using Google Cloud over four months! Listen in to hear more about how she did it!
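For a flavor of what computing digits of pi involves (Emma's record-setting run used far more sophisticated algorithms and cloud infrastructure than this), here is a small sketch using Machin's 1706 formula and Python's standard `decimal` module:

```python
from decimal import Decimal, getcontext

def arctan_recip(x: int, prec: int) -> Decimal:
    """arctan(1/x) by its Taylor series, accurate to roughly `prec` digits."""
    getcontext().prec = prec + 10          # extra guard digits
    eps = Decimal(10) ** -(prec + 5)
    term = total = Decimal(1) / x          # first term: 1/x
    n, sign = 0, 1
    while abs(term) > eps:
        term /= x * x                      # 1/x^(2n+1) -> 1/x^(2n+3)
        n += 1
        sign = -sign
        total += sign * term / (2 * n + 1)
    return total

def machin_pi(prec: int) -> str:
    """Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    pi = 16 * arctan_recip(5, prec) - 4 * arctan_recip(239, prec)
    return str(pi)[: prec + 1]             # "3." plus prec-1 digits

print(machin_pi(30))  # → 3.14159265358979323846264338327
```

This converges in a few dozen terms per arctangent; record attempts instead use formulas like Chudnovsky's, which gain many digits per term, plus heavy parallelism.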
Jon Foust is back with Mark this week as we talk about SAP HANA, a data and application platform. Lucia Subatin and Kevin Nelson elaborate, explaining that SAP HANA is engineered for running SAP business applications. It is capable of handling large transactions very quickly and with great flexibility. With HANA, you don’t move data around, so you can run transactional workloads as well as analytics on the same platform.
By teaming up with GCP, SAP HANA ensures that their enterprise users will have scalability and storage no matter how their businesses grow. GCP and SAP HANA developers have been working together to continue to make the products better.
Mark and Brian Dorsey spend today talking Python with Dustin Ingram. Python is an interpreted, dynamically typed language, which encourages very readable code. Python is popular for web applications, data science, and much more! Python works great on Google Cloud, especially with App Engine, Compute Engine, and Cloud Functions. To learn more about best (and worst) use cases, listen in!
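As a tiny illustration of the readability Dustin describes, a common task like counting word frequencies reads almost like its English description with the standard library (the word list here is made up):

```python
from collections import Counter

# "Count how often each word appears, then show the most common one."
words = ["cloud", "functions", "app", "engine", "compute", "engine"]
counts = Counter(words)
print(counts.most_common(1))  # → [('engine', 2)]
```

There is no boilerplate between the intent and the code, which is a large part of why Python is popular for web applications and data science alike.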
Node.js is our topic this week as Mark and first-time host, Jon Foust, pick the brain of Myles Borins. Myles updates us on all the new things happening with Node.js, including the new .dev site that holds a ton of documentation to help people get started. Node.js now integrates with Cloud Build, the Node.js foundation has some new developments, and Google App Engine supports Node.js. The group has also been working on serverless containers.
We’re learning all about Cloud SQL this week with our guest, Amy Krishnamohan. Amy’s main job is to teach customers about the products she represents. Today, she explains to Mark and Gabi that Cloud SQL provides managed services for open source databases, and she spends a little time elaborating on the other database management services Google has to offer.
Cloud SQL is a relational data storage solution. Relational data storage is very structured, almost like a table or spreadsheet, making it easier to analyze the data. Cloud SQL is capable of scaling out and up, meaning it can scale for traffic patterns and for storage. In comparison, NoSQL databases are very unstructured: if you’re not sure what kind of data is coming in, you can store the data first and analyze it later. Each approach has its pros and cons, and each is suitable for different types of projects.
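The structured, table-like nature of relational storage can be sketched with Python's built-in `sqlite3` module standing in for a managed database; the schema and data below are invented for illustration:

```python
import sqlite3

# A relational store enforces structure up front: every row fits the schema.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        id          INTEGER PRIMARY KEY,
        customer    TEXT NOT NULL,
        total_cents INTEGER NOT NULL
    )
""")
conn.executemany(
    "INSERT INTO orders (customer, total_cents) VALUES (?, ?)",
    [("ada", 1999), ("grace", 4250), ("ada", 500)],
)

# Because the structure is known, analysis is a simple aggregate by column.
for customer, cents in conn.execute(
    "SELECT customer, SUM(total_cents) FROM orders "
    "GROUP BY customer ORDER BY customer"
):
    print(customer, cents)  # → ada 2499, then grace 4250
```

A NoSQL store would accept each record whatever its shape, deferring this kind of schema decision (and the easy aggregation that comes with it) until read time.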
Recently, Cloud SQL released a feature making it easy to move from on-prem to the cloud. In the future, they will continue to streamline the process of moving between the two spaces.
Today, Mohamed El-Geish joins us to talk about the voice AI technology powering Voicea. Gabi is back on the host bench with Mark as we learn how Voicea can improve productivity. EVA, the voice assistant, will record important information for you so you can focus on your meeting and will create task lists to help you stay organized. Voicea integrates well with multiple platforms to help accomplish your goals as well. You can send messages to Slack, add tasks to your Basecamp list, and more.
Mohamed explains the process of building Voicea and how machine learning techniques and user feedback have helped make it such a useful tool. Now, Voicea is working to incorporate video, allowing users to play back things like important meeting slides.
First-time host, Aja, joins Mark today to talk Go Cloud Functions with two Google colleagues! Stewart, lead Product Manager on Google Cloud Functions, and Tyler, Developer Programs Engineer at Google, start the show by explaining the purpose of Cloud Functions. It is a serverless compute product that supports many programming languages, scales automatically, and only charges for what you use. It works best for event-driven computing; in other words, when something happens, you want something else to happen in response. Cloud Functions also works well between clouds or even Google Cloud services, acting as the glue between them.
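The event-driven idea, "when something happens, something else happens in response," can be sketched in a few lines. This is an illustrative stand-in, not the actual Cloud Functions signature or event payload; the function and field names are made up:

```python
# A hypothetical background function: a file lands in a storage bucket (the
# event), and our code runs in response. Real handlers might publish to
# Pub/Sub, resize an image, or update a database.
def on_file_uploaded(event: dict, context=None) -> str:
    bucket = event.get("bucket", "<unknown>")
    name = event.get("name", "<unknown>")
    return f"Processing gs://{bucket}/{name}"

# Locally, we can simulate the trigger by calling the handler with a fake event.
print(on_file_uploaded({"bucket": "uploads", "name": "report.csv"}))
# → Processing gs://uploads/report.csv
```

The platform's job is everything around the function: wiring the trigger, scaling instances with event volume, and billing only for execution time.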
Go support brings all of this to Go developers specifically. Google makes a huge effort to make Cloud Functions easy to use for all developers, so that no matter what language you’re familiar with, Cloud Functions works for you.
We’re back! This week, Mark welcomes Gabi as his new co-host! Listen in as they discuss Knative with Mark Chmarny and Ville Aikas.
So what is Knative? Mark and Ville explain that Knative is basically a way to simplify Kubernetes for developers. This way, developers can focus on writing good code without worrying about all the aspects of Kubernetes, such as deploying and autoscaling; Knative handles these functions automatically. Knative also supports many languages, which allows developers to bring their own stack. The day-to-day of developing doesn’t change, which is the beautiful thing about Knative!
Knative is open source and easy to deploy. Developers can find installation guides online for any certified Kubernetes service. A link to the installation guide for Knative on GKE is in our show notes.
Happy Holidays, everyone! Melanie and Mark wrap up a great year by reminiscing about some of their favorite episodes! We also talk about the big news of the year, our favorite articles, and what’s coming up for the GCP Podcast in 2019.
Melanie and Mark talk with Google Cloud’s VP of Engineering, Melody Meckfessel, this week. In her time with Google Cloud, she and her team have worked to uncover what makes developers more productive. The main focus of their work is DevOps, defined by Melody as automation around the developer workflow and culture. In other words, Melody and her team are discovering new ways for developers to interact and how those interactions can encourage their productive peak.
Melody and her team have used their internal research and expanded it to collaborate with Google Cloud partners and open source projects. The sharing of research and products has created even faster innovation as Google learns from these outside projects and vice versa.
In the future, Melody sees amazing engagement with the community and even better experiences with containers on GCP. She is excited to see the Go community growing and evolving as more people use it and give feedback. Melody also speaks about diversity, encouraging everyone to be open-minded and try to build diverse teams to create products that are useful for all.
Melanie is solo this week talking with Anima Anandkumar, a Caltech Bren professor and director of ML research at NVIDIA. We touch on tensors, their use, and how they relate to TensorFlow. Anima also details the work she does with NVIDIA and how they are helping to advance machine learning through hardware and software. Our main discussion centers around AI and machine learning research conferences, specifically the Neural Information Processing Systems conference (commonly referred to as NIPS) and the reason they have rebranded.
NIPS originally started as a small conference at Caltech. As deep learning became more and more popular, it grew exponentially. With the higher attendance and interest, the acronym became center stage. Sexual innuendos and harassing puns surrounded the conference, sparking a call for a name change. At first, conference organizers were reluctant to rebrand and they used recent survey results as a reason to keep NIPS.
Anima discusses her personal experience protesting the acronym, opening up about the hate speech and threats that she and others received. Despite the harassment, Anima and others continued to protest, petition, and share stories of mistreatment within the community, which helped lead to the name change to NeurIPS. The rebranding hopes to reestablish an inclusive academic community and move the focus back to machine learning research and away from unprofessional attention.
Happy Thanksgiving! On this episode, Mark and Melanie learn about Grace.health with co-founders Therese Mannheimer and Roman Jasins. Grace.health’s goal is to give women control of information about their own bodies, allowing them to make informed healthcare decisions. Grace.health is a female health companion that lets women track and plan their periods, fertility, and ask questions. It is a global undertaking, hoping to reach not just the tech savvy woman, but all women in all markets worldwide.
Stigmas and taboos around the world portray periods as dirty and contagious, preventing women from being able to work, go to school, or even sleep in the house. Grace.health’s goal is to educate people to help limit these superstitions and allow women to live fuller lives. With Grace.health, women know when their periods are coming or when they are ovulating so they can make the proper plans.
In the long term, Grace.health hopes to be a tool that not only helps women identify any health concerns but also helps them find a healthcare professional and get the treatment necessary.
Viktor Gamov is on the podcast today to discuss Confluent and Kafka with Mark and special first-time guest host, Michelle. Viktor spends time with Mark and Michelle explaining how Kafka allows you to stream and process data in real time, and how Kafka helps Confluent with its advanced streaming capabilities. Confluent Cloud helps connect Confluent and cloud platforms such as Google Cloud so customers don’t have to manage anything - Confluent takes care of it for you!
To wrap up the show, Michelle answers our question of the week about Next 2019.
Joanna Smith and Alicia Williams talk G Suite with Mark and Melanie this week! G Suite is Google’s collection of apps to help make working easier. It includes things like Gmail, Docs, Sheets, Calendar, and more and is designed to be collaborative. It’s customizable, allowing users to adjust the programs to their needs and be more effective – including integrating it with Google Cloud! G Suite has an active community of developers building add-ons to increase functionality as well.
Happy Halloween! On this not-so-spooky episode of the Google Cloud Podcast, Melanie and Mark talk with Tony Aiuto of Bazel. Bazel grew from Google’s internal build system, Blaze, to become the open source Bazel that it is today. The aim of the project is to perform very large, multi-language builds quickly.
On the podcast today, we have two more fascinating interviews from Melanie’s time at Deep Learning Indaba! Mark helps host this episode as we speak with Karim Beguir and Muthoni Wanyoike about their company, Instadeep, the wonderful Indaba conference, and the growing AI community in Africa.
Instadeep helps large enterprises understand how AI can benefit them. Karim stresses that it is possible to build advanced AI and machine learning programs in Africa because of the growing community of passionate developers and mentors for the new generation. Muthoni tells us about Nairobi Women in Machine Learning and Data Science, a community she is heavily involved with in Nairobi. The group runs workshops and classes for AI developers and encourages volunteers to participate by sharing their knowledge and skills.
Mark and Melanie speak with Patrick Flynn and Mike Eltsufin about their exciting new Java products for Google Cloud. Mike tells us all about the new Spring Cloud GCP, a helpful tool that integrates Google Cloud Platform APIs and the Spring Framework. Patrick elaborates on his team’s new tool, Jib, a Java container image builder, and how it helps Java developers.
Melanie and Mark celebrate their 150th episode this week with a high-energy interview of mutual friend, KF, at Strange Loop. KF gives her perspective on Strange Loop, working remotely, and distributed systems. She compliments Strange Loop for the diversity it has achieved as the conference has grown. She laments the lack of introductory material for distributed systems learners, saying it’s not as complicated as everyone thinks but needs more educational material for beginners! In general, she believes everyone could benefit from some code study, especially if you can find a good mentor. KF also gives us some great tips for working remotely and staying effective and social.
Today, Melanie brings you another great interview from her time at Deep Learning Indaba in South Africa. She was joined by Yabebal Fantaye and Jessica Phalafala for an in-depth look at the deep learning research happening on the continent.
At the African Institute for Mathematical Sciences, the aim is to gather together minds from all over Africa and the world to not only learn but to use their distinct perspectives to contribute to research that furthers the sciences. Our guests are both part of this initiative, using their specialized skills to expand the abilities of the group and stretch the boundaries of machine learning, mathematics, and other sciences.
Yabebal elaborates on the importance of AIMS and Deep Learning Indaba, noting that the more people can connect with each other, the more confidence they will gain. Jessica points out how this research in Africa can do more than just advance science. By focusing on African problems and solutions, machine learning research can help increase the GDP and economic standards of a continent thought to be “behind”.
In our last (but not least!) interview from NEXT, Mark and Melanie talked with Sivan Aldor-Noiman and Erik Andrejko about Wellio, an awesome new platform that combines AI and healthy eating. Wellio was developed as a way to not only educate users on the importance of proper nutrition for well-being but to give them their own personal nutritionist.
The data scientists at Wellio started from scratch (pun intended) to create their own food-related database and then began training models so the data could be organized and personalized. Using a combination of human power and machine learning techniques, Wellio learns your preferences, allergies, diets, etc. and will make healthy decisions for you based on these key facts. It chooses recipes, populates a grocery list, and even has the ingredients delivered to your door in time for dinner!
This week we are bringing you a couple of interviews from last week’s Deep Learning Indaba conference. Dr. Vukosi Marivate, Andrea Bohmert and Yasin(i) Musa Ayami talk about the burgeoning machine learning community, research, companies and AI investment landscape in Africa. While Mark is at Google Cloud Next in Tokyo, Melanie is joined by special guest co-hosts Nyalleng Moorosi and Willie Brink.
Vukosi and Yasin(i) share how Deep Learning Indaba is playing an important role in recognizing and growing machine learning research and companies on the African continent. We also discuss Yasin(i)’s prototype app, Tukuka, and how it won the Maathai Award, which is given to individuals who are a positive force for change. Tukuka is being built to help economically disadvantaged women in Zambia gain access to financial resources that are currently unavailable to them. Andrea rounds out the interviews by giving us a VC perspective on the AI start-up landscape in Africa and how it compares to other parts of the world. As Nyalleng says at the end, AI is happening in Africa and has great potential for impact.
Jeff Dean, the lead of Google AI, is on the podcast this week to talk with Melanie and Mark about AI and machine learning research, his upcoming talk at Deep Learning Indaba, and how his educational pursuit of parallel processing and computer systems led his career path into AI. We covered topics from his team’s work with TPUs and TensorFlow, to the impact computer vision and speech recognition are having on AI advancements, to how simulations are being used to help advance science in areas like quantum chemistry. We also discussed his passion for developing AI talent on the African continent and the opening of Google AI Ghana. It’s a full episode where we cover a lot of ground. One piece of advice he left us with: “the way to do interesting things is to partner with people who know things you don’t.”
Listen for the end of the podcast where our colleague, Gabe Weiss, helps us answer the question of the week about how to get data from IoT core to display in real time on a web front end.
Our guest today is Dr. Mario Lassnig, a software engineer working on the ATLAS Experiment at CERN! Melanie and Mark put on their physics hats as they learn all about what it takes to manage the petabytes of data involved in such a large research project.
This week we learn how Mercari is handling its migration from an on-prem monolithic infrastructure to a cloud microservices architecture with GKE. Terry and Taichi share with Melanie and Mark what drove the decision to change, the challenges involved, and what the team has learned from the transition. The real value of this change has been making the platform more scalable as they grow to meet the needs of their millions of daily active users. It’s another great interview we captured at Google NEXT.
Mark and Melanie are your hosts again this week as we talk with Steren Giannini and Stewart Reichling about what’s new with App Engine, particularly its new second-generation runtimes, which bring headless Chrome, better language support, and automatic scalability to make your life easier. App Engine also has an interesting way of inspiring new Google products. Tune in to learn more!
Mark Mandel is in the guest seat today as Melanie and our old pal Francesc interview Cyril Tovena of Ubisoft and Mark about Agones. We discuss dedicated game servers and their importance in game performance, how Agones can make hosting and scaling dedicated game servers easier to manage, and the future of Agones. Cyril and Mark elaborate on Ubisoft’s relationship with Google and how it’s progressing the world of gaming. Listen in!
On this episode of the podcast we continue a conversation we started with Haben Girma, an advocate for equal rights for people with disabilities, regarding the value of tech accessibility. Melanie and Mark talk with her about common challenges and best practices when considering accessibility in technology design and development. Bottom line - we need one solution that works for all.
It’s the third and final day for us at NEXT, and Mark and Melanie are wrapping up with some great interviews! First, we spoke with Stephanie Cueto and Vivian San of Techtonica, a San Francisco non-profit. Next, Liz Fong-Jones and Nikhita Raghunath joined us for a quick discussion about open source and Stackdriver, and last but not least, Robert Kubis helped us close things out by sharing what it means to do DevRel at this event.
Day two of NEXT was another day full of interesting interviews! Melanie and Mark sat down for quick chats with Haben Girma about accessibility in tech and Paresh Kharya to talk about NVIDIA. Next, we touched base with Amruta Gulanikar and Simon Zeltser to learn more about Windows SQL Server and .NET workloads on Google Cloud. The interviews wrap up with Henry Hsu & Isaac Wong of Holberton.
On this very special episode of the Google Cloud Platform Podcast, we have live interviews from the first day of NEXT! Melanie and Mark had the chance to chat with Melody Meckfessel, VP of Engineering at Google Cloud, and Pavan Srivastava of Deloitte. Next we spoke with Sandeep Dinesh about Open Service Broker and Raejeanne Skillern of Intel.
On this episode of the podcast, Mark and Melanie delve into the fascinating world of robotics and reinforcement learning. We discuss advances in the field, including how robots are learning to navigate new surroundings and how machine learning is helping us understand the human mind better.
On this episode of the podcast, Melanie and Mark talk with Emiliano (Emi) Martínez to learn more about how VirusTotal is helping to create a safer internet by providing tools and building a community for security researchers.
Happy 4th of July! Today, Melanie and Mark go in depth with Brett Bibby and Micah Baker to learn more about Unity and its new strategic alliance with Google Cloud. We explore how an alliance between Google Cloud and Unity means easier development for game creators and better gaming for fans.
Brahim Elbouchikhi and Sachin Kotwani talk with Melanie and Mark about Firebase’s ML Kit and how it enables machine learning on mobile and cloud apps. We delve into why ML Kit was developed, how it makes machine learning easier, what it’s used for now and plans for the future.
Thadeu Luz from Hand Talk shares with Melanie and Mark how the free Hand Talk education application translates and interprets spoken and written Portuguese into Brazilian Sign Language (aka LIBRAS or BSL). The application uses an animated avatar, Hugo, to deliver the signs through gestures and facial expressions, and it’s built on a statistical machine translation system and Firebase. Future plans include expanding into other languages, with a priority on ASL, and they welcome support.
Juliet Hougland and Michelle Casbon are on the podcast this week to talk about data science with Melanie and Mark. We had a great discussion about methodology, applications, tools, pipelines, challenges and resources. Juliet shared insights into the unique data science ownership workflow from idea to deployment at Stitch Fix, and Michelle dove into how Kubeflow is playing a role to help drive reliability in model development and deployment.
Mandy Waite joins Mark and Melanie to share what developer relations is and how trust and empathy are key to its success. We discuss meeting developers where they are and the wide variety of communities that exist across the technology ecosystem.
Mark and Melanie are joined by Sarah Novotny, Head of Open Source Strategy for Google Cloud Platform, to talk all about open source, the Cloud Native Computing Foundation, and their relationships to Google Cloud Platform.
Nick Sullivan and Adam Langley join Melanie and Mark to provide a pragmatic view on post-quantum cryptography and what it means to research security in anticipation of quantum computing. Post-quantum cryptography is about developing algorithms that are resistant to quantum computers used in conjunction with “classical” computers. It’s about looking at the full picture of potential threats and planning how to address them, drawing on a diversity of types of mathematics in the research. Adam and Nick help clarify the different terminology and techniques applied in the research and give a practical understanding of what to expect from a security perspective.
Jessica Forde, Yuvi Panda, and Chris Holdgraf join Melanie and Mark to discuss Project Jupyter, from its interactive notebook origin story to the various open source modular projects it has grown into, supporting data research and applications. We dive specifically into JupyterHub, which uses Kubernetes to enable a multi-user server. We also talk about Binder, an interactive development environment that makes work easily reproducible.
Bryan Catanzaro, the VP of Applied Deep Learning Research at NVIDIA, joins Mark and Melanie this week to discuss how his team uses applied deep learning to make NVIDIA products and processes better. We talk about parallel processing and compute with GPUs, as well as his team’s research into using deep learning to change how graphics, text, and audio are created and rendered.
This week we are also joined by a special co-host, Sherol Chen who is a developer advocate on GCP and machine learning researcher on Magenta at Google. Listen at the end of the podcast where Mark and Sherol chat about all things GDC.
Product Manager Morgan McLean and Software Engineer JBD join Melanie and Mark this week to discuss the new open source project OpenCensus, a single distribution of libraries for metrics and distributed tracing with minimal overhead that allows you to export data to multiple backends.
Dr. Fei-Fei Li, the Chief Scientist of AI/ML at Google joins Melanie and Mark this week to talk about how Google enables businesses to solve critical problems through AI solutions. We talk about the work she is doing at Google to help reduce AI barriers to entry for enterprise, her research with Stanford combining AI and health care, where AI research is going, and her efforts to overcome one of the key challenges in AI by driving for more diversity in the field.
We have the pleasure this week of having Miles Ward, Director of Solutions for Google Cloud, and Cloud Solutions Architect Grace Mollison join Mark and Melanie to discuss solutions architects: what they do and how they interact with customers on Google Cloud Platform.
In this episode, Google Play Marketing is the customer of Google Cloud Platform. Melanie and Mark chat with Dom Elliott (Google Play) and Stewart Bryson (Red Pill Analytics) about how they use our big data processing and visualization tools to introspect what is happening in the Google Play ecosystem.
This week, we dive into machine learning bias and fairness from a social and technical perspective with machine learning research scientists Timnit Gebru from Microsoft and Margaret Mitchell (aka Meg, aka M.) from Google.
They share with Melanie and Mark the ongoing efforts and resources to address bias and fairness, including diversifying datasets, applying algorithmic techniques, and expanding research team expertise and perspectives. There is no simple solution to the challenge, and they give insights into what work is in progress in the broader community and where it is going.