Posted by Toni Klopfenstein, Developer Advocate
When creating scalable applications, consistent and reliable monitoring of resources is a valuable tool for any developer. Today we are releasing enhanced analytics and logging for Smart Home Actions. This feature enables you to more quickly identify and respond to errors or quality issues that may arise.
Request Latency Dashboard
You can now access the smart home dashboard with pre-populated metrics charts for your Actions on the Analytics tab in the Actions Console, or through Cloud Monitoring. These metrics help you quantify the health and usage of your Action, and gain insight into how users engage with your Action. You can view:
Successful Requests Dashboard
Cloud Logging provides detailed logs based on the events observed in Cloud Monitoring.
We've added additional features to the error logs to help you quickly debug why intents fail, which particular device commands malfunction, or whether your local fulfillment falls back to cloud fulfillment.
New details added to the event logs include:
You can also export these logs through Cloud Pub/Sub and build log-based metrics and alerts so your development teams can gain insight into common issues.
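If you route exported logs to a Pub/Sub topic with a log sink, a small subscriber can forward them to your own tooling. The sketch below uses the @google-cloud/pubsub Node.js client; the subscription name smart-home-error-logs-sub is a placeholder for whichever subscription your sink feeds.

import { PubSub } from '@google-cloud/pubsub';

// Placeholder subscription name; point this at the subscription attached to
// the Pub/Sub topic your Cloud Logging sink exports to.
const subscriptionName = 'smart-home-error-logs-sub';

const pubsub = new PubSub();
const subscription = pubsub.subscription(subscriptionName);

subscription.on('message', (message) => {
  // Each message carries one exported log entry as JSON.
  const entry = JSON.parse(message.data.toString());
  console.log('Smart home log entry:', entry.severity, entry.jsonPayload);
  message.ack();
});

subscription.on('error', (err) => {
  console.error('Subscription error:', err);
});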
For more guidance on accessing your Smart Home Action analytics and logs, check out the developer guide or watch the video.
We want to hear from you! Continue sharing your feedback with us through the issue tracker, and engage with other smart home developers in the /r/GoogleAssistantDev community. Follow @ActionsOnGoogle on Twitter for more of our team's updates, and tweet using #AoGDevs to share what you’re working on. We can’t wait to see what you build!
Posted by Toni Klopfenstein, Developer Relations
Over the past year, we've been focused on building new tools and features to support our smart home developer community. Though we weren't able to engage with you in person at Google I/O, we are pleased to announce the "Hey Google" Smart Home Virtual Summit on July 8th - an opportunity for us to come together and dive into the exciting new and upcoming features for smart home developers and users.
Join us in the keynote where Michele Turner, Product Management Director of the Smart Home Ecosystem, will share our recent smart home product initiatives and how developers can benefit from these capabilities. She will also introduce new tools that make it easier for you to develop with Google Assistant. We will also be hosting a partner panel, where you can hear from industry leaders on how they are navigating the impact of COVID-19 and their thoughts on the state of the industry.
Registration is FREE! Head on over to the Summit website to register and check out the schedule. Events will be held at EMEA-, APAC-, and AMER-friendly times. We hope to see you and your colleagues there!
Posted by Rajat Paharia, Product Lead, AR Platform
Since the launch of ARCore, our developer platform for building augmented reality (AR) experiences, we've been focused on providing APIs that help developers seamlessly blend the digital and physical worlds.
At the end of last year, we announced a preview of the ARCore Depth API, which uses our depth-from-motion algorithms to generate a depth map with a single RGB camera. Since then, we’ve been working with select collaborators to explore how depth can be used across a range of use cases to enhance AR realism.
Today, we're taking a major step forward and announcing the Depth API is available in ARCore 1.18 for Android and Unity, including AR Foundation, across hundreds of millions of compatible Android devices.
Generate a depth map without specialized hardware to unlock capabilities like occlusion
As we highlighted last year, a key capability of the Depth API is occlusion: the ability for digital objects to accurately appear behind real world objects. This makes objects feel as if they’re actually in your space, creating a more realistic AR experience.
Illumix, the game studio behind Five Nights at Freddy’s AR: Special Delivery, uses occlusion to deepen the realism of the experience by allowing certain characters to hide behind objects for more startling jump scares.
Play Five Nights at Freddy’s AR: Special Delivery
While occlusion is an important capability, the ARCore Depth API unlocks more ways to increase realism and enables new interaction types. The ARCore Depth Lab spurred more ideas on how depth can be used, including realistic physics, surface interactions, environmental traversal, and more. Developers can now build on these ideas through the open-source GitHub project.
Experiment with ARCore Depth Lab on the Google Play Store
The designers and engineers at Snap Inc. integrated several of these ideas into a set of Snapchat Lenses, including the Dancing Hotdog and a new Android-exclusive Undersea World Lens.
See how depth can add a layer of realism to your Snapchat experience
Snapchat Lens Creators can now download an ARCore Depth API template to create depth-based experiences for compatible Android devices. Sam Hare, Research Engineering Manager at Snap Inc., expressed his excitement, “We’re beginning to understand what kinds of depth capabilities are exciting for developers to build with. This single integration point streamlines and simplifies the development process and enables Lens Studio developers to easily take advantage of advanced depth capabilities.”
Another app that combines occlusion with other depth capabilities is Lines of Play, an Android experiment from the Google Creative Lab. Lines of Play lets users create domino art in AR, and uses depth information to showcase both occlusion and collisions. Design elaborate domino creations, topple them over and watch them collide with the furniture and walls in your room.
Watch as domino pieces topple into each other and onto your walls with Lines of Play
In addition to gaming and self-expression, depth can also be used to unlock new utility use cases. For example, the TeamViewer Pilot app, a remote assistance solution that enables AR annotations on video calls, uses depth to better understand the environment so experts around the world can more precisely apply real time 3D AR annotations for remote support and maintenance.
3D annotations help experts accurately highlight details in the TeamViewer Pilot app
Later this year, you will be able to try more depth-enabled AR experiences such as SKATRIX by Reality Crisis and SPLASHAAR by ForwARdgames, which use surface interactions and environmental traversal to make rich use of the environment around you.
Check out surface interactions and environmental traversal in SKATRIX, and SPLASHAAR
While depth sensors, such as time-of-flight (ToF) sensors, are not required for the Depth API to work, having them will further improve the quality of experiences. Dr. Soo Wan Kim, Camera Technical Product Manager at Samsung, commented on the future that the Depth API and ToF unlock, saying, “Depth will enrich user's AR experience in many perspectives. It will reduce scanning time, and can detect planes fast, even low textured planes. These will bring seamless experiences to users who will be able to use AR apps more easily and frequently.” In the coming months, Samsung will update their Quick Measure app to use the ARCore Depth API on the Galaxy Note10+ and Galaxy S20 Ultra.
Accurately measure with Quick Measure
To learn more and get started with the ARCore Depth API, get the SDK and visit the ARCore developer website.
Posted by Thye Yeow Bok, Developer Relations Program Manager
Apply now for the Google for Startups Accelerator: Southeast Asia
In the last few months, COVID-19 has ushered in an era of profound changes to the way we live and work, causing businesses to rethink strategies and product roadmaps. At the forefront of this change are startups, stepping up to solve for new and unforeseen challenges as they always have done — with agility, innovative technology, and resilience.
The Southeast Asia startup ecosystem has always been a hotbed for creativity and innovation. This part of the world has a rich history of homegrown entrepreneurs armed with solutions, oftentimes growing their local ideas into global companies. Startups are primed to develop solutions for the unique challenges we face today, and we are committed to supporting them in that effort.
Today, I’m thrilled to announce that applications are open for Google for Startups Accelerator: Southeast Asia. This is a three-month online accelerator program for high-potential, early-stage tech startups across the Southeast Asia region (Indonesia, Singapore, Malaysia, Thailand, Vietnam and the Philippines) and Pakistan. And this year, we’re looking particularly for startups that are solving for the challenges we face today: whether that’s startups looking at new healthcare, education, finance or logistics solutions in light of social distancing restrictions; using AI, ML or data analysis in meaningful ways; or using technology to make the world more inclusive for the elderly or people with disabilities.
Previously known as the Google Launchpad Accelerator, this program continues our longstanding commitment to help startups solve specific technical challenges with Google support and resources. As part of the Google for Startups Accelerator, selected founders outline the top challenges their startup is facing, and are paired with relevant experts from Google and the industry to solve those challenges. Participating startups receive deep mentorship on both technical and business challenges as well as connections to relevant teams from across Google and its network of industry partners. In addition to mentorship and technical project support, the accelerator also includes deep dives and workshops focused on product design, customer acquisition, and leadership development for founders.
Applications are now open through July 19th, 2020 for startups across Southeast Asia (Indonesia, Singapore, Malaysia, Thailand, Vietnam and the Philippines) and also in Pakistan.
We know that if startups succeed, our communities and economies do, too. We look forward to working with the next generation of founders and innovators who will help shape our economic recovery, and build a stronger long-term future in Southeast Asia and beyond.
Posted by Priyanka Vergadia, Developer Advocate
Google Cloud is a cloud computing platform that can be used to build and deploy applications. It allows you to take advantage of the flexibility of development while scaling the infrastructure as needed.
I'm often asked by developers to provide a list of Google Cloud architectures that help them get started on their cloud journey. Last month, I decided to start a mini-series on Twitter called “#13DaysOfGCP" where I shared the most common use cases on Google Cloud. I have compiled the list of all 13 architectures in this post. Some of the topics covered are hybrid cloud, mobile app backends, microservices, serverless, CI/CD, and more. If you were not able to catch it, or if you missed a few days, here is the summary!
Series kickoff #13DaysOfGCP
Day 1
Day 2
Day 3
Day 4
Day 5
Day 6
Day 7
Day 8
Day 9
Day 10
Day 11
Day 12
Day 13
Wrap up!
We hope you enjoy this list of the most common reference architectures. Please let us know your thoughts in the comments below!
Posted by Soc Sieng, Developer Advocate
The Google Pay API enables fast, simple checkout for your website.
The Google Pay JavaScript library does not depend on external libraries or frameworks and will work regardless of which framework your website uses (if it uses any at all). While this ensures wide compatibility, we know that it doesn’t necessarily make it easier to integrate when your website uses a framework. We’re doing something about it.
React is one of the most widely used tools for building web UIs, so we are launching the Google Pay Button for React to provide a streamlined integration experience. This component will make it easier to incorporate Google Pay into your React website whether you are new to React or a seasoned pro, and whether this is your first Google Pay integration or you’ve done this before.
We’re making this component available as an open source project on GitHub and publishing it to npm. We’ve authored the React component with TypeScript to bring code completion to supported editors, and if your website is built with TypeScript you can also take advantage of type validation to identify common issues as you type.
Get real-time code completion and validation as you integrate with supported editors.
The first step is to install the Google Pay button module from npm:
npm install @google-pay/button-react
The Google Pay button can be added to your React component by first importing it:
import GooglePayButton from '@google-pay/button-react';
And then rendering it with the necessary configuration values:
<GooglePayButton
  environment="TEST"
  paymentRequest={{ ... }}
  onLoadPaymentData={() => {}}
/>
Try it out for yourself on JSFiddle.
Refer to component documentation for a full list of supported configuration properties.
Note that you will need to provide a Merchant ID in paymentRequest.merchantInfo to complete the integration. Your Merchant ID can be obtained from the Google Pay Business Console.
Your Merchant ID can be found in the Google Pay Business Console.
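To make the paymentRequest shape concrete, here is a sketch of a TEST-environment configuration; the gateway name, merchant IDs, prices, and handler behavior are placeholders you would replace with your own values, so treat this as an illustration rather than a drop-in integration.

import React from 'react';
import GooglePayButton from '@google-pay/button-react';

const App = () => (
  <GooglePayButton
    environment="TEST"
    paymentRequest={{
      apiVersion: 2,
      apiVersionMinor: 0,
      allowedPaymentMethods: [
        {
          type: 'CARD',
          parameters: {
            allowedAuthMethods: ['PAN_ONLY', 'CRYPTOGRAM_3DS'],
            allowedCardNetworks: ['MASTERCARD', 'VISA'],
          },
          tokenizationSpecification: {
            type: 'PAYMENT_GATEWAY',
            parameters: {
              gateway: 'example', // placeholder gateway
              gatewayMerchantId: 'exampleGatewayMerchantId',
            },
          },
        },
      ],
      merchantInfo: {
        merchantId: '12345678901234567890', // placeholder; use the ID from the Business Console
        merchantName: 'Demo Merchant',
      },
      transactionInfo: {
        totalPriceStatus: 'FINAL',
        totalPrice: '1.00',
        currencyCode: 'USD',
      },
    }}
    onLoadPaymentData={(paymentData) => {
      // Send paymentData.paymentMethodData to your backend for processing.
      console.log('Payment data loaded', paymentData);
    }}
  />
);

export default App;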
We also want to provide an improved developer experience for our developers using other frameworks, or no framework at all. That’s why we are also releasing the Google Pay button Custom Element.
Custom elements are a great fit here because they work with plain HTML and JavaScript as well as with most popular web frameworks.
Like the React component, the Google Pay button custom element is hosted on GitHub and published to npm. In fact, the React component and the custom element share the same repository and a large portion of code. This ensures that both versions maintain feature parity and receive the same level of care and attention.
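If you're not using React, a minimal sketch of wiring up the custom element from script could look like the following; the package name @google-pay/button-element, the google-pay-button tag, and the loadpaymentdata event name shown here are assumptions to verify against the element's documentation in the shared repository.

import '@google-pay/button-element';

// Create the button element and configure it imperatively; the cast to `any`
// keeps this sketch framework- and typings-agnostic.
const button = document.createElement('google-pay-button') as any;
button.environment = 'TEST';
button.paymentRequest = {
  // ... same payment request configuration as the React example above ...
};
button.addEventListener('loadpaymentdata', (event: Event) => {
  console.log('Payment data loaded', (event as CustomEvent).detail);
});
document.body.appendChild(button);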
Try it out on JSFiddle.
There's no change to the existing Google Pay JavaScript library, and if you prefer, you can continue to use this directly instead of the React component or custom element. Both of these components provide a convenience layer over the Google Pay JavaScript library and make use of it internally.
This is the first time that we (the Google Pay team) have released a framework specific library. We would love to hear your feedback.
Aside from React, most frameworks can use the Web Component version of the Google Pay Button. We may consider adding support for other frameworks based on interest and demand.
If you encounter any problems with the React component or custom element, please raise a GitHub issue. Alternatively, if you know what the problem is and have a solution in mind, feel free to raise a pull request. For other Google Pay related requests and questions, use the Contact Support option in the Google Pay Business Console.
Today, we're expanding support for the Local Home SDK to the Google Nest Wifi routers with the latest firmware update to M81. The Local Home SDK we recently launched allows you to create a local fulfillment path for your smart home Action. Local fulfillment provides lower latency and higher reliability for your smart home Action.
By adding support for the Node.js runtime on the Nest Wifi routers, the Local Home platform is now compatible with the full Nest Wifi system. This update means your local execution application can run on a self-healing mesh wireless network, and your users gain the benefits of expanded, reliable home automation coverage.
To support this additional runtime, we've updated the Actions Console to enable you to add the Node.js on-device testing URL. The Nest Wifi routers will automatically receive the node-targeted bundle.js files you've already uploaded during deployment of your Action. Since Chrome DevTools has built-in Node.js support, your development flow doesn't require any additional tools for inspecting your Node.js app or debugging your smart home Action.
We have updated the developer guide and tools to help guide you through the various local fulfillment runtimes and the features of these tools. For additional guidance on enabling local fulfillment for your smart home Action, check out the Enable local fulfillment for smart home Actions codelab. The API reference and samples can also help you build your first local fulfillment app.
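To give a feel for the shape of such an app, here is a minimal TypeScript sketch written against the Local Home SDK typings (the @google/local-home-sdk npm package); the device and verification IDs are placeholders, and the handler bodies only indicate where your device discovery and command logic would go.

/// <reference types="@google/local-home-sdk" />

const app = new smarthome.App('1.0.0');

app
  .onIdentify(async (request) => {
    // Inspect the scan data for the device found on the local network and
    // report its identity back to the Local Home platform.
    const device = request.inputs[0].payload.device;
    console.log('IDENTIFY request for device', device.id);
    return {
      requestId: request.requestId,
      intent: smarthome.Intents.IDENTIFY,
      payload: {
        device: {
          id: device.id || '',
          verificationId: 'local-device-id', // placeholder local identifier
        },
      },
    };
  })
  .onExecute(async (request) => {
    // Send the command to the device over the local network and report the
    // result; the actual command logic is omitted from this sketch.
    const response = new smarthome.Execute.Response.Builder()
      .setRequestId(request.requestId);
    // ... issue local commands, then call setSuccessState()/setErrorState() ...
    return response.build();
  })
  .listen()
  .then(() => console.log('Local fulfillment app ready'));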
Posted by the Assistant Developer Platform team
Since the launch of the Google Assistant, our developer ecosystem has been instrumental in delivering compelling voice experiences to more than 500 million active users. Today, we’re taking a major step forward in helping you build these custom voice apps and services by introducing a suite of new and improved developer tools: Actions Builder and Actions SDK. These tools make building Conversational Actions for the Assistant easier and more streamlined than ever.
Actions Builder is a web-based IDE that lets you develop, test, and deploy directly in the Actions console. The graphical interface lets you visualize the conversational flow, manage Natural Language Understanding (NLU) training data, and debug with advanced tools.
For those of you who prefer local IDEs, the updated Actions SDK provides a file-based representation of your Actions project. This lets you author NLU training data and conversational flows locally, as well as bulk import and export training data. We've also updated the CLI that accompanies the Actions SDK, so you can build and manage Actions projects completely with code, using your favorite source control and continuous integration tools.
Together, Actions Builder and Actions SDK create a seamless, consolidated development experience. No matter what tool you start with, you can switch between them based on what works best for your workflow. For example, you can use Actions Builder to lay out conversational flows and provide NLU training data, Actions SDK to write fulfillment code, and the CLI to synchronize the two. These tools create an environment where all team members can contribute effectively and focus on what they do best: design and code.
A new, powerful interaction model lets you design conversations quickly and efficiently. Intents and scenes let you define robust NLU training data and behavior for specific conversational contexts. Using scenes as building blocks, you define active intents, declare context specific error handling, collect data through slot filling, and respond with prompts.
Scenes also separate conversational flow definitions from fulfillment logic, so you can reuse the same flows across multiple conversations. Transitions between scenes let you define when one conversational context switches to another. All your scenes and transitions describe a full conversational flow and all possible dialog turns.
You can express the entire interaction model with either the Actions Builder or Actions SDK. A typical way to develop is to use Actions Builder to view and edit your scenes and then use Actions SDK to sync changes to your local file system. This lets you version control your project, modify your project files, and build fulfillment in your favorite development environment.
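As a small illustration of the fulfillment side, here is a minimal webhook sketch using the @assistant/conversation Node.js library deployed as a Cloud Function for Firebase; the handler name 'greeting' and the exported function name are illustrative assumptions, and a scene in your Actions Builder project would reference whatever handler names you define.

import { conversation } from '@assistant/conversation';
import * as functions from 'firebase-functions';

const app = conversation();

// 'greeting' is a placeholder handler name; a scene in the Actions Builder
// project calls it when its conditions match.
app.handle('greeting', (conv) => {
  conv.add('Welcome! This response comes from your webhook fulfillment.');
});

// Expose the conversation app as the HTTPS endpoint configured as the
// project's webhook.
export const fulfillment = functions.https.onRequest(app);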
Under the hood, we also made a lot of improvements that your users will appreciate. We sped up the Assistant runtime engine, so users get faster responses and a smoother experience. We’ve also made the runtime engine smarter, so your Actions can understand users better with the same amount of training data.
We've worked with Pretzel Labs and Galinha Pintadinha to test the capabilities of the new platform and to refine the interaction model and runtime engine improvements.
Pretzel Labs built Kids Court with Actions Builder, creating a full conversational flow with no code and adding fulfillment for advanced functionality.
"Having the combination of a visual layout with webhook blocks for code helps us collaborate clearly and more efficiently. Something I liked very much about this was the separation between the designer and the developers' parts, making it very intuitive to make design changes without affecting backend logic." -- Adva Levin, founder of Pretzel Labs
Galinha Pintadinha runs one of the biggest YouTube channels and built one of the most popular Conversational Actions in their country. Their development team migrated to the new platform to optimize their workflow and simplify future Action development. Galinha Pintadinha’s Actions now contain half the number of intents and have a radically simplified conversation tree. Using features like contextual error handling, they were able to improve the user experience and quality with little to no cost.
"Actions Builder is a robust and well designed toolbox for developing conversational apps. The concept of scenes and transitions helped us define the flow of our Action in a much more streamlined way."-- Mário Neto, engineer at Galinha Pintadinha
To learn more about Actions Builder and SDK and to start developing your next Actions, check out our new developer resources. Our codelabs will walk you through using the new tooling and interaction model. Samples for all major features are also available, so you can start playing with code immediately. See the full set of documentation to start building today.
Stay tuned for more platform updates and happy coding!
Posted by Payam Shodjai, Director of Product Management, Google Assistant
Today at VOICE Global, we shared our vision for Google Assistant to truly be the best way to get things done - and the important role that developers play in that vision, especially as voice experiences continue to evolve.
Google Assistant helps more than 500 million people every month in over 30 languages across 90 countries get things done at home and on the go. What’s at the heart of this growth is the simple insight that people want a more natural way to get what they need. That’s why we’ve invested heavily in making sure Google Assistant works seamlessly across devices and services and offers quick and accurate help.
Over the last few months, we’ve seen people’s needs shifting, and this is reflected in how Google Assistant is being used and the role that it can play to help navigate these changes. For example, to help people get accurate information on Search and Maps - like modified store hours or information on pick-up and delivery - we have been using Duplex conversational technology to contact businesses and update over half a million business listings.
We’ve also been working with our partners to bring great educational experiences into the home, so that families can continue learning in a communal setting. Bamboo Learning is bringing their voice-forward education platform to Google Assistant, with fun, new ways to learn history, math, and reading. Our hand-washing songs continue to be popular. The songs leverage WaveNet's natural expressiveness, allowing us to train Google Assistant to sing in numerous generated voices that users can pick from.
Great experiences are at the core of what makes Google Assistant truly helpful. To help existing and aspiring developers build new experiences with ease, we are making some major improvements to our core platform and development tools. Rather than needing to hop back and forth between Actions Console and Dialogflow to build an Action, wouldn’t it be great if there were one integrated platform for building on Google Assistant?
Starting today, we’re releasing Actions Builder, a new web-based IDE that provides a graphical interface to show the entire conversation flow. It allows you to manage Natural Language Understanding (NLU) training data and provides advanced debugging tools. And, it is fully integrated into the Actions Console so you can now build, debug, test, release, and analyze your Actions - all in one place.
If you prefer to work in your own tools, you can use the updated Actions SDK. For the first time, you’ll have a file-based representation of your Action and the ability to use a local IDE. The SDK not only enables local authoring of NLU and conversation schemas, but also allows bulk import and export of training data to improve conversation quality. The Actions SDK is accompanied by a command-line interface, so you can build and manage an Action fully in code using your favorite source control and continuous integration tools.
With these two releases, we are also introducing a new conversation model and improvements to the runtime engine. Now, it’s easier to design and build conversations and users will get faster and more accurate responses. We’re very excited about this suite of products which replaces Dialogflow as the preferred way to develop conversational actions on Google Assistant.
Based on feedback from developers, we’re also adding new functionality to build more interactive experiences on Google Assistant with Home Storage, updated Media APIs, and Continuous Match Mode.
One of the exciting things about speakers and smart displays is that they’re communal. Home Storage is a new feature that provides a communal storage solution for devices connected on the home graph and allows developers to save context for all individual users, such as the last saved point from a puzzle game.
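As a rough sketch of how this could look in webhook code, the @assistant/conversation library exposes home storage through conv.home.params; the handler names and the lastLevel key below are illustrative placeholders, not part of any specific game.

import { conversation } from '@assistant/conversation';

const app = conversation();

// 'save_progress' is a hypothetical handler name referenced from a scene.
app.handle('save_progress', (conv) => {
  // Home storage persists values for every user on the same home graph.
  conv.home.params.lastLevel = 7; // placeholder value for a saved game level
  conv.add('Progress saved for everyone in this home.');
});

// 'resume_progress' reads the communal value back on a later invocation.
app.handle('resume_progress', (conv) => {
  const lastLevel = conv.home.params.lastLevel ?? 1;
  conv.add(`Picking up from level ${lastLevel}.`);
});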
Our updated Media APIs now support longer-form media sessions and let users resume playback of content across surfaces. For example, users can start playback from a specific moment or resume where they left off in their previous session.
Sometimes you want to build experiences that enable users to speak more naturally with your Action, without waiting for a change in mic states. Rolling out in the next few months, Continuous Match Mode allows Assistant to respond immediately to a user’s speech for more fluid experiences by recognizing defined words and phrases set by you. This is done transparently: before the mic opens, Assistant announces that the mic will stay open temporarily, so users know they can speak freely without waiting for additional prompts. For example, CoolGames is launching a game in a few weeks called "Guess The Drawing" that uses Continuous Match Mode to let users keep guessing what the drawing is until they get it right. The game is also built with Interactive Canvas for a more visual and immersive experience on smart displays.
In addition to making it easy for you to build new experiences for Google Assistant, we also want to bring the depth of great web content together with the simple and robust AMP framework to deliver new experiences on smart displays. AMP allows you to create compelling, smooth websites that deliver a great user experience. AMP-compliant articles are coming to smart displays later this summer with News. Stay tuned for more updates in the coming months as we expand to enable more web content categories for smart displays.
With these tools, we want to empower developers to build helpful experiences of the future with Google Assistant, enabling people to get what they need more simply, while giving them time back to focus on what matters most.
Posted by Greg Wilson, Director of Cloud Developer Advocacy
Google Cloud is excited to announce a global, 24-hour event as part of our new Google Cloud Talks by DevRel Series. The event will begin on June 22 at 5:00 PM Pacific Time (June 23 at 00:00 UTC) and run through June 23 at 5:00 PM Pacific Time.
This 24-hour event will be a way for cloud developers, admins, operators, and analysts globally to participate in interactive sessions, panels, demos, and Q&As with Google Cloud Developer Relations and experts from the community.
We all miss gathering in person to learn, stay updated on new ideas, and connect, which is why no matter what time it is where you are, you can join and share this virtual experience with developers from around the world.
Organized into three regional segments, the event will include live streamed content with everything from Dialogflow and Cloud AI to serverless and Cloud Run. Check out the agenda and register here.
Posted by Stephanie Cuthbertson, Director, Product Management
Editor’s note: The global community of Android developers has always been a powerful force in shaping the direction of the Android platform; each and every voice matters to us. We have cancelled the virtual launch event to allow people to focus on important discussions around racial justice in the United States. Instead, we are releasing the Android 11 Beta today in a much different form, via short-form videos and web pages that you can consume at your own pace when the time is right for you. Millions of developers around the world build their business with Android, and we're releasing the Beta today to continue to support these developers with the latest tools. We humbly thank those who are able to offer their feedback on this release.
Today, we’re unwrapping the Beta release for Android 11 as well as the latest updates for developers, from Kotlin coroutines to progress on the Jetpack Compose toolkit, faster builds in Android Studio, and even a refreshed experience for the Play Console.
You’ve been helping us with feedback on the Android 11 developer previews since February, and today we released the first Beta of Android 11 focused on three key themes: People, Controls, and Privacy.
People: we’re making Android more people-centric and expressive, reimagining the way we have conversations on our phones, and building an OS that can recognize and prioritize the most important people in your life:
Controls: the latest release of Android can now help you quickly get to all of your smart devices and control them in one space:
Privacy: In Android 11, we’re giving users even more control over sensitive permissions and working to keep devices more secure through faster updates.
Developer friendliness: We want to make it easy for developers to take advantage of the new release, so to make compat testing easier, we’ve:
Android 11 also includes a number of other developer productivity improvements, like wireless ADB debugging, ADB incremental for faster installs of large APKs, more nullability annotations on platform APIs (to catch issues at build time instead of runtime), and more.
The first Beta for Android 11 is available today, with final SDK and NDK APIs and new features to try in your apps. If you have a Pixel 2, 3, 3a, or 4 device, enroll here to get Android 11 Beta updates over-the-air. As always, downloads for Pixel and the Android Emulator are also available. To learn about all of the developer features in Android 11, visit the Android 11 developer site.
Over the past several years, the Android team has been hard at work improving the mobile developer experience, to make you more productive. This includes the Android Studio IDE, a great language (Kotlin!), Jetpack libraries to make common tasks easy, and Android App Bundles to improve app distribution. Today we call this modern Android development - bringing you the best of Android to make you as efficient and productive as possible.
Today, we released new features in Android Studio 4.1 Beta and 4.2 Canary, focused on a number of crucial asks from developers:
Try out the latest: Android Studio 4.1 Beta and Android Studio 4.2 Canary.
Languages and libraries are a major area of investment in modern Android development, with Kotlin’s modern, concise language and Jetpack’s opinionated powerful libraries all focused around making you more productive.
With the rise in Kotlin adoption (over 70% of the top 1,000 apps on Google Play now use Kotlin), we can now use the language to simplify your experience in new ways. Kotlin coroutines are a language feature that makes concurrent calls much easier to write and understand. We’re making coroutines our official recommendation, and we’ve built coroutines support into 3 of the most-used Jetpack libraries -- Lifecycle, WorkManager, and Room -- so you can write even better code.
Kotlin itself also continues to get better with every release, thanks to the awesome team at JetBrains. Kotlin 1.4 provides faster code completion, more powerful type inference enabled by default, functional interfaces, as well as helpful quality-of-life improvements like mixing named and positional arguments.
We also continue to push Jetpack forward - a suite of libraries which spans multiple Android releases and is designed to make common mobile development patterns fast and easy. Many of us have long loved Dagger, so we worked with the Dagger team to bring you Hilt, a developer-friendly wrapper on top of Dagger, as a recommended dependency injection solution for Android. You’ll find this in alpha, ready to try out. We’ve also added a second new library, App Startup, to help both app developers and library developers improve app startup time by optimizing initialization of libraries. We have many more updates to existing libraries as well, including a major update to Paging 3, rewritten Kotlin-first with full support for coroutines!
There’s one more thing you need to be super productive — and that’s a powerful UI toolkit to quickly and easily build beautiful UIs on Android, with native access to the platform APIs. That’s why we’re building Jetpack Compose, our new modern UI toolkit that brings your app to life with less code, powerful tools, and intuitive Kotlin APIs.
Today we are launching Jetpack Compose Developer Preview 2, packed with features developers have been asking us for:
We've also added a number of new capabilities to Android Studio 4.2, in close partnership with the JetBrains Kotlin team, to help you build apps with Compose:
Compose isn’t ready for production use yet, in particular as we finish performance optimizations, but we’d love you to give it a try and share feedback. We plan to launch Alpha this summer and 1.0 next year.
Google Play is focused on helping developers grow their business. With that mission in mind, we've redesigned the Google Play Console to help you maximize your success on our platform. In addition to being clearer and easier to use, we've added features to help you:
Learn more about the new Google Play Console in this post or join the beta now at play.google.com/console. Your feedback helps us continue to improve Google Play Console for everyone, so please let us know what you think.
But there’s so much more we’re launching that we didn’t get to talk about!