Posted by HyeJung Lee and MJ You, ML Ecosystem Community Managers
Google Developers Experts (GDE) is a community of passionate developers who love to share their knowledge with others. Many of them specialize in Machine Learning (ML).
Here are some highlights showcasing the ML GDEs' achievements from last quarter, which contributed to the global ML ecosystem. If you are interested in becoming an ML GDE, please scroll down to see how you can apply!
At I/O this year, we held two ML Developers Meetups (America/APAC and EMEA/APAC). Merve Noyan/Yusuf Sarıgöz (Turkey), Sayak Paul/Bhavesh Bhatt (India), Leigh Johnson/Margaret Maynard-Reid (USA), David Cardozo (Colombia), and Vinicius Caridá/Arnaldo Gualberto (Brazil) shared their experiences developing ML products with TensorFlow, Cloud AI, or JAX, and introduced projects they are currently working on.
After I/O, many ML GDEs posted recap summaries of I/O on their blogs. Chansung Park (Korea) outlined the ML keynote summary, while US-based Victor Dibia wrapped up the Top 10 Machine Learning and Design Insights from Google IO 2021.
Vertex AI, which was announced at I/O, was a hot topic of conversation. Minori Matsuda from Japan wrote a Japanese article titled “Introduction of powerful Vertex AI AutoML Forecasting.” Similarly, Piero Esposito (Brazil) posted an article titled “Serverless Machine Learning Pipelines with Vertex AI: An Introduction,” including a tutorial on fully customized code. India-based Sayak Paul co-authored a blog post right after the Vertex AI announcement, discussing its key pieces and showing how to run a TensorFlow training job on Vertex AI.
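Those posts walk through the platform in detail; as a rough sketch of what submitting a custom TensorFlow training job to Vertex AI can look like with the google-cloud-aiplatform Python SDK, here is a minimal example. The project ID, bucket, script name, and container image below are placeholders, not taken from the articles above.

```python
# Minimal sketch: submit a custom TensorFlow training script to Vertex AI.
# Assumes the google-cloud-aiplatform SDK is installed and that the project,
# bucket, script, and container URI below are replaced with real values.
from google.cloud import aiplatform

aiplatform.init(
    project="my-gcp-project",                  # placeholder project ID
    location="us-central1",
    staging_bucket="gs://my-staging-bucket",   # placeholder staging bucket
)

job = aiplatform.CustomTrainingJob(
    display_name="tf-training-example",
    script_path="train.py",                    # your local TensorFlow training script
    # One of Vertex AI's prebuilt TensorFlow training containers (pick your version).
    container_uri="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-8:latest",
)

# Run the script on a single n1-standard-4 worker.
job.run(machine_type="n1-standard-4", replica_count=1)
```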
Communities such as Google Developer Groups (GDG) and TensorFlow User Groups (TFUG) held extended events where speakers further discussed different ML topics from I/O, including China-based Song Lin’s presentation on TensorFlow highlights and application experiences from I/O, which drew 24,000 online attendees. Chansung Park (Korea) also gave a presentation on what Vertex AI is and what you can do with it.
Leigh Johnson (USA) wrote an article titled Soft-launching an AI/ML Product as a Solo Founder, covering GCP AutoML Vision, GCP IoT Core, TensorFlow Model Garden, and TensorFlow.js. The article details the journey of a solo founder developing an ML product for detecting print failures in 3D printers (more on this story is coming up soon, so stay tuned!).
Demos and code examples from Victor Dibia (USA)’s New York Taxi project, Minori Matsuda (Japan)’s article on AutoML and AI Platform Notebooks, Srivatsan Srinivasan (USA)’s video tutorials, Sayak Paul (India)’s Distributed Training in TensorFlow with AI Platform & Docker, and Chansung Park (Korea)’s curated personal newsletter were all published on the Google Cloud blog.
Aqsa Kausar (Pakistan) gave a talk about Explainable AI in Google Cloud at the International Women’s Day Philippines event. She explained why it is important and where and how it is applied in ML workflows.
Finally, ML Lab by Robert John from Nigeria introduces the ML landscape on GCP, covering everything from BigQuery ML through AutoML to TensorFlow and AI Platform.
Eliyar Eziz (China) published the book “TensorFlow 2 with real-life use cases”. Gant Laborde from the US authored the book “Learning TensorFlow.js,” published by O'Reilly, and wrote an article, “No Data No Problem - TensorFlow.js Transfer Learning,” about seeking out new datasets to boldly train where no models have trained before. He also published “A Riddikulus Dataset,” which talks about creating a Harry Potter dataset.
Hong Kong-based Guan Wang published a research paper, “Iterated Dilated Convolutional Neural Networks for Word Segmentation,” covering a state-of-the-art performance improvement, implemented in TensorFlow with Keras.
Elyes Manai from Tunisia wrote an article, “Become a TensorFlow Certified Developer,” a guide to the TensorFlow certificate with preparation tips.
Greece-based George Soloupis wrote a tutorial, “Fine-tune a BERT model with the use of Colab TPU,” on how to fine-tune a BERT model trained specifically on the Greek language to perform the downstream task of text classification, using Colab’s TPU (v2-8).
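For readers curious what that workflow looks like, here is a minimal sketch of connecting to a Colab TPU and fine-tuning a BERT classifier; it assumes TensorFlow 2.x and Hugging Face Transformers, and the Greek checkpoint, placeholder data, and hyperparameters are illustrative assumptions rather than details from George's tutorial.

```python
# Minimal sketch (not from the tutorial): fine-tune a BERT text classifier on a Colab TPU.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# Connect to the Colab TPU and create a distribution strategy.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Tiny placeholder dataset; replace with your real Greek text-classification data.
texts = ["Καλό προϊόν", "Κακή εμπειρία"] * 4
labels = [1, 0] * 4

# Greek BERT checkpoint used here as an illustrative assumption.
model_name = "nlpaueb/bert-base-greek-uncased-v1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
enc = tokenizer(texts, padding="max_length", truncation=True, max_length=32, return_tensors="tf")
# Fixed-size batches with drop_remainder keep shapes static, which TPUs require.
train_ds = tf.data.Dataset.from_tensor_slices((dict(enc), labels)).batch(8, drop_remainder=True)

with strategy.scope():
    model = TFAutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5),
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )

model.fit(train_ds, epochs=3)
```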
India-based Aakash Nain has published the TF-JAX tutorial series (Part 1, Part 2, Part 3, Part 4), aiming to teach everyone the building blocks of the TensorFlow and JAX frameworks.
The online meetup TensorFlow and JAX by Tzer-jen Wei from Taiwan covered an introduction to JAX and its use cases. It also touched on different ways of writing TensorFlow models and training loops.
João Guilherme Madeia Araújo (Brazil) published the YouTube video Neural Networks, with a practical example written in JAX, probably the first JAX tech talk in Portuguese.
Sayak Paul from India contributed a lot of Keras examples; some of them are listed below.
The notebook “Simple Bayesian Ridge with Sentence Embeddings” by Ertuğrul Demir (Turkey) covers a natural language processing task using BERT fine-tuning followed by simple linear regression on top of sentence embeddings generated by transformers.
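As a rough outline of that idea (skipping the BERT fine-tuning step), here is a minimal sketch that embeds sentences with a transformer and fits a Bayesian ridge regressor on top; the texts, targets, and model name are placeholders, not taken from the notebook.

```python
# Minimal sketch (not Ertuğrul's notebook): regression on sentence embeddings.
# Assumes sentence-transformers and scikit-learn are installed.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import BayesianRidge

texts = ["great product", "terrible experience", "it was okay"]  # placeholder texts
targets = [5.0, 1.0, 3.0]                                        # e.g., review scores

# Encode each sentence into a fixed-size embedding vector.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
X = encoder.encode(texts)

# Fit a simple Bayesian ridge regressor on top of the embeddings.
regressor = BayesianRidge()
regressor.fit(X, targets)

print(regressor.predict(encoder.encode(["pretty good overall"])))
```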
Youhan Lee from Korea gave a talk about “Learning machine learning and TensorFlow with Kaggle competition”. He explained how to use the Kaggle platform for learning ML.
Advances in machine learning and deep learning research are changing our technology, and many ML GDEs are interested in and contributing to this research.
Karim Beguir (UK) co-authored a paper with the DeepMind team covering a novel compositional approach using deep reinforcement learning to solve robotics manipulation tasks. The paper was accepted at a NeurIPS workshop.
Finally, Sayak Paul from India, together with Pin-Yu Chen, published a research paper, “Vision Transformers are Robust Learners,” covering the robustness of the Vision Transformer (ViT) against common corruptions and perturbations, distribution shifts, and natural adversarial examples.
If you want to know more about the Google Experts community and their global open-source ML contributions, please check out the GDE Program website, visit the GDE Directory, and connect with GDEs on Twitter and LinkedIn. You can also meet them virtually on the ML GDEs’ YouTube channel!