
Figure 13. Data Collection UI. Left: Unity-based UI. Middle: The operator selects the area where the object is to be picked up. Right: The operator can adjust the placement of the object and rotate it. The depth image around the pick-up point is overlaid and helps the operator align the placement.


Figure 14. Autonomous Mode UI. User inputs the task to execute (by unique task code), and presses the play button to begin autonomous execution.


Images courtesy of Andy Zeng's research paper

Google DeepMind Robotics Research Project
(previously known as Google Brain Robotics)
Platform: Windows Desktop
Engine: Unity

My Roles: UX Engineer, Simulation Engineer

During my tenure at Google, I played a central role in developing Unity-based solutions for enhancing user experiences and planning robotic arm movements. My responsibilities spanned the entire project lifecycle, from ideation to execution, with a focus on C# and the Unity engine's integrated UI system.
 
One of my key achievements was creating tailored user experiences for precise robotic arm movement, enabling tasks such as item manipulation and positioning to be executed reliably. I delivered a series of prototypes and contributed to the visual design of the robotic trajectory safety systems, which included end-effector and world-space geometry editing. These systems were shaped by feedback from designers, engineers, and researchers, and by the results of successive prototypes and user tests.
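
To give a flavor of the kind of check the safety system performed, here is a minimal, hypothetical sketch (names and thresholds are illustrative, not the production code) that validates a planned end-effector path against operator-edited keep-out volumes in Unity:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch: rejects end-effector paths that pass through
// operator-edited keep-out volumes (world-space geometry editing).
public class TrajectorySafetyChecker : MonoBehaviour
{
    // Keep-out volumes placed and resized by the operator in the UI.
    public List<BoxCollider> keepOutZones = new List<BoxCollider>();

    // Approximate end-effector radius used to pad the clearance check.
    public float endEffectorRadius = 0.05f;

    // Returns true only if every waypoint keeps clear of all keep-out volumes.
    public bool IsPathSafe(IReadOnlyList<Vector3> endEffectorWaypoints)
    {
        foreach (Vector3 waypoint in endEffectorWaypoints)
        {
            foreach (BoxCollider zone in keepOutZones)
            {
                // ClosestPoint returns the waypoint itself if it lies inside the volume.
                Vector3 closest = zone.ClosestPoint(waypoint);
                if ((closest - waypoint).sqrMagnitude <= endEffectorRadius * endEffectorRadius)
                {
                    Debug.LogWarning($"Waypoint {waypoint} violates keep-out zone {zone.name}");
                    return false;
                }
            }
        }
        return true;
    }
}
```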
 
In addition, I spearheaded a separate project to build a 1:1 simulation environment in Unity, mirroring the robotic workstation described above. This in-house simulation system let us recreate real-world robotic workstations, providing a valuable platform to test client scenarios and validate their applicability in practical settings. To keep the development environment robust, I led the establishment of end-to-end integration testing, executed with Unity's Test Runner in a headless cloud build configuration.
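
As a rough illustration (the real tests are internal, and all names here are hypothetical), an end-to-end test built on the sketch above might look like this when run through the Unity Test Runner; the batch-mode command in the comment is the standard way to run such tests headlessly:

```csharp
using System.Collections;
using System.Collections.Generic;
using NUnit.Framework;
using UnityEngine;
using UnityEngine.TestTools;

// Hypothetical PlayMode integration test. It can be run headlessly with
// Unity's command-line test runner, e.g.:
//   Unity -batchmode -nographics -runTests -testPlatform PlayMode -testResults results.xml
public class WorkstationIntegrationTests
{
    [UnityTest]
    public IEnumerator Path_ThroughKeepOutZone_IsRejected()
    {
        // Arrange: a one-cube "workstation" whose collider doubles as a keep-out zone.
        GameObject cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
        var checker = cube.AddComponent<TrajectorySafetyChecker>();
        checker.keepOutZones.Add(cube.GetComponent<BoxCollider>());
        yield return null; // let the colliders initialize

        // Act: a straight-line path that passes through the cube.
        var path = new List<Vector3>
        {
            new Vector3(-1f, 0f, 0f),
            Vector3.zero,
            new Vector3(1f, 0f, 0f),
        };

        // Assert: the safety check flags the path as unsafe.
        Assert.IsFalse(checker.IsPathSafe(path));

        Object.Destroy(cube);
    }
}
```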
 
Through my contributions to these projects, I not only expanded the capabilities of Unity in the realm of robotics but also played a crucial role in validating and optimizing real-world applications, thereby enhancing the overall efficiency and effectiveness of the robotic UI client. My work exemplified my dedication to innovation, attention to detail, and ability to navigate complex technical challenges within the Unity development framework.

 

My Part in bullets:

• Collaborated as a UX Engineer to craft a comfortable, efficient mixed reality user interface for robotics with machine learning, using Unity and C# for client development and UNIX for tooling

• Prototyped features in Unity to improve user flows, developer tools, and client features in VR and on desktop

• Owned development and support for regression client tests with Unity test runner on Google’s internal testing platform

• Led simulation recreations and scenarios for potential customer engagements, demo testing, and research/ML environments

• Architected client code using MVC/MVP patterns and unit testing, and worked through the complete development lifecycle (see the MVP sketch after this list)

• Worked in a tight loop with stakeholders and machine learning experts to develop user experiences in VR and on desktop, so the robotics interface could adapt to and learn from many types of robotic tasks while expanding on AI solutions
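
To show how the MVC/MVP split kept client logic testable without the engine, here is a minimal, hypothetical MVP sketch loosely modeled on the autonomous-mode screen in Figure 14 (all class and member names are illustrative):

```csharp
using System;

// Hypothetical MVP sketch: the presenter owns the logic, while the view stays a
// thin Unity layer behind an interface, so the presenter can be covered by
// plain edit-mode unit tests.

// Model: the task the operator wants the robot to execute.
public class TaskRequest
{
    public string TaskCode;
    public bool IsRunning;
}

// View contract, implemented by a MonoBehaviour that wires up the Unity UI widgets.
public interface IAutonomousModeView
{
    event Action<string> PlayPressed;   // raised with the task code the user entered
    void ShowStatus(string message);
    void SetPlayButtonEnabled(bool enabled);
}

// Presenter: plain C#, no UnityEngine dependency.
public class AutonomousModePresenter
{
    private readonly IAutonomousModeView view;
    private readonly TaskRequest request = new TaskRequest();

    public AutonomousModePresenter(IAutonomousModeView view)
    {
        this.view = view;
        view.PlayPressed += OnPlayPressed;
    }

    private void OnPlayPressed(string taskCode)
    {
        if (string.IsNullOrWhiteSpace(taskCode))
        {
            view.ShowStatus("Enter a task code first.");
            return;
        }

        request.TaskCode = taskCode;
        request.IsRunning = true;
        view.SetPlayButtonEnabled(false);
        view.ShowStatus($"Executing task {taskCode}...");
        // A real presenter would hand the request off to the robot-control service here.
    }
}
```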

I was a TVC at Google (temp, vendor, or contractor, not a full-time Google employee), but worked full time onsite with Google employees on a Google Brain research team.

Andy Zeng's published research paper showing a portion of the work I did on this team