Principal Engineering Analyst, Content Adversarial Red Team

Google | Mountain View, CA, USA

Minimum qualifications:

  • Bachelor's degree or equivalent practical experience.
  • 7 years of experience in data analysis, including identifying trends, generating summary statistics, and drawing insights from quantitative and qualitative data.
  • 7 years of experience managing projects and defining project scope, goals, and deliverables.

Preferred qualifications:

  • Master's degree or PhD in a relevant quantitative or engineering field.
  • Experience with Large Language Models (LLMs), LLM Operations, prompt engineering, pre-training, and fine-tuning.
  • Ability to think strategically and identify emerging threats and vulnerabilities.
  • Ability to work independently and as part of a team.
  • Ability to influence cross-functionally at various levels, with excellent communication and presentation skills.
  • Excellent problem-solving and critical thinking skills with attention to detail in an ever-changing environment.

About the job

Fast-paced, dynamic, and proactive, YouTube’s Trust & Safety team is dedicated to making YouTube a safe place for users, viewers, and content creators around the world to create and express themselves. Whether understanding and solving their online content concerns, navigating within global legal frameworks, or writing and enforcing worldwide policy, the Trust & Safety team is on the frontlines of enhancing the YouTube experience, building internet safety, and protecting free speech in our ever-evolving digital world.

We are seeking a pioneering expert in AI Red Teaming, with technical proficiency, to shape our approaches to adversarial testing of Google's generative AI products.

You will blend your domain expertise in GenAI red teaming and adversarial testing with technical acumen, driving creative and ambitious testing solutions that ultimately prevent abusive content and abusive uses of our products. You will demonstrate an ability to grow in a dynamic, evolving research and product development environment.

Combining red teaming and technical experience will enable you to design and direct operations, creating innovative methodologies to uncover novel content abuse risks, while supporting the team in the design, development, and delivery of technical solutions to testing and process limitations. You will be a key advisor to executive leadership, leveraging your influence across Product, Engineering, and Policy teams to drive safety initiatives.

You will mentor analysts, fostering a culture of continuous learning and sharing your expertise in adversarial techniques. You will also represent Google's AI safety efforts in external forums, collaborating with industry partners to develop best practices for responsible AI and solidifying our position as a thought leader in the field.

At Google we work hard to earn our users’ trust every day. Trust & Safety is Google’s team of abuse fighting and user trust experts working daily to make the internet a safer place. We partner with teams across Google to deliver bold solutions in abuse areas such as malware, spam, and account hijacking. A team of Analysts, Policy Specialists, Engineers, and Program Managers, we work to reduce risk and fight abuse across all of Google’s products, protecting our users, advertisers, and publishers across the globe in over 40 languages.

The US base salary range for this full-time position is $174,000-$258,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.

Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.

Responsibilities

  • Drive unstructured testing of novel model modalities and capabilities.
  • Bridge technical constraints and red teaming requirements by leading the design, development, and integration of novel platforms, tooling, and engineering solutions to support and scale adversarial testing.
  • Design, develop, and oversee the execution of innovative red teaming strategies that uncover content abuse risks. Create and refine net-new red teaming methodologies, strategies, and tactics.
  • Lead and influence cross-functional teams, including Product, Engineering, Research, and Policy, driving the implementation of strategic safety initiatives. Act as a key advisor to executive leadership on content safety issues, providing actionable insights and recommendations.
  • Be exposed to graphic, controversial, or upsetting content.

Information collected and processed as part of your Google Careers profile, and any job applications you choose to submit, is subject to Google's Applicant and Candidate Privacy Policy.

Google is proud to be an equal opportunity and affirmative action employer. We are committed to building a workforce that is representative of the users we serve, creating a culture of belonging, and providing an equal employment opportunity regardless of race, creed, color, religion, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, veteran status, marital status, pregnancy or related condition (including breastfeeding), expecting or parents-to-be, criminal histories consistent with legal requirements, or any other basis protected by law. See also Google's EEO Policy, Know your rights: workplace discrimination is illegal, Belonging at Google, and How we hire.

If you have a need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

Google is a global company and, in order to facilitate efficient collaboration and communication globally, English proficiency is a requirement for all roles unless stated otherwise in the job posting.

To all recruitment agencies: Google does not accept agency resumes. Please do not forward resumes to our jobs alias, Google employees, or any other organization location. Google is not responsible for any fees related to unsolicited resumes.
