Published on Mar 26, 2024

Content Adversarial Red Team Manager, Trust and Safety

Full-time / Seattle, WA / On-site

About the job

The application window will be open until at least April 1, 2024. The posting may be taken down before or after that date, depending on business needs.

Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Seattle, WA, USA; Atlanta, GA, USA; Austin, TX, USA; Boulder, CO, USA; Washington D.C., DC, USA.

Minimum qualifications:

  • Bachelor's degree or equivalent practical experience.

  • 10 years of experience in technology, red teaming, policy, cybersecurity, anti-abuse, Trust and Safety, or related fields.

  • 1 year of experience in people management, leading a team or teams.


Preferred qualifications:

  • Experience with adversarial testing of online consumer products.

  • Experience coordinating on complex projects across functions and teams.

  • Experience setting up playbooks for repeatable T&S operations or program management experience in a T&S environment.

  • Experience measuring impact in hard-to-define topic areas.

  • Experience in data analysis and working with large datasets and/or LLMs.

  • Ability to work non-standard hours as needed to support escalations.


As a Manager in Trust and Safety (T&S), you lead a team responsible for protecting Google and its users by fighting abuse and fraud for at least one Google product. You ensure trust and reputation not only for that product, but also for Google as a broader brand and company. You are a strategic leader who works globally and cross-functionally with internal stakeholders through effective relationship building, influence, and communication. You demonstrate analytical thinking through data-driven decisions, and you have the technical know-how, charisma, and ability to work with your team to make a big impact.

The Content Adversarial Red Team (CART) in Trust & Safety Intelligence is a new team that will use unstructured, persona-based adversarial testing techniques to identify 'unknown unknowns' and new or unexpected loss patterns in Google's premier generative AI products. CART will work alongside product, policy, and enforcement teams to proactively detect harm patterns and help build the safest possible experiences for Google users.

In this role, you will be exposed to graphic, controversial, and/or upsetting content.

At Google, we work hard to earn our users' trust every day. Trust & Safety is Google's team of abuse-fighting and user-trust experts working daily to make the internet a safer place. We partner with teams across Google to deliver bold solutions in abuse areas such as malware, spam, and account hijacking. A diverse team of Analysts, Policy Specialists, Engineers, and Program Managers, we work to reduce risk and fight abuse across all of Google's products, protecting our users, advertisers, and publishers across the globe in over 40 languages.

The US base salary range for this full-time position is $156,000-$234,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. The range displayed on each job posting reflects the minimum and maximum target salaries for the position across all US locations. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.

Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.

Responsibilities

  • Manage a team of analysts. Set up processes that ensure CART findings are systematically given to the appropriate teams for review and mitigation. Create and track impact metrics that comprehensively capture the risk from the vulnerabilities detected by CART.

  • Develop playbooks for the adversarial personas that CART will deploy based on individual research and consultation with internal and external experts.

  • Create a set of standardized guidelines that can make the team’s testing and outputs reliably valuable without stifling the creativity required by red teaming. Monitor and research emerging abuse vectors for generative AI from open web and specialized sources.

  • Apply insights for creative prompting of Google generative AI tools like Gemini, Search Generative Experience, and Vertex API.



Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

About Company

A problem isn't truly solved until it's solved for all. Googlers build products that help create opportunities for everyone, whether down the street or across the globe. Bring your insight, imagination and a healthy disregard for the impossible. Bring everything that makes you unique. Together, we can build for everyone. Check out our career opportunities at goo.gle/3DLEokh

Total Employees: 279,871
Company 2-Year Growth: 15%
Median Employee Tenure: 4.3 years
