Senior Applied Scientist

Overview

On Site
USD 117,200.00 - 229,200.00 per year
Full Time

Skills

Bridging
FOCUS
Language models
Software development
Python
Java
Scala
C#
C
C++
Statistics
Computer science
Machine Learning (ML)
Natural language processing
Open source
Presentations
IC
Legal
Recruiting
Design
API
Generative Artificial Intelligence (AI)
Science
Research
Art
Modeling
Reporting
Leadership
Policies
Artificial intelligence
Microsoft
Use cases
Software deployment

Job Details

Are you seeking opportunities at the intersection of generative artificial intelligence (AI), such as large-scale multimodal models like GPT-4o, and responsible AI, trust, and safety? Do you want to be a member of an interdisciplinary team of researchers, applied scientists, and software engineers? Do you embrace complex sociotechnical challenges? Microsoft Research is looking to hire a Senior Applied Scientist to join our team bridging responsible AI research and practice. The team focuses on sociotechnical alignment of generative AI systems, with a particular focus on measuring risks, including risk systematization, risk annotation, dataset creation, and metric design. As a member of this team, you'll develop central resources and tooling, you'll partner with Microsoft product teams who wish to be proactive about responsible AI, and you'll contribute to and/or drive research projects intended to advance the state-of-the-art. Successful candidates will have experience working with large-scale language models and/or multimodal models, and will be passionate about prioritizing diversity, inclusion, and fairness.

Qualifications

Required Qualifications:
  • Bachelor's degree in computer science or a related field (e.g., statistics, information science) and 4+ years of experience working with AI, ML, and/or NLP. This can include product experience, industry research, open-source contributions, or academic research (but not coursework).
  • OR Master's degree in computer science or a related field and 3+ years of experience working with AI, ML, and/or NLP. This can include product experience, industry research, open-source contributions, or academic research (but not coursework).
  • OR PhD in computer science or a related field and 1+ year(s) of experience working with AI, ML, and/or NLP. This can include product experience, industry research, open-source contributions, or academic research (but not coursework).
  • OR equivalent experience.
  • Experience putting responsible AI principles (e.g., fairness, transparency) into practice.
  • Coding skills in a general-purpose coding language (e.g., Python, Java, Scala, C#, or C/C++).


Preferred Qualifications:
  • Master's degree in computer science or a related field (e.g., statistics, information science) and 6+ years of experience working with AI, ML, and/or NLP. This can include product experience, industry research, open-source contributions, or academic research (but not coursework).
  • OR PhD in computer science or a related field and 3+ years of experience working with AI, ML, and/or NLP. This can include product experience, industry research, open-source contributions, or academic research (but not coursework).
  • OR equivalent experience.
  • 3+ years of experience contributing to or driving peer-reviewed academic papers.
  • 1+ year(s) of experience presenting at conferences in the research or industry communities.
  • 3+ years of experience conducting research as part of an academic or industry research program.
  • 1+ year(s) of experience developing and deploying production systems, as part of a product team.
  • Experience working at the intersection of generative AI and responsible AI, trust, and safety.
  • Experience approaching AI risks as sociotechnical challenges rather than purely algorithmic ones.
  • Experience working with products and services that incorporate generative AI.
  • Experience working productively on an interdisciplinary team spanning multiple roles.
  • Track record of prioritizing diversity and inclusion in the workplace.


Applied Sciences IC4 - The typical base pay range for this role across the U.S. is USD $117,200 - $229,200 per year. A different range applies to specific work locations within the San Francisco Bay Area and New York City metropolitan area; the base pay range for this role in those locations is USD $153,600 - $250,200 per year.

Certain roles may be eligible for benefits and other compensation.

Microsoft will accept applications for the role until September 22, 2024.

Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you need assistance and/or a reasonable accommodation due to a disability during the application or the recruiting process, please send a request via the Accommodation request form .

Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work.

#Research

Responsibilities

  • You'll be a member of an interdisciplinary team of experts on AI risks.
  • You'll design and run experiments that use API calls to generative AI systems.
  • You'll monitor research advances and draw on them to shape your applied science work.
  • You'll contribute to and/or drive research projects intended to advance the state-of-the-art.
  • You'll learn new skills and apply them as needed: e.g., you might be asked to learn about measurement modeling and produce a report on a dataset's content validity.
  • You'll work with stakeholders with a variety of backgrounds in a variety of roles.
  • You'll present your work to internal and external stakeholders, including Microsoft leadership.
  • You'll develop central resources and tooling for identifying, measuring, and mitigating AI risks:
  • You'll work with policy and engineering teams to systematize AI risks.
  • You'll build and validate annotation guidelines and datasets for measuring AI risks.
  • You'll work with Microsoft product teams to generalize these resources and methods to be appropriate for a variety of systems, use cases, and deployment contexts.