Research, Vision Expertise

Thinking Machines Lab's mission is to empower humanity through advancing collaborative general intelligence. We're building a future where everyone has access to the knowledge and tools to make AI work for their unique needs and goals. 

We are scientists, engineers, and builders who’ve created some of the most widely used AI products, including ChatGPT and Character.ai; open-weights models like Mistral; and popular open-source projects like PyTorch, OpenAI Gym, Fairseq, and Segment Anything.

About the Role

Thinking Machines builds multimodal-first. We’re looking for new team members to advance the science of visual perception and multimodal learning. We think about how vision and language interact at scale. We design architectures that fuse pixels and text, build datasets and evaluation methods that test real-world comprehension, and develop representations that let models ground abstract concepts in the physical world. Our goal is to create multimodal systems that support seamless integration into real-world environments.

You’ll work at the intersection of visual understanding, multimodal reasoning, and large-scale model training. You’ll help develop the architectures, data, and evaluation tools that teach AI to see, understand, and collaborate. The ideal candidate is curious about multimodal interfaces, has experience running large-scale experiments, and is comfortable contributing to complex engineering systems. While we are looking for a person with expertise in multimodality, Thinking Machines Lab operates in a unified fashion and expects new hires to work across modalities as one team.

This role blends fundamental research and practical engineering, as we do not distinguish between the two roles internally. You will be expected to write high-performance code and read technical reports. It’s an excellent fit for someone who enjoys both deep theoretical exploration and hands-on experimentation, and who wants to shape the foundations of how AI learns.

Note: This is an "evergreen role" that we keep open on an ongoing basis to express interest in this research area. We receive many applications, and there may not always be an immediate role that aligns perfectly with your experience and skills. Still, we encourage you to apply. We continuously review applications and reach out to applicants as new opportunities open. You are welcome to reapply as you gain more experience, but please avoid applying more than once every 6 months. We may also post individual roles for specific project or team needs; in those cases, you're welcome to apply to them directly in addition to an evergreen role.

What You’ll Do

• Own research projects on training and performance analysis of multimodal AI models.

• Curate and build large-scale datasets and evaluation benchmarks to advance vision capabilities.

• Work with our data infrastructure engineers, pretraining researchers and engineers, and product team to create frontier multimodal models and the products that leverage them.

• Publish and present research that moves the entire community forward. Share code, datasets, and insights that accelerate progress across industry and academia.

Skills and Qualifications

Minimum qualifications:

• Ability to design, run, and analyze experiments thoughtfully, with demonstrated research judgment and empirical rigor.

• Understanding of machine learning fundamentals, large-scale training, and distributed compute environments.

• Proficiency in Python and familiarity with at least one deep learning framework (e.g., PyTorch, TensorFlow, or JAX). Comfortable with debugging distributed training and writing code that scales.

• Bachelor’s degree or equivalent experience in Computer Science, Machine Learning, Physics, Mathematics, or a related discipline with strong theoretical and empirical grounding.

• Clear communication, including the ability to explain complex technical concepts in writing.

Preferred qualifications — we encourage you to apply even if you meet only some of these:

• Research or engineering contributions in visual reasoning, spatial understanding, or multimodal architecture design.

• Experience developing evaluation frameworks for multimodal tasks.

• Publications or open-source contributions in vision-language modeling, video understanding, or multimodal AI.

• A strong grasp of probability, statistics, and ML fundamentals. You can look at experimental data and distinguish between real effects, noise, and bugs.

• PhD in Computer Science, Machine Learning, Physics, Mathematics, or a related discipline with strong theoretical and empirical grounding; or, equivalent industry research experience.

Logistics

• Location: This role is based in San Francisco, California. 

• Compensation: Depending on background, skills and experience, the expected annual salary range for this position is $350,000 - $475,000 USD.

• Visa sponsorship: We sponsor visas. While we can't guarantee success for every candidate or role, if you're the right fit, we're committed to working through the visa process together.

• Benefits: Thinking Machines offers generous health, dental, and vision benefits, unlimited PTO, paid parental leave, and relocation support as needed.

As set forth in Thinking Machines' Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.

About the Company

Thinking Machines

Thinking Machines Lab is an artificial intelligence research and product company focused on making advanced AI systems more accessible, customizable, and capable for a wide range of users. The company is dedicated to bridging the gap between rapidly advancing AI capabilities and the broader scientific community’s understanding of these systems. Its team includes scientists and engineers who have contributed to some of the most widely used AI products and open-source projects, such as ChatGPT, Character.ai, Mistral, PyTorch, OpenAI Gym, Fairseq, and Segment Anything. Thinking Machines Lab emphasizes building AI that works collaboratively with humans, prioritizing flexibility, adaptability, and personalization to support a broad spectrum of real-world applications.

A key aspect of the company’s philosophy is a commitment to open science and community collaboration. Thinking Machines Lab frequently publishes technical blog posts, papers, and code, believing that sharing knowledge accelerates both public benefit and internal research culture. The company values solid infrastructure, empirical approaches to AI safety, and the co-design of research and products to ensure their work delivers genuine value. For potential team members, Thinking Machines Lab offers the opportunity to work at the forefront of AI innovation, contribute to impactful open-source projects, and help shape the future of human-AI collaboration in a supportive, mission-driven environment.
Type: Full-time
Department: Research
Location: San Francisco