AI Jobs
Find the latest job opportunities in AI and tech
Engineering Manager, Core Machine Learning, Google Cloud
Photomath is a mobile app providing step-by-step solutions to math problems, from arithmetic to calculus, with a freemium pricing model.
Education Requirements:
Bachelor's degree in Computer Science, Mathematics, or a related technical field, or equivalent practical experience
Experience Requirements:
8 years of experience with software development in C++ programming language
5 years of experience with machine learning algorithms and tools (e.g., TensorFlow), artificial intelligence, deep learning, or natural language processing
5 years of experience with design and architecture, and with testing and launching software products
Senior C++ Engineer
Coram AI is a cloud-based video security platform using AI for enhanced search, proactive alerts, and scalable management of security cameras.
Benefits:
Medical, Dental, and Vision insurance
Company equity % in an early-stage startup
100% company-paid private Dental & Vision insurance
Experience Requirements:
3+ years of experience writing production software in C++ and Python, building applications that process real-time data, and optimizing them for latency and memory
Experience using various profiling tools (e.g., gdb, Nsight, Valgrind, flame graph) to optimize the code.
Experience with Docker, CI / CD pipelines.
It would be great if you also have experience with one or more of the following:
Edge/IoT computing (we have a fleet of deployed edge computers)
Infrastructure management (we use Salt)
Monitoring (we use Grafana)
Video processing and streaming (we use GStreamer)
Interfacing with ML models (we use PyTorch)
High intrinsic motivation to succeed and ability to work hard
Responsibilities:
Building edge applications processing vision data and communication layers for the compute-constrained edge devices.
Deploying machine learning models to production (a minimal inference sketch follows this list).
Optimizing the platform runtime for maximum performance. This is largely C++ code with parts of the pipeline running on GPU.
Building observability and telemetry
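As a rough, hypothetical illustration of the model-deployment and latency work described above (not Coram's actual stack), the Python sketch below times inference of an already-exported TorchScript model on dummy frames; the model file name, input resolution, and use of TorchScript are assumptions.

```python
# Hypothetical sketch only: timing inference of an already-exported TorchScript
# model on dummy frames. Model file, input resolution, and frame source are
# assumptions, not Coram's actual pipeline.
import time

import torch


def run_inference(model_path: str, num_frames: int = 100) -> None:
    model = torch.jit.load(model_path)  # pre-exported TorchScript model (assumed)
    model.eval()
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device)

    frame = torch.rand(1, 3, 720, 1280, device=device)  # stand-in for a decoded frame
    latencies = []
    with torch.no_grad():
        for _ in range(num_frames):
            start = time.perf_counter()
            model(frame)
            if device == "cuda":
                torch.cuda.synchronize()  # wait for GPU work before stopping the clock
            latencies.append(time.perf_counter() - start)

    print(f"p50 latency: {sorted(latencies)[len(latencies) // 2] * 1000:.1f} ms")


if __name__ == "__main__":
    run_inference("detector.ts")  # hypothetical model file name
```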
Senior Full Stack Engineer
Coram AI is a cloud-based video security platform using AI for enhanced search, proactive alerts, and scalable management of security cameras.
Benefits:
Medical, Dental, and Vision insurance
Company equity % in an early-stage startup
100% company-paid private Dental & Vision insurance
Experience Requirements:
3+ years of industry experience and having built systems of reasonable scale
You should be familiar with the main backend technologies we use:
Knowledge of either Python or Go
Experience with databases such as Postgres and Redis
You should be familiar and have experience with front-end technologies we use (Typescript, React, react-query, build tools, React component libraries)
It would be great if you also have experience with one or more of the following:
Infrastructure-as-code solutions (we use Pulumi on AWS)
Mobile development (we use React Native)
Platform engineering (C++) or video streaming (a strong bonus, but not required)
High intrinsic motivation to succeed and ability to work hard
Responsibilities:
Design and implement new backend APIs, working with the edge team (a minimal API sketch follows this list).
Implementing user-facing front-end interfaces and working with the product team.
Ensure the system can scale and has full observability.
Delivering high-quality features, testing them well, and debugging issues
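As a hedged illustration of the backend-API work named above, here is a minimal Python sketch using FastAPI; the framework choice, route name, and Camera fields are illustrative assumptions, since the listing only states that the backend uses Python or Go with Postgres and Redis.

```python
# Hypothetical sketch of a small backend endpoint; FastAPI, the route, and the
# Camera fields are illustrative assumptions, not Coram's actual API.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()


class Camera(BaseModel):
    id: int
    name: str
    online: bool


# In-memory stand-in for what would normally be a Postgres query.
CAMERAS = {1: Camera(id=1, name="lobby", online=True)}


@app.get("/cameras/{camera_id}", response_model=Camera)
def get_camera(camera_id: int) -> Camera:
    camera = CAMERAS.get(camera_id)
    if camera is None:
        raise HTTPException(status_code=404, detail="camera not found")
    return camera
```

In a real service this would be served with an ASGI server such as uvicorn and backed by Postgres rather than an in-memory dict; treat it as a placeholder shape, not the company's implementation.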
Account Executive - Sunnyvale, CA (HQ)
Coram AI is a cloud-based video security platform using AI for enhanced search, proactive alerts, and scalable management of security cameras.
Benefits:
Medical, Dental, and Vision insurance
401k (No matching)
Competitive Compensation Package (OTE Salary and Equity)
Experience Requirements:
At least 2-3 years of experience in B2B technical sales in a quota-carrying role
Excellent with pricing negotiation and showing the product value to the customer
Successful track record of net new accounts that are self-sourced
Hunter mentality with the ability to creatively generate leads; SDR experience preferred
Proficient with ZoomInfo, Linkedin Sales Navigator, and HubSpot
Other Requirements:
Proven track record of above target performance
Ability to thrive in challenging and fast paced environments
Responsibilities:
Consistently achieve revenue targets and meet performance metrics
Identify opportunities for growth and generate leads through strategic prospecting
Full Cycle Sales: Prospect, Maintain, Develop and close net new accounts
Heavy outreach: run cold-call and email sequences targeting ICP accounts; write personalized sequences that convert and land demos
Become an expert on the Coram AI platform and independently give demos to prospects
Founding Product Manager
Coram AI is a cloud-based video security platform using AI for enhanced search, proactive alerts, and scalable management of security cameras.
Benefits:
Medical, Dental, and Vision insurance
401k (No matching)
Top-of-market pay (base and equity)
Education Requirements:
BS in Computer Science
Experience Requirements:
Minimum 2 Years of Product Management experience
Product management experience from an early-stage technology startup preferred
Excited about working hard in a fast-moving startup
Excellent communication and leadership abilities
Responsibilities:
Work closely with the CEO on the product roadmap.
Own product features end-to-end: collaborate with the engineering team and product designer to deliver features of the highest quality and at high velocity.
Continuously test the product, find bugs, and proactively suggest improvements.
Partner with sales and end customers to identify gaps in the product and prioritize them with the engineering team.
Become an expert in competitor offerings, analyze all the reasons we may lose a deal, and improve the product to close the gaps
Product Marketing Manager
Coram AI is a cloud-based video security platform using AI for enhanced search, proactive alerts, and scalable management of security cameras.
Benefits:
Medical, Dental, and Vision insurance
401k (No matching)
Top-of-market pay (Base and Equity)
Education Requirements:
BS in engineering
Experience Requirements:
Product marketing experience at an early stage technology startup
Excited about working hard in a fast-moving startup (often requiring 60+ hours a week)
Excellent communication and cross-functional collaboration skills.
Experience working with distributed teams across time zones
Other Requirements:
BS in engineering or a way to demonstrate the ability to understand a technically deep product
Responsibilities:
Own product messaging across the website, email, and social channels.
Take end-to-end ownership of the website, working closely with designers and web developers.
Deeply understand the Coram product and competitive landscape, positioning Coram to stand out from the competition and clearly communicating the product value proposition to target audiences.
Own sales collateral, battle cards, and documentation needed by the sales team to win deals.
Collaborate closely with sales leadership and demand generation to provide content that grows top-of-funnel and helps close deals
Sales Operations Manager
Coram AI is a cloud-based video security platform using AI for enhanced search, proactive alerts, and scalable management of security cameras.
Benefits:
Medical, Dental, and Vision insurance
401k (No matching)
Competitive Compensation Package (Salary and Equity)
Experience Requirements:
4+ years of experience in sales operations, revenue operations, or a similar role.
High intrinsic motivation and desire to work hard
Strong understanding of sales processes and CRM tools (e.g., HubSpot, Salesforce).
Excellent analytical skills with a proven ability to work with large datasets.
Experience creating dashboards and reports to track key sales metrics.
Other Requirements:
Effective communicator with strong organizational and problem-solving skills
Responsibilities:
Manage end-to-end sales process in our CRM
Be the admin of the CRM (we currently use Hubspot)
Build workflows to automate part of the sales process
Build dashboards to report key metrics and derive insights for the sales team
Play a key role in onboarding new AEs and enable them with the right tools
Engineering Manager, Data Infrastructure (US)
Onehouse is a fully managed cloud data lakehouse built on Apache Hudi that ingests data in minutes and supports all query engines.
Benefits:
Competitive Compensation
Equity Compensation
Health & Well-being
Financial Future
Location
Experience Requirements:
8-10+ years of software engineering experience in Java and 5+ years working on data systems, with deep technical knowledge and skills and a habit of staying current with the latest technologies, frameworks, and best practices.
4+ years of experience leading teams building data systems, with a track record of successfully leading and mentoring teams.
You excel in project management, with the ability to plan, execute, and deliver projects on time and within budget.
You are an effective communicator, capable of conveying technical concepts to non-technical stakeholders and ensuring that your team is aligned with the company’s vision and goals.
You are a strategic thinker and a problem solver, always looking for ways to improve processes and outcomes
Responsibilities:
As an engineering leader in the Data Infrastructure team, you will create the vision for and enable the team to build and productionize the next generation of our data tech stack.
Balance a mix of people management and hands-on technical work, with an eye towards building for the long term: set a high bar for technical design and code quality, and develop our engineers into high-performing technical leaders who are skilled and excited to build the next-generation data stack.
Accelerate our open source <> enterprise flywheel by working on the guts of Apache Hudi's transactional engine and optimizing it for diverse Onehouse customer workloads.
Data Infrastructure Engineer (US)
Onehouse is a fully managed cloud data lakehouse built on Apache Hudi that ingests data in minutes and supports all query engines.
Benefits:
Competitive Compensation
Equity Compensation
Health & Well-being
Financial Future
Location
Experience Requirements:
Strong, object-oriented design and coding skills (Java and/or C/C++ preferably on a UNIX or Linux platform).
Experience with inner workings of distributed (multi-tiered) systems, algorithms, and relational databases.
You embrace ambiguous/undefined problems with an ability to think abstractly and articulate technical challenges and solutions.
An ability to prioritize across feature development and tech debt with urgency and speed.
An ability to solve complex programming/optimization problems
Responsibilities:
Design new concurrency control and transactional capabilities that maximize throughput for competing writers.
Design and implement new indexing schemes, specifically optimized for incremental data processing and analytical query performance.
Design systems that help scale and streamline metadata and data access from different query/compute engines (a short PySpark read/write sketch follows this list).
Solve hard optimization problems to improve the efficiency (increase performance and lower cost) of distributed data processing algorithms over a Kubernetes cluster.
Leverage data from existing systems to find inefficiencies, and quickly build and validate prototypes.
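For readers unfamiliar with the lakehouse layer these roles build on, here is a small, hypothetical PySpark sketch that upserts and reads back a Hudi table through the Spark datasource; the table name, fields, and local path are made up, and the option keys should be checked against the Hudi release in use.

```python
# Hypothetical sketch: upserting and reading back a tiny Hudi table with PySpark.
# Assumes the Hudi Spark bundle is on the classpath; the table, fields, and local
# path are made up, and option keys should be checked against the Hudi docs.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("hudi-sketch")
    .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .getOrCreate()
)

base_path = "file:///tmp/hudi_trips"  # illustrative path
df = spark.createDataFrame([("r1", "sf", 1700000000, 12.5)], ["uuid", "city", "ts", "fare"])

hudi_options = {
    "hoodie.table.name": "trips",
    "hoodie.datasource.write.recordkey.field": "uuid",
    "hoodie.datasource.write.partitionpath.field": "city",
    "hoodie.datasource.write.precombine.field": "ts",
}

# Upsert the batch, then read it back through the Hudi datasource.
df.write.format("hudi").options(**hudi_options).mode("append").save(base_path)
spark.read.format("hudi").load(base_path).show()
```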
Staff Software Engineer, Open Source (US)
Onehouse is a fully managed cloud data lakehouse built on Apache Hudi that ingests data in minutes and supports all query engines.
Benefits:
Competitive Compensation
Equity Compensation
Health & Well-being
Financial Future
Location
Experience Requirements:
10+ years building large-scale data systems.
3+ years experience leading teams and working with others to deliver on team goals.
You embrace ambiguous/undefined problems with an ability to think abstractly and articulate technical challenges and solutions.
Positive attitude towards seeking solutions to hard problems, with a bias towards action and forward progress.
An ability to quickly prototype new directions, shape them into real projects and analyze large/complex data.
Responsibilities:
Lead the team building Apache Hudi to design and deliver features/improvements.
Ensure high quality and timely delivery of innovations and improvements in Apache Hudi.
Dive deep into the architectural details of data ingestion, data storage, data processing and data querying to ensure that Apache Hudi is built to be the most robust, scalable and interoperable data lakehouse.
Own discussions and work with open source partners/vendors to troubleshoot issues with Hudi, ensure Hudi support for compute engines like Presto/Trino, and act as the face of Hudi to the community at large via meetups, customer meetings, talks, etc.
Partner with and mentor engineers on the team
Software Engineer (US)
Onehouse is a fully managed cloud data lakehouse built on Apache Hudi that ingests data in minutes and supports all query engines.
Benefits:
Competitive Compensation
Equity Compensation
Health & Well-being
Financial Future
Location
Experience Requirements:
3+ years of experience as a software engineer with experience developing distributed systems.
Strong, object-oriented design and coding skills with Java.
Experience with inner workings of distributed (multi-tiered) systems, algorithms, and relational databases.
Deal well with ambiguous/undefined problems; ability to think abstractly; articulate technical challenges and solutions.
Speed and hustle: ability to prioritize across feature development and tech debt
Responsibilities:
Build systems that enable users to manage petabytes of data with a fully managed cloud service.
Build functionality that enables data systems to be cloud native (self managed), scalable (auto scaling) and secure (different levels of access control).
Build scalable job management on Kubernetes to ingest, store, manage and optimize petabytes of data on cloud storage.
Design systems that help scale and streamline metadata and data access from different query/compute engines.
Exhibit full ownership of product features, including design and implementation, from concept to completion
Senior Software Engineer, Open Source (US)
Onehouse is a fully managed cloud data lakehouse built on Apache Hudi that ingests data in minutes and supports all query engines.
Benefits:
Competitive Compensation
Equity Compensation
Health & Well-being
Financial Future
Location
Experience Requirements:
5-7+ years building large-scale data systems.
You embrace ambiguous/undefined problems with an ability to think abstractly and articulate technical challenges and solutions.
Positive attitude towards seeking solutions to hard problems, with a bias towards action and forward progress.
An ability to quickly prototype new directions, shape them into real projects and analyze large/complex data.
Strong, object-oriented design and coding skills with Java, preferably on a UNIX or Linux platform.
Responsibilities:
Build, design and deliver features/improvements to Apache Hudi.
Ensure high quality and timely delivery of innovations and improvements in Apache Hudi.
Dive deep into the architectural details of data ingestion, data storage, data processing and data querying to ensure that Apache Hudi is built to be the most robust, scalable and interoperable data lakehouse.
Own discussions and work with open source partners/vendors to troubleshoot issues with Hudi, ensure Hudi support for compute engines like Presto/Trino, and act as the face of Hudi to the community at large via meetups, customer meetings, talks, etc.
Partner with and mentor engineers on the team
Tech Lead Manager, Data Infrastructure (US)
Onehouse is a fully managed cloud data lakehouse built on Apache Hudi that ingests data in minutes and supports all query engines.
Benefits:
Competitive Compensation
Equity Compensation
Health & Well-being
Financial Future
Location
Experience Requirements:
8-10+ years of software engineering experience in Java and 5+ years working on data systems, with deep technical knowledge and skills and a habit of staying current with the latest technologies, frameworks, and best practices.
2+ years of experience leading teams building data systems, with a track record of successfully leading and mentoring teams.
You excel in project management, with the ability to plan, execute, and deliver projects on time and within budget.
You are an effective communicator, capable of conveying technical concepts to non-technical stakeholders and ensuring that your team is aligned with the company’s vision and goals.
You are a strategic thinker and a problem solver, always looking for ways to improve processes and outcomes
Responsibilities:
As an engineering leader in the Data Infrastructure team, you will create the vision for and enable the team to build and productionize the next generation of our data tech stack.
Balance a mix of people management and hands-on technical work, with an eye towards building for the long term: set a high bar for technical design and code quality, and develop our engineers into high-performing technical leaders who are skilled and excited to build the next-generation data stack.
Accelerate our open source <> enterprise flywheel by working on the guts of Apache Hudi's transactional engine and optimizing it for diverse Onehouse customer workloads.
Data Platform Engineer (US)
Onehouse is a fully managed cloud data lakehouse built on Apache Hudi that ingests data in minutes and supports all query engines.
Benefits:
Competitive Compensation
Equity Compensation
Health & Well-being
Financial Future
Location
Experience Requirements:
3+ years of experience in building and operating data pipelines in Apache Spark or Apache Flink.
2+ years of experience with workflow orchestration tools like Apache Airflow, Dagster.
Proficient in Java, Maven, Gradle and other build and packaging tools.
Adept at writing efficient SQL queries and troubleshooting query plans.
Experience managing large-scale data on cloud storage.
Responsibilities:
Be the thought leader around all things data engineering within the company - schemas, frameworks, data models.
Implement new sources and connectors to seamlessly ingest data streams.
Build scalable job management on Kubernetes to ingest, store, manage and optimize petabytes of data on cloud storage.
Optimize Spark or Flink applications to flexibly run in batch or streaming modes based on user needs, trading off latency against throughput (a Structured Streaming trigger sketch follows this list).
Tune clusters for resource efficiency and reliability, to keep costs low, while still meeting SLAs
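As a hedged sketch of the batch-vs-streaming trade-off mentioned above, the Python snippet below shows one Structured Streaming pipeline switched between a low-latency micro-batch trigger and a catch-up availableNow run; paths, schema, and the 30-second interval are illustrative assumptions, and availableNow requires Spark 3.3+.

```python
# Hypothetical sketch: one Structured Streaming pipeline run either as a
# low-latency micro-batch stream or as a catch-up batch job via the trigger.
# Paths, schema, and the 30-second interval are illustrative assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("trigger-sketch").getOrCreate()

events = (
    spark.readStream.format("json")
    .schema("id STRING, ts LONG")  # DDL-style schema string
    .load("/data/in")
)


def start(mode: str):
    writer = (
        events.writeStream.format("parquet")
        .option("path", "/data/out")
        .option("checkpointLocation", "/data/chk")
    )
    if mode == "streaming":
        # Micro-batches every 30s: lower end-to-end latency, steadier resource use.
        return writer.trigger(processingTime="30 seconds").start()
    # Process whatever is available, then stop: throughput-oriented catch-up run
    # (availableNow requires Spark 3.3+).
    return writer.trigger(availableNow=True).start()


start("streaming").awaitTermination()
```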
Data Engineering Intern
AI-powered asset management and predictive maintenance platform for the energy industry, increasing efficiency and reducing costs.
Education Requirements:
Currently enrolled or recently graduated with MS or PhD in Computer Science, Electrical Engineering, Statistics, or equivalent fields. Specialization in data engineering preferred.
Experience Requirements:
Strong understanding of data engineering principles using big data technologies
Expertise in relational databases (MSSQL/MySQL/Postgres) and in SQL. Exposure to NoSQL databases such as Cassandra or MongoDB is a plus
Experience with distributed computing using Hadoop and Spark
Exposure to deploying ETL pipelines with tools such as Airflow, AWS Data Pipeline, or AWS Glue
Excellent programming skills in Java, Scala, or Python; Python is preferred. Experience using Tableau for data visualization is a plus
A smart, motivated, can-do attitude is an absolute must; excellent verbal and written communication
Preferred: ability to stay on part-time or full-time as a co-op until May 2020, and ability to demonstrate a portfolio of projects (GitHub, papers, etc.)
Responsibilities:
Participate in developing and deploying fault-tolerant ETL pipelines, perform data visualization, and enable the development of ML algorithms (a minimal orchestration sketch follows)
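As a hedged illustration of the ETL-pipeline and orchestration tooling this listing names (Airflow appears under the experience requirements), here is a tiny Airflow 2.x DAG sketch; the DAG id, schedule, and task bodies are placeholders, not the company's actual pipelines.

```python
# Hypothetical sketch: a tiny Airflow 2.x DAG with one extract and one load task.
# DAG id, schedule, and task bodies are placeholders, not the company's pipelines.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull raw sensor data")  # placeholder for the real extract step


def load():
    print("write curated table")  # placeholder for the real load step


with DAG(
    dag_id="sensor_etl_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older releases use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # run extract before load
```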
Full Stack/IOT Intern
AI-powered asset management and predictive maintenance platform for the energy industry, increasing efficiency and reducing costs.
Education Requirements:
Current BS or MS student or recent graduate. CS major is a plus
Experience Requirements:
React/Redux, HTML5, CSS3, JavaScript, Python, Django and REST API’s
Strong foundation in Computer Science, with deep knowledge of data structures, algorithms, and software design
Experience with GIT, CI/CD tools, Atlassian software, AWS CodeDeploy, Lambda, Serverless a plus
Experience with Elasticsearch/ELK stack a plus
Contribute with ideas to overall product strategy and roadmap
Responsibilities:
Participate in developing and deploying frontend/backend applications, creating visualization dashboards, and developing ways to integrate high-frequency data from devices into our platform
Data Science Intern
AI-powered asset management and predictive maintenance platform for the energy industry, increasing efficiency and reducing costs.
Education Requirements:
Currently enrolled in or graduated with an MS or PhD in Computer Science, Electrical Engineering, Statistics, or an equivalent field; specialization in machine learning preferred
Experience Requirements:
Strong understanding of machine learning algorithms & principles (regression analysis, time series, probabilistic models, supervised classification and unsupervised learning), and their application
Experience with Deep Learning algorithms such as Convolutional Neural Networks, Recurrent Neural Networks and LSTM
Familiarity with Deep Learning frameworks such as TensorFlow and PyTorch, and strong experience in at least one of those
Excellent programming skills in prototyping languages such as Python and R. Python is preferred
Responsibilities:
Participate in developing and deploying fault-tolerant ETL pipelines, perform data visualization, and enable the development of ML algorithms (a small model-training sketch follows)
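As a hedged, minimal illustration of the deep-learning frameworks this internship names (PyTorch here; TensorFlow would serve equally well), the sketch below defines a tiny LSTM regressor and runs one training step on random data; layer sizes and the synthetic batch are assumptions.

```python
# Hypothetical sketch: a tiny PyTorch LSTM regressor for a sensor time series,
# with one forward/backward pass on random data. Sizes are illustrative.
import torch
from torch import nn


class TinyLSTM(nn.Module):
    def __init__(self, n_features: int = 4, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(x)         # out: (batch, seq_len, hidden)
        return self.head(out[:, -1])  # predict from the last time step


model = TinyLSTM()
x = torch.randn(8, 50, 4)  # batch of 8 sequences, 50 steps, 4 features
y = torch.randn(8, 1)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()  # one illustrative backward pass
print(float(loss))
```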
Senior Product Marketing
AI-powered customer support agent that reduces support workload and enhances the customer experience.
Benefits:
Competitive pay
Insurance
Paid vacation
Work from home
Software Engineer, Autonomy Platform
Figure 02 is an AI-powered, autonomous humanoid robot designed for commercial tasks in various industries, tackling labor shortages and workplace safety.
Education Requirements:
Bachelor's or Master's degree in Computer Science, Robotics, Engineering, or a related field
Experience Requirements:
Strong proficiency in both C++ and Python
Minimum of 4 years of experience designing flexible, performant software and interfaces for resource-constrained systems such as robots or mobile devices
Experience with Linux and development tools such as debuggers and performance profilers
Responsibilities:
Design, implement and maintain our on-robot software framework for executing, monitoring and testing the autonomy system on our humanoid robot
Collaborate with interdisciplinary robotics, firmware development, and infrastructure teams to identify autonomy framework system requirements and take the lead on satisfying them
Design and implement internal tools that accelerate development and expand software and hardware testing capabilities
Continuously raise the quality of our product by identifying gaps and advocating for improvements across the stack
Provide technical guidance and support to other team members, fostering a culture of excellence and innovation
Robot Behavior Coordination Engineer
Figure 02 is an AI-powered, autonomous humanoid robot designed for commercial tasks in various industries, tackling labor shortages and workplace safety.
Experience Requirements:
Experience implementing, testing, and deploying behavior coordination solutions in C++ and/or Python on real robots
Capable of quickly writing massive amounts of high-quality, well-tested behavior coordination software
Possess both a theoretical understanding and have practical experience with behavior coordination algorithms
Have a deep knowledge of state of the art techniques, data structures, and software tools
Thrive in a fast-paced environment where solutions are often unclear and require exploration
Other Requirements:
Experience in behavior coordination
Responsibilities:
Implement a robot behavior architecture that provides various behavior authoring tools, such as state machines and behavior trees (a toy state-machine sketch follows this list)
Using this architecture, design, implement, test, and deploy robot behavior coordination algorithms for humanoid robots for a large variety of mission scenarios.
Develop and use modern software engineering techniques to implement high quality, well-tested software
Evaluate potential behavior coordination solutions and make design trade offs and decisions based on robot requirements
Collaborate with other Figure team members to develop and implement a full autonomy stack
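As a toy, hypothetical sketch of the state-machine style of behavior coordination named above (not Figure's architecture), here is a minimal Python finite-state machine that sequences a few illustrative robot behaviors.

```python
# Hypothetical sketch: a minimal finite-state machine for sequencing robot
# behaviors. States, events, and transitions are illustrative only.
class BehaviorStateMachine:
    TRANSITIONS = {
        ("IDLE", "task_assigned"): "NAVIGATE",
        ("NAVIGATE", "arrived"): "MANIPULATE",
        ("NAVIGATE", "fault"): "SAFE_STOP",
        ("MANIPULATE", "done"): "IDLE",
        ("MANIPULATE", "fault"): "SAFE_STOP",
    }

    def __init__(self) -> None:
        self.state = "IDLE"

    def handle(self, event: str) -> str:
        # Stay in the current state if the event is not handled there.
        self.state = self.TRANSITIONS.get((self.state, event), self.state)
        return self.state


fsm = BehaviorStateMachine()
for event in ["task_assigned", "arrived", "fault"]:
    print(event, "->", fsm.handle(event))  # prints the state reached after each event
```

A production behavior architecture would typically layer behavior trees, preemption, and richer fault handling on top of a core like this rather than a flat transition table.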