As a Principal Data Scientist / Developer, you’ll play a key role in building and owning the end-to-end data foundation at Jotelulu. Your mission will be to transform operational and product telemetry into trustworthy, timely, and business-ready data, enabling faster decision-making, rigorous experimentation, and sustainable growth across the company.
You’ll be part of a cross-functional environment, working hands-on while helping shape the next-generation data platform. This includes designing batch and streaming architectures, defining metrics and data contracts, and building models that support activation, expansion, and reliability. Reporting to the COO, you will also coach others and translate business needs into scalable data solutions.
Collaboration will be essential. You’ll work closely with Product, Engineering, GTM, Operations, and Partner Success teams to prioritize business questions, design experiments, and drive impact through data. You will also collaborate with Security teams to ensure data governance, compliance, and responsible data usage across the organization.
Requirements that matter to us
We are looking for a Principal Data Scientist / Developer with strong experience across data, engineering, and analytics, capable of owning the full data lifecycle and delivering measurable business impact.
Relevant experience and expected outcomes:
- 8+ years of experience across Data Science, Analytics, and Engineering, including leading initiatives and delivering data products.
- Designing and managing end-to-end data infrastructures, from ingestion to serving (batch and streaming).
- Building and optimizing ETL/ELT pipelines, orchestration processes, and workflows for structured and unstructured data.
- Developing and deploying analytical, ML, and LLM-based models.
- Defining and maintaining core metrics, semantic layers, and data models aligned with business objectives.
- Implementing data quality, observability, governance, and CI/CD practices for data systems.
- Collaborating with cross-functional teams to translate business needs into data-driven solutions.
Key skills and expected impact:
- Strong expertise in Python (preferred) and/or R, in SQL and NoSQL databases, and in data modeling.
- Ability to design scalable data architectures, including lakehouse patterns and feature stores.
- Experience with ML, LLMs, experimentation, and statistical analysis.
- Capability to turn insights into actionable outcomes, influencing product and business decisions.
- Strong communication and storytelling skills to translate data into business impact.
- Proactive mindset with ownership of the full data lifecycle, from ingestion to serving and observability.
- Ability to simplify complex systems and make data accessible and usable across teams.
- Commitment to data quality, integrity, and responsible use of AI.
Tools
- Programming and data stack: Python, Pandas, Polars, DuckDB, scikit-learn.
- Deep learning and AI: PyTorch, TensorFlow, HuggingFace.
- Data visualization: Power BI, Tableau, Looker, Dash/Plotly.
- APIs and services: Flask, FastAPI.
- Infrastructure and engineering: Linux, bash, Git, containerization.
- Data systems: SQL, NoSQL, ETL/ELT pipelines, orchestration tools.
- Collaboration tools: Slack, Microsoft O365, Jira, Confluence.
- Cloud platforms and data services.
- Security and governance practices aligned with ISO 27001, ENS, and HDS.
