Join our team at JediTeck and help us build the future of AI! We’re looking for passionate, innovative, and talented individuals who want to make a real impact through AI-driven solutions. If you’re excited about working in a dynamic environment where technology meets creativity, explore our open positions below.
Why Work at JediTeck?
• Innovative Environment: Be at the forefront of AI and machine learning technology.
• Growth Opportunities: Learn and grow with a cutting-edge team, solving real-world business problems.
• Collaborative Culture: Work alongside industry experts in an open, supportive environment.
• Flexible Working: Enjoy the flexibility to work remotely or in a hybrid setting.
• Competitive Benefits: We offer competitive salaries, health benefits, and opportunities for continuous learning.
1. Full Stack Developer (MERN & Python)
Location: Remote / Hybrid
Experience: 3+ years
Type: Full-Time
As a Full Stack Developer at JediTeck, you’ll build and maintain both the front-end and back-end components of our platform using the MERN stack and Python. You will collaborate with cross-functional teams to develop scalable, efficient, and high-performing applications, ensuring an intuitive user experience and robust back-end infrastructure.
Responsibilities:
• Design and develop both front-end and back-end applications using the MERN stack (MongoDB, Express, React, Node.js) and Python.
• Build and optimize RESTful APIs for performance, scalability, and security.
• Implement third-party service integrations and data pipelines.
• Work closely with product managers, UI/UX designers, and other engineers to deliver seamless user experiences.
• Write clean, efficient, and maintainable code that follows best practices for both the front and back end.
Skills & Qualifications:
• Strong proficiency with the MERN stack (MongoDB, Express, React, Node.js).
• Proficiency in Python, with experience building back-end services and APIs.
• Experience with databases (both SQL and NoSQL, especially MongoDB).
• Familiarity with cloud platforms (AWS, Google Cloud, or Azure) and containerization (Docker, Kubernetes).
• Strong problem-solving skills, attention to detail, and ability to work effectively in a team.
2. Data Scientist
Location: Remote / Hybrid
Experience: 2+ years
Type: Full-Time
As a Data Scientist at JediTeck, you’ll design and develop the machine learning models that power our AI-driven platform. You’ll collaborate with developers and engineers to analyze large datasets, build predictive models, and deliver actionable insights that solve real business problems.
Responsibilities:
• Analyze and process large datasets to identify patterns and trends.
• Design, train, and optimize machine learning models.
• Implement human-in-the-loop validation techniques for improved model accuracy.
• Work with the engineering team to integrate models into production environments.
• Collaborate with stakeholders to understand business objectives and deliver AI-driven solutions.
Skills & Qualifications:
• Strong programming skills in Python or R.
• Experience with machine learning frameworks (TensorFlow, PyTorch, or Scikit-learn).
• Proficiency in data visualization tools (Tableau, Power BI, or Matplotlib).
• Familiarity with SQL and data warehousing solutions.
• Knowledge of AI/ML algorithms, natural language processing, and deep learning techniques.
3. Data Engineer (Snowflake, Fivetran, Talend, Spark, Python, Kafka)
Location: Remote / Hybrid
Experience: 3+ years
Type: Full-Time
As a Data Engineer at JediTeck, you’ll design and implement scalable data pipelines that support our AI platform’s data ingestion, transformation, and real-time processing. You’ll work with modern data engineering tools such as Snowflake, Fivetran, Talend, Spark, Python, and Kafka to ensure efficient data management and analytics.
Responsibilities:
• Design, build, and maintain robust ETL/ELT pipelines using Snowflake, Fivetran, and Talend.
• Develop data models and optimize Snowflake for analytics and AI use cases.
• Implement real-time data streaming and processing with Kafka.
• Utilize Apache Spark to process large datasets and ensure high performance.
• Collaborate with data scientists and software engineers to support machine learning model integration and real-time data needs.
• Monitor, troubleshoot, and optimize data pipelines to ensure data quality and performance.
• Manage cloud-based data infrastructure (AWS, GCP, or Azure).
Skills & Qualifications:
• Expertise in Snowflake for data warehousing and processing.
• Experience with Fivetran and Talend for building automated ETL pipelines.
• Proficiency in Kafka for real-time data streaming and processing.
• Strong programming skills in Python for data transformation and automation.
• Familiarity with Apache Spark for large-scale data processing.
• Knowledge of cloud platforms (AWS, GCP, or Azure) and distributed computing frameworks.
• Strong understanding of data modeling, pipeline design, and data quality practices.