Location: Pune, MH
Experience: 3 – 5 years
Department: Software Engineering – Data & AI
Employment Type: Full-Time
Please send your resume to taniya@id4consultancy.com.
About the Role
RevDau is building next-generation AI solutions focused on Conversational Analytics, Enterprise AI Orchestration, and AI-powered Data Platforms.
We are seeking a strong Python Developer with hands-on experience in Generative AI, LLMs, RAG, and Elasticsearch to develop intelligent conversational systems and scalable backend services.
This role is ideal for engineers passionate about building AI-powered user experiences, optimizing search relevance, and deploying production-grade AI applications.
Key Responsibilities
1. Backend Engineering (Python)
Design, develop, optimize, and maintain Python-based backend services and microservices.
Implement APIs to serve conversational insights, embeddings, analytics, and search results.
Integrate Python services with Elasticsearch, vector databases, cloud services, and LLM APIs (a brief illustrative sketch follows this list).
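For context only, here is a minimal sketch of the kind of service endpoint this work involves, assuming the FastAPI, Pydantic, and elasticsearch Python libraries and a hypothetical "documents" index; the host, index, and field names are illustrative assumptions, not part of RevDau's actual stack.

```python
# Illustrative only: a minimal FastAPI search endpoint backed by Elasticsearch.
# Host, index name, and response shape are assumptions, not project specifics.
from elasticsearch import Elasticsearch
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
es = Elasticsearch("http://localhost:9200")  # assumed local cluster

class SearchRequest(BaseModel):
    query: str
    size: int = 5

@app.post("/search")
def search(req: SearchRequest) -> dict:
    # Plain BM25 match query against a hypothetical "documents" index.
    resp = es.search(
        index="documents",
        query={"match": {"text": req.query}},
        size=req.size,
    )
    return {
        "hits": [
            {"id": h["_id"], "score": h["_score"], "text": h["_source"].get("text")}
            for h in resp["hits"]["hits"]
        ]
    }
```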
2. Generative AI & Conversational Analytics
Build Conversational AI pipelines including:
LLM-based dialogue engines
RAG (Retrieval-Augmented Generation) pipelines (see the illustrative sketch after this list)
Context orchestration & conversation memory
Multi-turn conversational flows
Implement embeddings, semantic search, and retrieval optimization.
Fine-tune or prompt-tune LLMs for specific enterprise use cases.
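As a rough illustration of the RAG work described above, the sketch below retrieves context from Elasticsearch and grounds an LLM answer in it; the "knowledge_base" index, the "gpt-4o-mini" model, and the plain BM25 retrieval step are assumptions for illustration, not the project's actual design.

```python
# Illustrative only: retrieve context from Elasticsearch, then ground an LLM answer in it.
# Index name, field names, and model are assumptions for illustration.
from elasticsearch import Elasticsearch
from openai import OpenAI

es = Elasticsearch("http://localhost:9200")
llm = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer(question: str, k: int = 4) -> str:
    # 1. Retrieve candidate passages (plain BM25 here; vector or hybrid search also fits).
    hits = es.search(
        index="knowledge_base",
        query={"match": {"text": question}},
        size=k,
    )["hits"]["hits"]
    context = "\n\n".join(h["_source"]["text"] for h in hits)

    # 2. Ask the LLM to answer using only the retrieved context.
    completion = llm.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return completion.choices[0].message.content
```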
3. Elasticsearch Engineering
Build indexing pipelines for structured & unstructured data.
Implement search queries, aggregations, filters, synonyms, and analyzers.
Optimize search ranking, query performance, and index lifecycle management.
Integrate Elasticsearch with vector search / hybrid search techniques (see the illustrative sketch after this list).
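A minimal hybrid-search sketch along these lines, assuming Elasticsearch 8.x with a "documents" index that has a "text" field and a dense_vector field named "embedding", plus a SentenceTransformers encoder; all index, field, and model names are illustrative assumptions.

```python
# Illustrative only: combine BM25 and approximate kNN in a single Elasticsearch 8.x request.
# Assumes a "documents" index with a "text" field and a dense_vector field "embedding".
from elasticsearch import Elasticsearch
from sentence_transformers import SentenceTransformer

es = Elasticsearch("http://localhost:9200")
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dimensional embeddings

def hybrid_search(query: str, size: int = 10):
    vector = encoder.encode(query).tolist()
    return es.search(
        index="documents",
        size=size,
        # Lexical leg: BM25 match on the text field.
        query={"match": {"text": {"query": query, "boost": 0.5}}},
        # Vector leg: approximate kNN on the embedding field.
        knn={
            "field": "embedding",
            "query_vector": vector,
            "k": size,
            "num_candidates": 100,
            "boost": 0.5,
        },
    )
```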
4. Data Engineering (Supporting)
Work with ETL pipelines for ingesting logs, documents, metrics, and user interactions.
Build text-processing and NLP pipelines covering chunking, metadata extraction, and enrichment (see the illustrative sketch after this list).
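A simple chunking sketch of the sort this pipeline work might include; the chunk size, overlap, and metadata fields are arbitrary choices for illustration only.

```python
# Illustrative only: split a document into overlapping chunks with simple metadata.
# Chunk size, overlap, and metadata fields are arbitrary choices for illustration.
from typing import Iterator

def chunk_text(text: str, doc_id: str, chunk_size: int = 800, overlap: int = 100) -> Iterator[dict]:
    """Yield character-based chunks of `text`, each carrying provenance metadata."""
    step = chunk_size - overlap
    for i, start in enumerate(range(0, max(len(text), 1), step)):
        chunk = text[start:start + chunk_size]
        if not chunk.strip():
            continue
        yield {
            "doc_id": doc_id,
            "chunk_id": f"{doc_id}-{i}",
            "text": chunk,
            "char_start": start,
            "char_end": start + len(chunk),
        }
```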
5. Cloud & DevOps Collaboration
Deploy and scale Python and AI workloads on AWS / Azure / GCP.
Work with Docker, Kubernetes, and CI/CD pipelines for productionizing models and services.
6. Cross-functional Collaboration
Work closely with Product Managers, Data Scientists, LLM Engineers, and UI/UX teams.
Required Skills & Experience
Core Technical Skills
Strong Python programming skills, including hands-on FastAPI experience.
Production experience with Generative AI, LLMs, and Conversational AI.
Hands-on experience with:
OpenAI, Anthropic, Azure OpenAI, and Llama models, including fine-tuning
Vector embeddings (SentenceTransformers, OpenAI embeddings, HuggingFace models)
RAG architecture and semantic search pipelines
Advanced Elasticsearch expertise:
Designing indexes, custom analyzers, synonyms
Search templates, aggregations, query optimization
ES cluster performance, scaling, shard/query tuning
Experience with ES vector search / ELSER / hybrid search
Desirable Skills
Experience with LangChain, LlamaIndex, Haystack, or custom retrieval pipelines.
Familiarity with Kafka, Spark, Airflow, or distributed data pipelines.
Experience with front-end integration for conversational UIs.
Knowledge of MLOps, model serving, or vector databases (Pinecone, Qdrant, Weaviate).
Soft Skills
Problem-solving and analytical mindset.
Strong communication and ability to explain technical concepts.
Ownership-driven and self-motivated.
Ability to work in agile, fast-paced product teams.
Education
B.Tech/M.Tech in Computer Science, Engineering, or equivalent field.
Certifications in AI/ML, cloud, NLP, or Elasticsearch are a plus.
What We Offer
Opportunity to build cutting-edge AI solutions used by global enterprises.
Work with the latest LLMs, GenAI frameworks, vector search, and conversational AI stacks.
Fast-growth environment with autonomy and ownership.
Flexible/Hybrid work model.