Translation & Localization Platform — Integrations Team
Led third-party integration development and support on a cloud-based translation platform, built an MCP server, integrated AI features, and modernized legacy services from Java 8 to Java 21-24
Overview
A cloud-based translation and localization platform that helps organizations automate, manage, and optimize the translation of their digital content across languages and channels. As part of the Integrations team, I was responsible for building new integrations from scratch, maintaining and fixing existing ones, supporting clients with self-hosted connector issues, and contributing to platform modernization and AI feature development.
Problem
The platform's integration layer was a critical client-facing component with growing pain points. Existing integrations had accumulated bugs that caused client churn, new integration partners needed to be onboarded quickly, and clients running self-hosted connectors frequently needed hands-on support. Meanwhile, the core services were running on Java 8 with outdated dependencies carrying known CVEs, and the team needed to incorporate AI capabilities to stay competitive.
Constraints
- Zero-downtime migration required as the platform served global clients across all time zones
- Backward compatibility with existing REST APIs consumed by over 50 integration partners
- Client-facing integration fixes had to be prioritized alongside new feature development
- Self-hosted connector support required quick turnaround via Slack to maintain client satisfaction
Approach
I worked across multiple parallel streams:
- Fixing existing integration bugs to reduce client churn
- Building new integrations from scratch for new content management platforms
- Providing ongoing Slack-based support for clients running self-hosted connectors
- Contributing to the Java 8 to 21-24 migration effort
- Building an MCP server with multiple tools for internal use
- Integrating AI capabilities using Langchain4j and n8n
Key Decisions
Built an MCP (Model Context Protocol) server with multiple tools for the company
An MCP server allowed various internal tools and AI agents to interact with the platform's APIs in a structured way, enabling automated workflows and reducing manual intervention for routine integration tasks.
Alternatives considered:
- Custom REST API wrappers for each tool individually
- OpenAPI-based tool generation without MCP abstraction
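The core idea behind the MCP decision above is that tools are registered under stable names and invoked with structured arguments rather than bespoke REST wrappers. The sketch below illustrates that dispatch pattern in plain Java; the registry class, tool name, and argument shape are all hypothetical simplifications, not the actual MCP SDK or the platform's real tools.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Minimal illustration of name-based tool dispatch: each tool is a
// handler registered under a stable name and invoked with structured
// arguments. Class and tool names here are invented for the example.
public class ToolRegistry {
    private final Map<String, Function<Map<String, String>, String>> tools = new HashMap<>();

    public void register(String name, Function<Map<String, String>, String> handler) {
        tools.put(name, handler);
    }

    public String invoke(String name, Map<String, String> args) {
        Function<Map<String, String>, String> handler = tools.get(name);
        if (handler == null) {
            throw new IllegalArgumentException("Unknown tool: " + name);
        }
        return handler.apply(args);
    }

    public static void main(String[] args) {
        ToolRegistry registry = new ToolRegistry();
        // Hypothetical tool wrapping a platform API call.
        registry.register("get_job_status",
                a -> "Job " + a.get("jobId") + ": COMPLETED");
        System.out.println(registry.invoke("get_job_status", Map.of("jobId", "42")));
    }
}
```

An agent or workflow only needs the tool name and an argument map, which is what keeps the surface area small compared to per-tool REST wrappers.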
Implemented multi-level caching with local Caffeine + distributed Redis layers
Translation memory lookups were a primary bottleneck in integration response times. A two-tier cache reduced redundant database queries while keeping cache invalidation manageable through Kafka-based event propagation.
Alternatives considered:
- Single Redis cache layer with aggressive TTLs
- Database-level query caching with materialized views
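The two-tier lookup path described above can be sketched as follows. In the real system the local tier is Caffeine and the shared tier is Redis; here plain concurrent maps stand in for both so the flow stays self-contained, and the `invalidate` method represents what the Kafka consumer calls on each instance when a translation memory entry changes.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch of a two-tier cache lookup: local tier first, then the shared
// tier, then the database loader. Maps stand in for Caffeine and Redis.
public class TwoTierCache {
    private final Map<String, String> local = new ConcurrentHashMap<>();   // Caffeine stand-in
    private final Map<String, String> shared = new ConcurrentHashMap<>();  // Redis stand-in
    private final Function<String, String> loader;                         // database lookup

    public TwoTierCache(Function<String, String> loader) {
        this.loader = loader;
    }

    public String get(String key) {
        String value = local.get(key);            // 1. local tier (fastest)
        if (value != null) {
            return value;
        }
        value = shared.get(key);                  // 2. shared tier
        if (value == null) {
            value = loader.apply(key);            // 3. database
            shared.put(key, value);               // populate shared tier
        }
        local.put(key, value);                    // populate local tier
        return value;
    }

    // Called by the Kafka invalidation consumer on every instance.
    public void invalidate(String key) {
        local.remove(key);
        shared.remove(key);
    }
}
```

The design point is that both tiers are dropped together on an invalidation event, which is simpler to reason about than trying to keep a stale local entry coherent with a fresher shared one.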
Migrated CI/CD from Jenkins to GitHub Actions with Terraform-managed infrastructure
GitHub Actions provided tighter integration with our repository workflow, and Terraform enabled reproducible infrastructure provisioning across staging and production environments on AWS ECS.
Alternatives considered:
- GitLab CI/CD with dedicated runners
- AWS CodePipeline with CloudFormation
Tech Stack
- Java 8-24
- Kotlin
- Spring Boot 2-3
- AWS (ECS, RDS, SQS, Lambda)
- Apache Kafka
- GitHub Actions
- Terraform
- Langchain4j
- n8n
Result & Impact
- Client churn reduction: measurable decrease through integration bug fixes and improved support response times
- Vulnerabilities eliminated: 23 known CVEs across 6 services remediated during the Java migration
- New integrations: multiple integrations built from scratch and delivered to production
- Cache hit rate: 92% on translation memory lookups after caching optimization
The integration fixes and hands-on client support directly improved client retention. Building integrations from scratch expanded the platform's ecosystem reach. The MCP server became a key enabler for internal AI-powered workflows. The Java modernization brought the platform onto a secure, maintainable foundation, and adopting Langchain4j opened up AI-driven quality assessment and content processing capabilities.
Learnings
- Client-facing integration work requires balancing speed of response with thoroughness — quick Slack-based support builds trust, but root-cause fixes prevent recurring issues
- Building integrations from scratch gives deep insight into third-party API design patterns, both good and bad
- MCP servers provide a powerful abstraction for making platform capabilities accessible to AI agents and automated workflows
- Multi-level caching requires careful invalidation design upfront. Retrofitting cache coherence across Kafka consumers was the hardest part of the optimization work
Technical Deep Dive
The integration work was the core of my role on this project. I was fully responsible for several integrations built from scratch — each involving deep dives into third-party APIs, designing robust sync mechanisms, handling webhook delivery and retry logic, and ensuring data consistency between the translation platform and the client’s content management system. In parallel, I worked through a backlog of bugs in existing integrations that were directly contributing to client churn. These ranged from subtle data mapping issues to race conditions in concurrent webhook processing. Fixing them required careful investigation and often coordination with the third-party API vendors themselves.
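The webhook delivery and retry logic mentioned above typically boils down to bounded retries with exponential backoff. The sketch below shows that shape in isolation; the attempt count, backoff values, and the boolean-returning sender are illustrative stand-ins, not the production implementation.

```java
import java.util.function.Supplier;

// Illustrative retry wrapper for webhook delivery: a bounded number of
// attempts with exponentially growing waits between failures.
public class WebhookRetrier {
    public static boolean deliverWithRetry(Supplier<Boolean> send,
                                           int maxAttempts,
                                           long initialBackoffMs) throws InterruptedException {
        long backoff = initialBackoffMs;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            if (send.get()) {
                return true;                  // delivered successfully
            }
            if (attempt < maxAttempts) {
                Thread.sleep(backoff);        // wait before the next attempt
                backoff *= 2;                 // exponential backoff
            }
        }
        return false;                         // all attempts exhausted
    }
}
```

In practice a jitter term and a dead-letter path for exhausted deliveries usually accompany this loop, and the race conditions mentioned above tend to appear when concurrent retries of the same event are not deduplicated.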
A significant part of my daily work involved supporting clients who ran self-hosted connectors. These clients operated the platform’s connector software in their own infrastructure, which introduced environment-specific issues that couldn’t be reproduced in our staging environment. I provided hands-on troubleshooting via Slack, helping clients diagnose configuration problems, network issues, and version incompatibilities. This direct client interaction gave me valuable insight into how our integrations performed in real-world conditions and informed priorities for future improvements.
Beyond integrations, I contributed to the platform-wide Java 8 to 21-24 migration, focusing on the services I owned. This involved dependency audits, vulnerability remediation, and adopting modern Java features like records, pattern matching, and sealed classes where they improved code clarity. I also built an MCP server with multiple tools that became a foundation for AI-powered internal workflows, and worked on integrating Langchain4j for AI-driven translation quality assessment and content processing capabilities that expanded the platform’s feature set.
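The modern Java features named above combine naturally: a sealed hierarchy of domain events modeled as records, consumed with an exhaustive pattern-matching switch. The event types below are invented for illustration, not the platform's actual model.

```java
// Java 21 features adopted during the migration: sealed interface,
// records, and record patterns in an exhaustive switch.
public class EventDispatch {
    sealed interface SyncEvent permits ContentUpdated, JobCompleted, SyncFailed {}
    record ContentUpdated(String contentId) implements SyncEvent {}
    record JobCompleted(String jobId, int segmentCount) implements SyncEvent {}
    record SyncFailed(String reason) implements SyncEvent {}

    static String describe(SyncEvent event) {
        // No default branch needed: the sealed hierarchy makes the
        // switch exhaustive, so a new event type is a compile error.
        return switch (event) {
            case ContentUpdated(String id) -> "content " + id + " changed";
            case JobCompleted(String id, int n) -> "job " + id + " done, " + n + " segments";
            case SyncFailed(String reason) -> "sync failed: " + reason;
        };
    }
}
```

This is the kind of clarity win the migration targeted: the compiler now enforces that every event type is handled, which previously required defensive `else` branches and runtime checks.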