Proposal: AppServer Migration to Microservices

This proposal outlines the architectural approach for migrating the AppServer from a monolithic system to a microservices-based architecture.

The diagram below provides an overview of the proposed microservices architecture. If the image is unclear, please refer to the accompanying PDF for a high-resolution version.

[Diagram: proposed AppServer microservices architecture]

Let’s look at the architectural diagram and its components in detail:

System Components:

  • API Gateway – a single public entry point.

  • Domain Services – small, focused services such as Project, User, Task, Notification, FCM, GitHub, Protocol, Questionnaire Schedule, and state-event services.

  • Per-Service Databases – each service has its own logical/private schema.

  • Per-Service Load Balancers – ensuring scalability and isolation.

  • Optional Infrastructure Components

    • Load balancers per service (gateway → service & service → service communication).

    • Optional Service Registry (self-registration or delegated) with heartbeat checks.

    • Optional Service Mesh / Sidecar proxies for secure communication, monitoring, retries, and load balancing.

    • Centralized log collection with Elasticsearch.

    • Optional message broker for logs and for distributed transactions (e.g., Kafka).

  • Inter-Service Communication – primarily REST, with contracts/clients provided as separate common Gradle libraries.

    • Gradle-based libraries and modules, including:

      • microservice-client/contract – client contracts per service.

      • appserver-commons – entities and shared business logic.

      • Other shared libraries as needed.

  • Deployment Model – per-module/service deployments with centralized logging.

  • Distributed Transactions – supported through patterns such as SAGA, 2PC, or CQRS/Event Sourcing.

Inter-Service Communication & Contracts:

The primary communication style between services is REST. Each service defines its own client or contract in a dedicated library. Callers then use this library when making requests, keeping the API contract centralized while keeping the callers lightweight and consistent.
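As an illustration, the contract library for a hypothetical Task service could expose a shared DTO plus a thin JAX-RS client. The module, class, and endpoint names below are placeholders rather than a final API, and newer stacks may use the jakarta.ws.rs packages instead of javax.ws.rs:

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;

// Illustrative contract published from a hypothetical task-service-client Gradle module.
public class TaskServiceClient {

    // DTO shared between the Task service and its callers.
    public static class TaskDto {
        public String id;
        public String title;
        public String status;
    }

    private final Client client = ClientBuilder.newClient();
    private final String baseUrl;

    // baseUrl would normally be resolved via the service registry or load balancer.
    public TaskServiceClient(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    public TaskDto getTask(String taskId) {
        return client.target(baseUrl)
                .path("tasks")
                .path(taskId)
                .request(MediaType.APPLICATION_JSON)
                .get(TaskDto.class);
    }
}

Publishing this as its own Gradle module keeps every caller on the same contract and avoids hand-rolled HTTP calls being duplicated across services.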

  • Service Discovery:

For service discovery, services are expected to register and deregister with the service registry, either through self-registration or a delegated process. The registry must be highly available, and liveness is monitored through regular heartbeats.
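A minimal sketch of the self-registration variant is shown below, assuming a hypothetical RegistryClient abstraction; a real deployment would use the SDK of whichever registry is chosen (for example Consul or Eureka):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch only: RegistryClient is a placeholder for the chosen registry's SDK.
public class SelfRegistration {

    public interface RegistryClient {
        void register(String serviceName, String host, int port);
        void heartbeat(String serviceName, String host, int port);
        void deregister(String serviceName, String host, int port);
    }

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public void start(RegistryClient registry, String serviceName, String host, int port) {
        // Register once on startup.
        registry.register(serviceName, host, port);

        // Send periodic heartbeats so the registry can track liveness.
        scheduler.scheduleAtFixedRate(
                () -> registry.heartbeat(serviceName, host, port),
                10, 10, TimeUnit.SECONDS);

        // Deregister cleanly on shutdown so stale instances are not routed to.
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            registry.deregister(serviceName, host, port);
            scheduler.shutdownNow();
        }));
    }
}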

As an alternative, a service mesh with sidecar proxies can be introduced to handle features such as secure communication, monitoring, retries, and load balancing. However, this approach comes with additional operational overhead and should be adopted only when the benefits outweigh the complexity.

Deployment Strategy & CI/CD:

The deployment approach follows a per-module model, where only the service or module that has been updated on the master branch is deployed. This ensures faster, more targeted releases and avoids unnecessary re-deployments.

Database Ownership and Strategy:

The preferred approach is to give each service its own dedicated database. This enforces strict ownership, keeps services loosely coupled, and allows each team to evolve its schema independently without impacting others. In cases where separate database instances are not feasible, services may share the same server, but each should still use a private schema with credentials granting access only to its own data.

This model provides strong flexibility but comes with some trade-offs. While schema evolution is simpler on a per-service basis, cross-service joins and distributed transactions become harder to manage. We will cover strategies for handling distributed transactions in detail in the following sections.

To ensure consistency, each service should manage its schema changes using dedicated migration tooling such as Liquibase.
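As a rough sketch of what this could look like with Liquibase's classic Java facade (the JDBC URL, credentials, and changelog path are placeholders, and recent Liquibase versions also offer a newer command-style API):

import java.sql.Connection;
import java.sql.DriverManager;

import liquibase.Contexts;
import liquibase.Liquibase;
import liquibase.database.Database;
import liquibase.database.DatabaseFactory;
import liquibase.database.jvm.JdbcConnection;
import liquibase.resource.ClassLoaderResourceAccessor;

// Each service runs its own migrations against its own private schema on startup or deploy.
public class TaskServiceMigrator {

    public static void main(String[] args) throws Exception {
        try (Connection connection = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/task_service", "task_service", "changeit")) {
            Database database = DatabaseFactory.getInstance()
                    .findCorrectDatabaseImplementation(new JdbcConnection(connection));
            Liquibase liquibase = new Liquibase(
                    "db/changelog/task-service-changelog.xml",
                    new ClassLoaderResourceAccessor(),
                    database);
            liquibase.update(new Contexts());
        }
    }
}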

API Gateway, load balancers & distributed gateways:

The architecture describes the API Gateway as the single external entry point. It is responsible for routing requests and, if necessary, fanning out calls to multiple services, although this pattern is rarely required since service-to-service communication handles such cases. Authentication is handled through JAX-RS filters applied at the gateway.
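For illustration, a gateway-level authentication filter could look roughly like the sketch below; the TokenVerifier is a placeholder for whatever token validation we actually adopt (for example JWT signature and expiry checks):

import java.io.IOException;

import javax.annotation.Priority;
import javax.ws.rs.Priorities;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.Provider;

// Runs at the gateway before routing; rejects requests that carry no valid bearer token.
@Provider
@Priority(Priorities.AUTHENTICATION)
public class GatewayAuthFilter implements ContainerRequestFilter {

    private final TokenVerifier verifier = new TokenVerifier();

    @Override
    public void filter(ContainerRequestContext requestContext) throws IOException {
        String header = requestContext.getHeaderString("Authorization");
        if (header == null
                || !header.startsWith("Bearer ")
                || !verifier.isValid(header.substring("Bearer ".length()))) {
            requestContext.abortWith(
                    Response.status(Response.Status.UNAUTHORIZED).build());
        }
    }

    // Placeholder: real logic would validate signature, issuer, and expiry.
    static class TokenVerifier {
        boolean isValid(String token) {
            return !token.isEmpty();
        }
    }
}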

Each service also has its own dedicated load balancer, ensuring traffic can be distributed evenly and services can scale independently.

The recommended approach is to keep the gateway focused on concerns such as authentication, authorization, and rate-limiting, while avoiding any business logic. Authentication and authorization should be enforced at the gateway. Behind the gateway, all services should remain scalable, running behind a load balancer.

Libraries:

Client contracts should be packaged as separate modules (e.g., microservice-client/contract).

Shared logic, common entities, and DTOs can live in appserver-commons, but it should stay lean to avoid unnecessary coupling.

Distributed Transactions, Consistency & Patterns:

As this architecture uses a database per service, some business transactions will naturally span multiple services. To handle these cases, we need mechanisms that can manage transactions across services while balancing consistency, scalability, and complexity. The main options are:

  • Two-Phase Commit (2PC): 2PC coordinates a distributed transaction across multiple services or databases to guarantee strong consistency. A coordinator first prepares all participants, then commits if all agree, or rolls back if any fail. It ensures strict ACID guarantees but is blocking, introduces a single point of failure, and adds significant performance overhead. It’s rarely recommended in microservices unless absolute consistency is needed.

  • SAGA Pattern: SAGA breaks a distributed transaction into a sequence of local transactions, each with a compensating action to undo changes if something fails. This makes it non-blocking and more scalable than 2PC. Two main styles exist: choreography (services emit and react to events) for simpler flows, and orchestration (a central coordinator directs the flow) for more complex ones. The trade-off is added complexity in designing and managing compensations (a minimal orchestration sketch follows this list).

  • CQRS (Command Query Responsibility Segregation): CQRS separates the write and read sides of a system. The write model handles commands and enforces business rules, while the read model is updated asynchronously and optimized for queries. This allows heavy read scaling but introduces replication lag, which must be handled carefully.

  • Event Sourcing: Event Sourcing records every change in the system as an immutable event rather than just storing the latest state. Current state is derived by replaying these events. This provides full auditability and supports rebuilding or debugging past states. The trade-off is added complexity in managing large event streams and handling immutability.
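To make the SAGA option more concrete, below is a minimal orchestration-style sketch: each step pairs a local action with a compensating action, and if any step fails, the already-completed steps are compensated in reverse order. The saga name and steps are illustrative only:

import java.util.ArrayDeque;
import java.util.Deque;

// Minimal orchestrator: executes steps in order and compensates completed ones on failure.
public class CreateProjectSaga {

    public interface SagaStep {
        void execute();
        void compensate();
    }

    private final Deque<SagaStep> completed = new ArrayDeque<>();

    public boolean run(SagaStep... steps) {
        for (SagaStep step : steps) {
            try {
                step.execute();
                completed.push(step);
            } catch (RuntimeException failure) {
                // Undo already-completed steps in reverse order.
                while (!completed.isEmpty()) {
                    completed.pop().compensate();
                }
                return false;
            }
        }
        return true;
    }
}

In practice each step would wrap a REST call to another service (for example, create a project in the Project service, then seed default tasks in the Task service), and each compensation would call the corresponding undo or delete endpoint.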

In summary, there are multiple patterns available for handling distributed transactions, each with its own strengths and trade-offs. The choice of approach depends on the consistency requirements of each workflow and how much complexity we can manage.

Since this is a proposal, the patterns outlined here are recommendations rather than final decisions; the actual implementation may adapt or combine approaches as the system evolves and we learn more about real-world needs.