Docker containers: SOA or monolith?

I currently have a Java web application supported by several microservices, where each microservice interacts with one or more supporting resources (DB, third-party REST services, CRM, legacy systems, JMS, etc.). Each of these components lives on one or more virtual machines. The architecture looks like this:

  • myapp.war lives on both myapp01.example.com and myapp02.example.com

    • It connects to dataservice.war, living on dataservice01.example.com and dataservice02.example.com, which connects to mysql01.example.com

    • myapp.war also connects to crmservice.war, living on crmservice01.example.com, which connects to http://some-3rd-part-crm.example.com

Now, say I wanted to "Dockerize" my entire application architecture. Would I write one Docker image for each component type (myapp, dataservice, mysql, crmservice, etc.), or would I write one "monolithic" container that contains all the applications, services, databases, message brokers (the JMS), etc.?

I'm sure I could do it either way, but the root of my question is this: are Docker containers meant to host a single application, or are they meant to represent an entire environment made up of multiple interconnected applications/services?





1 answer


The Docker philosophy definitely dictates that you create a separate Dockerfile for each app, service, or supporting resource you use, and then link the resulting containers together.
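As a sketch of that approach, each component gets its own minimal image. For example, the myapp.war component might be packaged like this (the Tomcat base image and file paths are assumptions, not part of the original setup):

```dockerfile
# Hypothetical Dockerfile for the myapp component only.
# dataservice, crmservice, and mysql would each get their own image.
FROM tomcat:8-jre8
COPY myapp.war /usr/local/tomcat/webapps/myapp.war
EXPOSE 8080
```

The dataservice and crmservice images would look nearly identical, while mysql can typically use the official image unmodified.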

You can use Docker Compose to run and link multiple Docker containers; the Compose documentation includes quickstart examples for Django and Rails.
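For the architecture described in the question, a Compose file might look roughly like the following. This is a hypothetical sketch: the service names, build directories, and the `links:` wiring mirror the components above but are assumptions, not a canonical configuration.

```yaml
# Hypothetical docker-compose.yml for the question's architecture.
version: "2"
services:
  myapp:
    build: ./myapp
    ports:
      - "8080:8080"
    links:
      - dataservice
      - crmservice
  dataservice:
    build: ./dataservice
    links:
      - mysql
  crmservice:
    build: ./crmservice
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
```

With this in place, `docker-compose up` starts the whole environment, and each container can reach its dependencies by service name (e.g. myapp connects to `dataservice:8080` instead of dataservice01.example.com).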



In addition, tools like Kubernetes or ECS let you manage the complete lifecycle and infrastructure of your entire environment, including autoscaling, load balancing, and more.
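In Kubernetes, for instance, the two myapp VMs (myapp01/myapp02) could be replaced by a Deployment with two replicas. This is a minimal sketch; the image name and registry are hypothetical:

```yaml
# Hypothetical Kubernetes Deployment for the myapp image; the two
# replicas play the role of myapp01 and myapp02 in the original setup.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myregistry/myapp:latest
          ports:
            - containerPort: 8080
```

A Service in front of the Deployment then gives you the load balancing that the two hostnames were providing by hand.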


