versioning – The meaning of “Fix Version” field in Jira, while working in agile, microservices and CI/CD processes

When working on a monolith application in a “waterfall” process, a “Fix Version” field makes a lot of sense.

Say you have a planned “November 2020” release: you can add/plan dozens of relevant bugs and features to that release. Other bugs and features may be planned for the next release, “December 2020”, and so on. So the “Fix Version” field can have the same value across multiple issues/tickets.

Moving to a microservices architecture changes the above process significantly: each microservice has its own release cycle, completely independent of the other microservices. If a development team is responsible for 10 different microservices, theoretically each one of them can, and should, be released without any coupling to the others.

When we add on top of that a CI/CD process, things get even more volatile:

When each and every commit to master results in a run of the full automated test suite and a potential (or de facto) version that can be deployed to staging/production, then every commit has its own “Fix Version”.

Taking this to the Jira world (or any other issue-tracking system), it means that each and every ticket/issue may have its own “Fix Version”: no longer a common value shared across many tickets, but a disposable “Fix Version” value that is used once.

Moreover, when an issue is created, you have no way to know in which build it will end up, as other tasks may finish before or after it.

In Jira, creating a “Fix Version” is a manual step, which makes updating a ticket’s “Fix Version” tedious and error-prone work.
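For what it’s worth, the manual step can in principle be scripted: Jira’s REST API lets a CI job create a throwaway version and stamp the shipped issues with it. A rough sketch of the requests such a job could build (the project key, issue keys, and naming scheme here are made up for illustration):

```javascript
// Build the two Jira REST API (v2) requests a CI step would send after a
// successful build: create a per-build "Fix Version", then add it to each
// shipped issue. All keys and names below are hypothetical.

function createVersionRequest(projectKey, buildNumber) {
    return {
        method: 'POST',
        path: '/rest/api/2/version',
        body: { name: `build-${buildNumber}`, project: projectKey, released: true },
    };
}

function setFixVersionRequest(issueKey, versionName) {
    return {
        method: 'PUT',
        path: `/rest/api/2/issue/${issueKey}`,
        body: { update: { fixVersions: [{ add: { name: versionName } }] } },
    };
}

// The issue keys would come from the commit messages included in the build:
const requests = ['PAY-101', 'PAY-104'].map((key) =>
    setFixVersionRequest(key, 'build-2043'));
```

Whether such a disposable per-build version carries any meaning afterwards is exactly what I am unsure about.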

So my questions are:

  • Are the above assumptions correct?
  • Is there a meaning to a “Fix Version” field in an Agile + Microservices + CI/CD environment?
  • How do you handle the “Fix version” challenge?
  • How do you do it in Jira?

microservices – Integration of socket service in reactor pattern: interfacing dispatcher with socket

I need some insight on how to better organize my project, which implements the Reactor pattern (Flux, specifically).

My struggle lies in accommodating a Socket Service layer, which is needed purely to interface the Socket (inbound data) and the Dispatcher (outbound data, internal operations).

In particular, I would like to find a solution (or a pattern extension) for a wiser organization (from a project-structure point of view) of the socketHandlers and dispatcherHandlers (see the last picture).

Pattern implementation

Self-explanatory code

class SocketService {
    constructor() {
        this._socketListeners = [];
        this._dispatcherListeners = [];
    }

    init(io) { // io is the server socket
        io.on('connection', (socket) => this.onConnectionReceived(socket));
        this._dispatcherListeners.forEach((listener) => listener(io));
    }

    onConnectionReceived(socket) { // socket represents a client connection
        this._socketListeners.forEach((listener) => listener(socket));
    }
}

// `dispatcher` is the app's Flux dispatcher (wiring not shown here)
const exampleDispatcherListener = (io) => {
    dispatcher.on(DispatcherEvents.USER_SAYS_HELLO, ({ whoSays }) => {
        io.emit(SocketEvents.HELLO_SAID, whoSays);
    });
};

const exampleSocketListener = (socket) => {
    socket.on(SocketEvents.SAY_HELLO, async ({ whoSays }) => {
        dispatcher.dispatch({
            actionType: DispatcherEvents.USER_SAYS_HELLO,
            data: { whoSays }
        }); // This will trigger exampleDispatcherListener
    });
};

Project structure, folders of interest

socket.js // Binds socket to the server - initializes it.
  socket.js // Here's our SocketService, 
socket-dispatcher-handlers // Unsure if this is where dispatcherHandlers should be
socket-handlers // Unsure if this is where socketHandlers should be

design – What is the most common stateless way for authentication in microservices?

I’m trying to get into microservices by building a project, and so far I’ve stumbled on authentication mechanisms. In a monolithic architecture, the client (web app) sends a request with the user credentials, receives a JWT, and sends it with every subsequent request. However, when it comes to microservices, I found out that there are two different ways to achieve the same stateless authentication mechanism: either delegate the authentication and authorization responsibilities to the gateway service, or use OAuth2 (or OpenID Connect?). Most online resources recommend OAuth2, but isn’t it meant for service-to-service authorization? Should I use the authorization code flow, or the password grant?

The API is only going to be accessed by a single client (web app), so no other service is going to ask the user for their data consent or something.

java – Single DB – Multitenancy with microservices

We are migrating from a monolith to microservices.

Note: We store the tenant details in a master tenant DB, which is separate from the application database.


  1. The app serves multiple tenants and has around 10 sub-modules which are tightly coupled.

  2. We are splitting the 10 sub-modules into 10 different services, and planning to use a single database for all of them.

  3. There are around 20 tables in a single database without references.

Plans to:

  1. Keep the connection establishment, model, DTO, and response classes (all classes common to the modules) in a base module.
  2. Add this base module as a dependency for all of the other 10 sub-modules.
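To make the plan concrete, here is roughly what I imagine the shared base module boiling down to (a Node-style sketch for brevity — our real code is Java, and all names are hypothetical): one module that owns connection setup, imported by every sub-module, so each service process ends up holding its own pool even though all pools point at the same database.

```javascript
// Hypothetical shared "base" module owning connection setup.
let pool = null;

function getPool(createPool = stubCreatePool) {
    if (!pool) {
        // connection details would really come from configuration /
        // the master tenant DB lookup
        pool = createPool({ host: 'db.internal', database: 'app', max: 10 });
    }
    return pool; // one pool per service process
}

function stubCreatePool(config) {
    // stand-in for a real driver's pool factory
    return { config, query: (sql) => `ran on ${config.database}: ${sql}` };
}
```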

I have the following questions, for which I couldn’t find any proper answers while searching the internet.

1) Is the above `Plan to` a valid approach?
2) Will each sub-module try to make its own connection?
3) How do we address the issue of the base module becoming huge and being duplicated in all sub-modules?
4) Will this cause any deadlocks, since each service will connect to the same DB at any given time?

Thanks in advance.

microservices – Refactoring and moving monolithic ASP.NET Core app to Kubernetes

(This is my very first post here, so I’m sure I violate a few rules of this community, mainly by asking too many questions in a single post. Sorry for that.)

I have a monolithic ASP.NET Core MVC application which I develop as a hobby project for private use. It can download videos and music from several video sharing websites by using youtube-dl. The application simply shell-executes youtube-dl with the correct command-line arguments, captures stdout and stderr, does some parsing on them, and reports any status changes in real time to the frontend using SignalR. Also, every day when it is idle, it shell-executes youtube-dl to update itself. I made some changes in the code to limit the maximum number of parallel downloads to a value specified in appsettings.json.
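To illustrate the parsing part: youtube-dl writes progress to stdout in lines like `[download]  42.1% of 10.00MiB at 1.00MiB/s ETA 00:05`, and the app extracts the numbers from those before pushing updates over SignalR. A minimal sketch of that extraction (in JavaScript for brevity — the real app is C#; youtube-dl’s output format is not a stable API, so non-matching lines are treated as noise):

```javascript
// Regex based on observed youtube-dl output; speed and ETA are optional.
const PROGRESS = /^\[download\]\s+(\d+(?:\.\d+)?)%\s+of\s+(\S+)(?:\s+at\s+(\S+))?(?:\s+ETA\s+(\S+))?/;

function parseProgressLine(line) {
    const m = PROGRESS.exec(line);
    if (!m) return null; // not a progress line
    return {
        percent: parseFloat(m[1]),
        size: m[2],
        speed: m[3] ?? null,
        eta: m[4] ?? null,
    };
}
```

Each non-null result is essentially the payload the frontend receives in real time.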

It does its job fine on a single server. But at work I became an associate architect and learned about Kubernetes, and I have never built my own software for Kubernetes from scratch. So, I want to make the following changes:

  1. Refactor the code from monolithic to microservice architecture…
  2. …by considering Kubernetes as the target platform.

Question 0: Does it make sense to make these changes and migrate this project to Kubernetes, considering how youtube-dl is used?

On the backend side, I have two tasks, and both of them require youtube-dl.

  1. By using the URI given by the user, I have to collect information about the media (title, description, thumbnail image). This task is only network-heavy until youtube-dl gathers the media information and it is done within a few seconds.
  2. Also by the given URI, I have to download the media and present a file to the user, and I need to keep the real-time progress reporting. This is both network- and CPU-heavy, since youtube-dl not only downloads the media, but also performs video merging and video-to-audio conversion using FFmpeg. So it takes some time and CPU resources.

Question 1: Should I separate these responsibilities into two different backend microservices? If I had them separated, I would be able to keep only a few instances for the first task, and several instances for the second (split across different nodes). On the other hand, I need to use youtube-dl for both tasks, so it would also make sense to put them in one service.

My next consideration is the real-time progress feedback to the frontend. By separating this project, I have to make a chain: DownloaderMS -> FrontendMS -> HTML frontend. In the old world I used SignalR. According to the Microsoft docs, using SignalR between DownloaderMS and FrontendMS does not violate the microservice architecture as long as I use a Redis backplane.

Question 3: By considering Kubernetes and load balancing, is it a good idea to keep using SignalR both on frontend and backend side? Or should I use something else?

The software needs some storage space to store the downloaded media. I think I should have a persistent volume with read-write access for the backend services with 1 hour data retention.

Question 4: Should the backend microservices encapsulate the persistent volume and provide the downloaded media to the FrontendMS via an API endpoint, or is it acceptable in a microservice architecture to let the FrontendMS access this volume directly, but with read-only permissions?

Question 5: Do you have any other considerations or recommendations?

design – Building a Microservices App — Can you give feedback on architecture?

I did some googling, and I was directed to Software Engineering to ask architecture questions. If you know of a different forum that could help me, please direct me to it.

I recently started learning about microservices, and would like to build an experimental app (the backend) just for practice. I’ll explain the app requirements, and after that outline my microservices-based solutions (and some doubts/questions I have). I’d love to get your feedback, or your approach to building this app using microservices.

Please note: I am a beginner when it comes to microservices, and still learning. My solution might not be good, so I’d like to learn from you.

The App (Silly App):

The purpose of this app is to make sure users eat carrots four times a week. App admins create a carrot eating competition that starts on day x and ends 8 weeks after day x. Users can choose whether or not to participate in the competition. When a user joins the competition, they need to post a picture of themselves eating a carrot. The admin approves/rejects the picture. If approved, the carrot eating session counts towards the weekly goal, otherwise it does not. At the end of each week, participating users are billed $10 for each carrot eating session they missed (for example, if they only eat carrots two times that week, they’re billed $20). That $20 goes into a “money bucket”. At the end of two months, users who successfully ate carrots four times a week every single week divide the money in the bucket among themselves. For example, assume we have users A, B, C. User A missed all carrot eating sessions for two months (puts $40 a week in the money bucket, so $320 by the end of two months). Users B and C eat their carrots four times a week consistently for two months. So users B and C take home $320/2 = $160.

I wanted to start simple. Forget about money. Forget about admin approval. We can add that later. For now, let’s focus on a very simplified version of the app.

  • User can signup/login/logout to app
  • When a user signs up, they are automatically enrolled into the next carrot eating competition
  • Users can post an image of themselves eating a carrot
  • Users can see a feed of other users images (similar to instagram, except all pics are of people eating carrots)
  • Users can access their profile – a page that displays how they’re doing in the competition: i.e., for each week, how many carrots they ate, and which weeks they failed.
  • At any point in time, users can access a page that shows who the current winners are (i.e, users who did not miss a carrot eating session yet).

Is this an appropriate simplification to start with?

Thinking Microservices – Asynchronous Approach:

Auth Service: Responsible for Authenticating User


  • User Table: id, username, email, password


  • POST /users/new : signup
  • POST /users/login: login
  • POST /users/signout: signout


Image Service: Responsible for Saving Images (upload to Amazon S3)


  • User Table: userId, username
  • Image Table: imageId, userId, dateUploaded, imageUrl


  • POST /users/:userId/images: Post new image
  • GET /users/:userId/image/:imageId: Return a specific image
  • GET /images: Return all images (Feed)


  • Publishes:
    • Image:created (userId, imageId, imageUrl, dateUploaded)

Competition Service: Responsible for managing competition


  • Competition table: id, startDate, duration
  • User table: id, username, competitionId, results (see below)


  • POST /competition: create a competition
  • GET /competition/:competitionId/users/:userId: get results for a specific user
  • GET /competition/:competitionId/users: get a list of users participating in competition (see below)
  • GET /competition/:competitionId: get a list of winners, and for each loser how many carrot sessions they missed


  • Listens:
    • User:created
    • Image:created

In the database user table, Results is the JSON equivalent of

results = {
   week1: {
       date: 'oct 20 2020 - oct 27 2020',
       results: ['mon oct 20 2020', 'tue oct 21 2020', 'thur oct 23 2020'],
   },
   week2: {
       date: 'oct 28 2020 - nov 4 2020',
       results: ['somedate', 'somedate', 'somedate', 'somedate'],
   },
   week3: {
       date: 'nov 5 2020 - nov 12 2020',
       results: [],
   },
};
Better ideas on how to store this data appreciated
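One direction I considered myself: instead of the nested JSON blob, store one flat record per approved carrot picture and derive the weekly buckets on read. A sketch of that derivation (week boundaries simplified to exact 7-day windows; names hypothetical):

```javascript
const MS_PER_WEEK = 7 * 24 * 60 * 60 * 1000;

// Which competition week (1-based) a carrot-eating date falls into.
function weekOf(startDate, eventDate) {
    return Math.floor((eventDate - startDate) / MS_PER_WEEK) + 1;
}

// Collapse a user's flat list of approved dates into per-week counts,
// e.g. { week1: 3, week2: 4 }.
function tallyByWeek(startDate, dates) {
    const tally = {};
    for (const d of dates) {
        const key = `week${weekOf(startDate, d)}`;
        tally[key] = (tally[key] ?? 0) + 1;
    }
    return tally;
}
```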

GET /competition/:competitionId returns

const results = {
  winners: [{ userId: 'jkjl', username: 'jkljkl' }, { userId: 'jkjl', username: 'jkljkl' }],
  losers: [
    { userId: 'kffjl', username: 'klj', carrotDaysMissed: 3 },
    { userId: 'kl', username: 'kdddfj', carrotDaysMissed: 2 }
  ]
};

What do you think of this? How would you improve it? Or would you approach this from an entirely different way?

microservices – Where to place an in-memory cache to handle repetitive bursts of database queries from several downstream sources, all within a few milliseconds span

I’m working on a Java service that runs on Google Cloud Platform and uses a MySQL database via Cloud SQL. The database stores simple relationships between users, the accounts they belong to, and groupings of accounts. Being an “accounts” service, naturally there are many downstreams. Downstream service A may, for example, hit several other upstream services B, C, D, which in turn might call other services E and F; but because so much is tied to accounts (checking permissions, getting user preferences, sending emails), every service from A to F ends up hitting my service with identical, repetitive calls. In other words, a single call to some endpoint might result in 10 queries to get a user’s accounts, even though obviously that information doesn’t change over a few milliseconds.

So where is it appropriate to place a cache?

  1. Should downstream service owners be responsible for implementing a cache? I don’t think so, because why should they know about my service’s data, like what can be cached and for how long.

  2. Should I put an in-memory cache in my service, like Guava’s CacheLoader, in front of my DAO? But does this really provide anything over MySQL’s own caching? (Admittedly I don’t know anything about how databases cache, but I’m sure that they do.)

  3. Should I put an in-memory cache in the Java client? We use gRPC so we have generated clients that all those services A, B, C, D, E, F use already. Putting a cache in the client means they can skip making outgoing calls but only if the service has made this call before and the data can have a long-enough TTL to be useful, e.g. an account’s group is permanent. So, yea, that’s not helping at all with the “bursts,” not to mention the caches living in different zone instances. (I haven’t customized a generated gRPC client yet, but I assume there’s a way.)
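For reference, the shape I have in mind for option 2 is a read-through cache with a very short TTL in front of the DAO, just long enough to absorb a burst. A minimal sketch (hypothetical names; Guava’s CacheLoader plays this role in Java, and the clock is injectable only to make the sketch testable):

```javascript
// Wrap a real loader (DB query) in a short-TTL read-through cache.
function cachedLoader(loadFn, ttlMs, now = Date.now) {
    const cache = new Map(); // key -> { value, expiresAt }
    return (key) => {
        const hit = cache.get(key);
        if (hit && hit.expiresAt > now()) return hit.value; // burst absorbed here
        const value = loadFn(key); // one real query per key per TTL window
        cache.set(key, { value, expiresAt: now() + ttlMs });
        return value;
    };
}
```

With a TTL of, say, 500 ms, the 10 identical burst queries collapse into one DB hit without serving meaningfully stale data.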

I’m leaning toward #2 but my understanding of databases is weak, and I don’t know how to collect the data I need to justify the effort. I feel like what I need to know is: How often do “bursts” of identical queries occur, how are these bursts processed by MySQL (esp. given caching), and what’s the bottom-line effect on downstream performance as a result, if any at all?

I feel experience may answer this question better than finding those metrics myself.

Asking myself, “Why do I want to do this, given no evidence of any bottleneck?” Well, (1) it just seems wrong that there are so many duplicate queries, (2) it adds a lot of noise to our logs, and (3) I don’t want to wait until we scale to find out that it’s a deep issue.

microservices – Service integration with large amounts of data

I am trying to assess the viability of microservices/DDD for an application I am writing, in which a particular context/service needs to respond to an action completing in another context. While I would previously have handled this via integration events published to a message queue, I haven’t had to deal with events which could contain large amounts of data.

As a generic example. Let’s say we have an Orders and Invoicing context. When an order is placed, an invoice needs to be generated and sent out.

With those bits of information I would raise an OrderPlaced event with the order information in, for example:

public class OrderPlacedEvent
{
    public Guid Id { get; }
    public List<OrderItem> Items { get; }
    public DateTime PlacedOn { get; }
}

from the Orders context, and the Invoicing context would consume this event to generate the required invoice. This seems fairly standard, but all the examples I have found are fairly small and don’t address what would happen if the order had 1000+ items, which leads me to believe that maybe integration events are only intended for small pieces of information.

The ‘easiest’ way would be to just put an order ID in the event and query the Orders service for the rest of the information, but this would add coupling between the two services, which the approach is trying to remove.
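For clarity, the two payload shapes I am weighing look roughly like this (illustrative values only); the second is the ID variant, where the consumer fetches the items back from the Orders service on demand:

```javascript
// The "fat" event carries the order lines inline.
const fatEvent = {
    id: 'o-123', // hypothetical order id
    placedOn: '2021-03-01T10:15:00Z',
    items: [{ sku: 'A-1', qty: 2 }, { sku: 'B-9', qty: 1 }], // ...or 1000+ lines
};

// The "thin" variant drops the items and leaves a pointer instead.
function toThinEvent(order) {
    const { items, ...rest } = order;
    return { ...rest, itemsUrl: `/orders/${order.id}/items` };
}
```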

Is my assumption that event data should be minimal correct? If it is, how would I (or is it even possible to) correctly handle a scenario where there are large pieces of data which another context/service needs to respond to?

microservices – RabbitMQ messages reliability for at-least-once delivery

I wanted to ask what the possibilities are for maintaining reliability when exchanging messages between microservices, when one of those messages is rejected. As of today we don’t want to lose messages, so we re-queue the rejected ones. Consequently, there is a risk that the messages will be processed in a different order than expected. As a simple example, let’s take a simple architecture of two microservices, A and B. A publishes two messages, one after the other:

{"productStatus": "PUBLISHED", "productId": 1}
{"productStatus": "SOLD", "productId": 1}

Microservice B rejects the first message but accepts the second. Because we re-queue all rejected messages, the message with the PUBLISHED product status is processed again, and thus microservice B ends up with incorrect information about the status of product 1. A simple solution that comes to mind is to add an objectVersionNumber to each message, so that the consumer can tell whether it has already processed a newer version of the object:

{"productStatus": "PUBLISHED", "productId": 1, "objectVersionNumber": 1}
{"productStatus": "SOLD", "productId": 1, "objectVersionNumber": 2}
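The consumer-side guard this enables would be, in sketch form, keeping the highest version seen per product and discarding anything older when a re-queued message comes back around (names hypothetical; `apply` stands for the real message handler):

```javascript
function makeConsumer(apply) {
    const lastSeen = new Map(); // productId -> highest objectVersionNumber seen
    return (msg) => {
        const { productId, objectVersionNumber } = msg;
        if ((lastSeen.get(productId) ?? 0) >= objectVersionNumber) {
            return false; // stale re-delivery: acknowledge and discard
        }
        lastSeen.set(productId, objectVersionNumber);
        apply(msg);
        return true;
    };
}
```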

Are there any other approaches to solving this problem? Perhaps the semantics of the message should completely change?

architecture – How to abstract a payment service to make developing new payment microservices faster

I am part of a team that primarily works on payments integration. Creating a new microservice to handle a new payment type and integration takes us a lot of time and involves a lot of boilerplate and duplication, especially when it comes to creating the cloud infrastructure.

We currently build our services on the AWS platform. We use Terraform to codify our infrastructure, and we build all services using the microservices pattern. We currently do this by using AWS Route 53 latency-based routing to either a backing AWS API Gateway or a load balancer, which then routes requests in a round-robin fashion to Docker containers in AWS ECS or AWS Fargate clusters. The domain routing is made up of a global domain and then regional domains mapped to the regional services (from the API Gateway or ALB inwards into the system, most of the infrastructure is regional). We use AWS WAF and Shield for security.

We have somehow agreed on a way to optimise the infrastructure part. What remains now is the functional part that runs within the containers. We build this in JavaScript/Node.js/Express and store data in AWS DynamoDB. Most of our code is functional JavaScript, where classes are used minimally or not at all.

I have recommended the use of OOP features like interfaces/abstract classes/inheritance. As the concept of interfaces is not available in JavaScript, I can see how promising inheritance could be here: we can create a base class that has unimplemented or default implementations of common payment-service functionality, and all the concrete payment types (e.g. Direct Debit, Credit Card, PayPal, etc.) can inherit this base class and override these common functions, while using mixins to extend a class’s functionality with functionality from other useful utility classes or objects that might be required in multiple places or by multiple payment methods (but not all of them).
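A sketch of the shape I am proposing (all names hypothetical): a base class that pins down the payment contract, plus a mixin that bolts on a cross-cutting helper that only some payment types need.

```javascript
// Mixin: adds retry behaviour on top of any class with an authorize() method.
const RetryMixin = (Base) => class extends Base {
    async authorizeWithRetry(payment, attempts = 3) {
        for (let i = 1; ; i++) {
            try {
                return await this.authorize(payment);
            } catch (err) {
                if (i >= attempts) throw err; // give up after `attempts` tries
            }
        }
    }
};

class PaymentService {
    // "abstract" methods: concrete payment types must override these
    async authorize(payment) { throw new Error('authorize not implemented'); }
    async capture(payment) { throw new Error('capture not implemented'); }
}

class DirectDebitService extends RetryMixin(PaymentService) {
    async authorize(payment) {
        // the real provider call would go here
        return { ...payment, status: 'AUTHORIZED' };
    }
}
```

The mixins-only alternative would be composing plain objects or factory functions with Object.assign, which is closer to the functional style the team currently uses.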

The issue is that a key member of the team believes in sticking to the functional approach as much as is possible.

Please provide your thoughts and propose a better approach if you can. Is it also possible to use mixins alone and drop using OOP’s inheritance altogether in this case?

Thank you very much for your responses in advance.