architecture – Database Design for Form Builder

Hi, I’m trying to create a form builder in my app for different events that have happened, and then publish the forms so users can fill them out.

A little about the logic:

Forms are created per event that has happened, and then users can populate and view them.

As for the database I’m using Postgres.

Here is how my current models look:

Form model:

form
-----
  id (PK)
  name
  user_id (FK to users.id)

Form Element

form_element
-------------
  id (PK)
  form_id (FK to form.id)
  element_type_id (FK to element_types.id)

Element Type (could be text_field, radio, dropdown)

element_types
-------------
  id (PK)
  name

Form Element Value (stores the values entered by users)

form_element_value
------------------
    id (PK)
    element_id (FK to form_element.id)
    user_id (FK to users.id)
    value

The form is populated with form_elements that reference it, a one-to-many relationship (one form has many fields). element_types tells each element which type it is, and stores the list of different elements a form can have. Types could be: “text_field”, “drop_down_list”, “radio_buttons”, “checkbox”.
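
To make the relationships explicit, here is the same design sketched as TypeScript-style types (purely illustrative; the column types are assumptions and the real definitions live in Postgres):

// form: one row per event's form, owned by the user who created it
interface Form {
  id: number;       // PK
  name: string;
  userId: number;   // FK to users.id
}

// element_types: the catalogue of element kinds a form can contain
interface ElementType {
  id: number;       // PK
  name: 'text_field' | 'drop_down_list' | 'radio_buttons' | 'checkbox';
}

// form_element: one row per field on a form (one form has many elements)
interface FormElement {
  id: number;            // PK
  formId: number;        // FK to form.id
  elementTypeId: number; // FK to element_types.id
}

// form_element_value: one row per value a user entered into a field
interface FormElementValue {
  id: number;        // PK
  elementId: number; // FK to form_element.id
  userId: number;    // FK to users.id
  value: string;
}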

Now I have a few questions about my database design:

  1. Should the Form Element model and the Element Type model have a one-to-one relationship?
  2. How should I handle a type like Dropdown? Where would I store the additional dropdown options: in a new field, or in a new table? How would I design that?
  3. Should the Element Values table be many-to-many? Do I have to store user_id per field value, or could I somehow bind it to the form?

information architecture – Assign people to organizations, or organizations to people?

We’re designing a system that allows administrators to create an infinite number of users, and create an infinite number of organizations.

Most users will only belong to one organization, but some belong to more than one (and it might be necessary to remove a user’s access from all organizations quickly). An organization might have just one user, or many. There’s also a little hierarchy involved, where a user could belong to a parent organization, or one/all of its children.
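
To make those relationships concrete, here is a rough sketch of the entities involved (TypeScript-style and purely illustrative; the names are assumptions, not our actual schema):

interface Organization {
  id: string;
  name: string;
  parentOrganizationId?: string; // optional: organizations form a shallow hierarchy
}

interface User {
  id: string;
  name: string;
}

// Many-to-many link: most users have exactly one membership, some have several.
interface Membership {
  userId: string;
  organizationId: string;
}

// Revoking a user's access everywhere means deleting every Membership row for that userId.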

There’s a little debate: Is it better UX to assign organizations at the user level, or users at the organization level? Either way, we’re thinking it’s best to provide access to the information in both places. But if you could only choose one place for setup and editing, which would you choose, and why?

architecture – Different APIs, same database

I have this game project in which two really different REST APIs (one for admin and the other for the player) should query the same data.

The backend is built in Node.js, by the way.

There are two ways to do this that I’m aware of:

  1. Use a role-system to protect endpoints from one or other kind of user
  2. Use a shared “microservice” to query database with two api gateways (backend for frontend)

The first I dislike because it’s messy and code tends to be ugly. The second I dislike because microservices are definitely a pain to handle and development speed gets really slowed down.
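
To make option 1 concrete, here is a minimal sketch of a role check as Express middleware (Express itself is an assumption — the post only says Node.js — and the authentication step that sets the role is left out):

import express from 'express';

type Role = 'admin' | 'player';

// Assumes an earlier auth middleware has attached the caller's role to the request.
interface AuthedRequest extends express.Request {
  userRole?: Role;
}

// Returns middleware that rejects callers who lack one of the allowed roles.
function requireRole(...allowed: Role[]) {
  return (req: AuthedRequest, res: express.Response, next: express.NextFunction) => {
    if (req.userRole && allowed.includes(req.userRole)) {
      return next();
    }
    res.status(403).json({ error: 'forbidden' });
  };
}

const app = express();

// Both APIs live in the same service and share the same data layer; only the guards differ.
app.get('/api/admin/players', requireRole('admin'), (req, res) => {
  res.json({ players: [] }); // placeholder: query the shared database here
});

app.get('/api/game/state', requireRole('player', 'admin'), (req, res) => {
  res.json({ state: {} }); // placeholder: query the shared database here
});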

How would you guys approach this?

Thanks

architecture – Using rest services or python modules directly?

Imagine you’re in a Python setup with all of the projects you’re working on. As a baseline, you have a Python module that is the backbone (backbone.py) of everything when it comes to getting data from a data source. There is also a FastAPI implementation of this backbone (backbone-api.py), offering a handy way of getting the data out.

Now there is another project emerging which needs to use the backbone’s data, also implemented with Python and FastAPI (different-angle.py). It’s pretty much just a different angle on the data.

There appear to be two options:

Option A:
Importing the backbone module directly into the new project, to allow direct access to the data and best performance.

Option B:
Using the FastAPI implementation of the backbone, to simplify the process, but at the cost of latency.

To my mind, Option A seems slightly better, mostly because it will increase performance and leverages Python modules better. Option B has the advantage of being an easy setup, since you don’t need to install the backbone module and its configuration. That is probably fine for smaller operations, but Option B would likely have disadvantages when it comes to returning larger amounts of data.


architecture – Should 2 different MySql databases from 2 different applications be joined together if the 2 apps are merging into 1?

This is a question about software architecture, not databases.

I currently have 2 completely separate web applications, call them A and B for simplicity. They are on their own machines, with their own databases and web servers. My job is to integrate these 2 applications together (how I’m doing that from the front/backend is irrelevant here).

My scenario when it comes to the database is this: previously apps A and B were separate, but now B is being treated as a module, and will be used inside of A. To the user, it will seem like there is only one application. Behind the scenes though, is it best practice to keep the databases separate, or is it best for app A’s database to completely absorb the tables from app B’s database and just create one, very large database?

The data isn’t related between the two applications, so to me it makes sense to keep the 2 databases separate, since B is still acting as its own module, regardless of whether it’s part of app A or not. What do companies that have a large application do? Do they create a new database for every large component of their software, or do they keep it all in one place?

architecture – Synchronizing clients – Game Development Stack Exchange

I have a server-client setup where each client has a number of screens attached, and the screens together form the display. As such, the visuals displayed by each client need to be roughly in sync. Luckily the domain is not high speed, so I don’t have to have them all perfectly in sync, but obviously more in sync is better.

I am targeting a 100 ms lead time between receiving states and acting upon them; I’m operating on a LAN, so that is plenty. Packets are sent out at 100 Hz, and interpolation is handled through a buffer which automatically selects the right packets, so I don’t have to worry about that. Visuals are running at 60 FPS, so there is minimal difference frame to frame.

Right now there are two basic solutions: 1. have each client sync with a time server, over the internet or locally, or 2. emulate the same logic in the code and do the synchronization as part of establishing the connection. What would be the best way to synchronize time?
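
To make option 2 concrete, here is a minimal sketch of estimating each client's clock offset during connection setup (a Cristian-style exchange; the transport and the sendTimeRequest helper are assumptions):

// Estimate offset = serverClock - localClock. sendTimeRequest is assumed to send a
// message to the server and resolve with the server's timestamp (ms) when the reply arrives.
async function estimateClockOffset(
  sendTimeRequest: () => Promise<number>,
  samples = 10
): Promise<number> {
  const offsets: number[] = [];
  for (let i = 0; i < samples; i++) {
    const t0 = Date.now();                    // local time when the request is sent
    const serverTime = await sendTimeRequest();
    const t1 = Date.now();                    // local time when the reply arrives
    const rtt = t1 - t0;
    // Assume the server stamped its clock halfway through the round trip.
    offsets.push(serverTime + rtt / 2 - t1);
  }
  // Take the median to discard outliers from occasional slow packets.
  offsets.sort((a, b) => a - b);
  return offsets[Math.floor(offsets.length / 2)];
}

// Each client then renders the state timestamped (Date.now() + offset - 100), i.e. the
// shared server clock minus the 100 ms interpolation delay mentioned above.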

I’m concerned that I’m running into an XY problem here assuming that synchronizing time this way is a good idea at all.

I have noticed that existing questions on this topic are about 1 server connecting to several clients, each showing their own visuals. I think this problem of requiring sync between clients to maintain visual continuity is a bit different, so maybe there will be different approaches. I am aware of other questions that discuss the more common situation though.

design – Building a Microservices App — Can you give feedback on architecture?

I did some googling, and I was directed to Software Engineering to ask architecture questions. If you know of a different forum that could help me, please direct me to it.

I recently started learning about microservices, and would like to build an experimental app (the backend) just for practice. I’ll explain the app requirements, and after that outline my microservices-based solutions (and some doubts/questions I have). I’d love to get your feedback, or your approach to building this app using microservices.

Please note: I am a beginner when it comes to microservices, and still learning. My solution might not be good, so I’d like to learn from you.

The App (Silly App):

The purpose of this app is to make sure users eat carrots four times a week. App admins create a carrot eating competition that starts on day x and ends 8 weeks after day x. Users can choose whether or not to participate in the competition. When a user joins the competition, they need to post a picture of themselves eating a carrot. The admin approves/rejects the picture. If approved, the carrot eating session counts towards the weekly goal, otherwise it does not. At the end of each week, participating users are billed $10 for each carrot eating session they missed (for example, if they only eat carrots two times that week, they’re billed $20). That $20 goes into a “money bucket”. At the end of two months, users who successfully ate carrots four times a week every single week divide the money in the bucket among themselves. For example, assume we have users A, B, C. User A missed all carrot eating sessions for two months (puts $40 a week in the money bucket, so $320 by the end of two months). Users B and C eat their carrots four times a week consistently for two months. So users B and C take home $320/2 = $160.

Simplification:
I wanted to start simple. Forget about money. Forget about admin approval. We can add that later. For now, let’s focus on a very simplified version of the app.

  • User can signup/login/logout to app
  • When a user signs up, they are automatically enrolled into the next carrot eating competition
  • Users can post an image of themselves eating a carrot
  • Users can see a feed of other users’ images (similar to Instagram, except all pics are of people eating carrots)
  • Users can access their profile – a page that displays how they’re doing in the competition: i.e., for each week, how many carrots they ate and which weeks they failed
  • At any point in time, users can access a page that shows who the current winners are (i.e, users who did not miss a carrot eating session yet).

Is this an appropriate simplification to start with?

Thinking Microservices – Asynchronous Approach:

Auth Service: Responsible for Authenticating User

Database:

  • User Table: id, username, email, password

Routes:

  • POST /users/new : signup
  • POST /users/login: login
  • POST /users/signout: signout

Events:

  • Publishes:
    • User:created

Image Service: Responsible for Saving Images (upload to Amazon S3)

Database:

  • User Table: userId, username
  • Image Table: imageId, userId, dateUploaded, imageUrl

Routes:

  • POST /users/:userId/images: Post new image
  • GET /users/:userId/image/:imageId: Return a specific image
  • GET /images: Return all images (Feed)

Events:

  • Publishes:
    • Image:created (userId, imageId, imageUrl, dateUploaded)
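
As a rough sketch, the image route and its Image:created publish could look like this (Express and the publish helper are assumptions — swap in whichever broker you end up choosing):

import express from 'express';
import { randomUUID } from 'crypto';

// Assumed helper wrapping whatever message broker is chosen (NATS, RabbitMQ, ...).
async function publish(subject: string, data: unknown): Promise<void> {
  console.log('publishing', subject, data); // placeholder
}

const app = express();
app.use(express.json());

// POST /users/:userId/images — save the metadata and emit Image:created.
app.post('/users/:userId/images', async (req, res) => {
  const image = {
    imageId: randomUUID(),
    userId: req.params.userId,
    imageUrl: req.body.imageUrl, // assumes the client uploaded to S3 and sends back the URL
    dateUploaded: new Date().toISOString(),
  };
  // ... insert `image` into the Image table here ...
  await publish('Image:created', image);
  res.status(201).json(image);
});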

Competition Service: Responsible for managing competition

Database:

  • Competition table: id, startDate, duration
  • User table: id, username, competitionId, results (see below)

Routes:

  • POST /competition: create a competition
  • GET /competition/:competitionId/users/:userId: get results for a specific user
  • GET /competition/:competitionId/users: get a list of users participating in competition (see below)
  • GET /competition/:competitionId: get a list of winners, and for each loser how many carrot sessions they missed

Events:

  • Listens:
    • User:created
    • Image:created

In the Competition service’s user table, results is the JSON equivalent of:

results = {
   week1: {
       date: 'oct 20 2020 - oct 27 2020',
       results: ['mon oct 20 2020', 'tue oct 21 2020', 'thur oct 23 2020'],
   },
   week2: {
       date: 'oct 28 2020 - nov 4 2020',
       results: ['somedate', 'somedate', 'somedate', 'somedate'],
   },
   week3: {
       date: 'nov 5 2020 - nov 12 2020',
       results: [],
   },
   ...
}

Better ideas on how to store this data are appreciated.

GET /competition/:competitionId returns

const results = {
  winners: [
    { userId: 'jkjl', username: 'jkljkl' },
    { userId: 'jkjl', username: 'jkljkl' }
  ],
  losers: [
    { userId: 'kffjl', username: 'klj', carrotDaysMissed: 3 },
    { userId: 'kl', username: 'kdddfj', carrotDaysMissed: 2 }
  ]
};
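
On the consumer side, here is a sketch of how the Competition service might fold an Image:created event into the results structure above (the week arithmetic is simplified and the shapes are assumptions based on the payloads listed earlier):

// Payload published by the Image service.
interface ImageCreated {
  userId: string;
  imageId: string;
  imageUrl: string;
  dateUploaded: string; // ISO date
}

interface Competition {
  id: string;
  startDate: string;     // ISO date
  durationWeeks: number; // 8
}

// results keyed by week ("week1", "week2", ...), each week listing the dates a carrot was eaten.
type Results = Record<string, { date: string; results: string[] }>;

// Called for every Image:created event this service receives.
function recordCarrotSession(
  competition: Competition,
  results: Results,
  event: ImageCreated
): Results {
  const start = new Date(competition.startDate).getTime();
  const uploaded = new Date(event.dateUploaded).getTime();
  const week = Math.floor((uploaded - start) / (7 * 24 * 60 * 60 * 1000)) + 1;
  if (week < 1 || week > competition.durationWeeks) return results; // outside the competition
  const key = `week${week}`;
  const entry = results[key] ?? { date: '', results: [] }; // week date range omitted for brevity
  entry.results.push(event.dateUploaded);
  return { ...results, [key]: entry };
}

// A "winner" so far is any user with at least 4 dates recorded in every completed week.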

What do you think of this? How would you improve it? Or would you approach this in an entirely different way?

coding standards – Should default settings be considered business rules from the viewpoint of “Clean Architecture”?

Examples of default settings:

  • Default port for server applications
  • Default resources directory for Java Applications
  • Default Webpack config file (“webpack.config.js”)

My particular case

I have a utility for building web application projects, based on gulp and webpack.


When the user has not specified some settings in the config file of their project (similar to pom.xml, webpack.config.js, etc.), DefaultSettings will be substituted. An example of default settings for markup preprocessing:

{
  indentationSpacesCountInOutputCode: 2,
  mustExecuteHTML5_Validation: true,
  mustExecuteCodeQualityInspection: true,
  mustExecuteAccessibilityInspection: true
}
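
For what it’s worth, the substitution itself can be sketched as a plain merge of the user’s config over the defaults (a simplified TypeScript sketch; the real settings objects are nested more deeply):

interface MarkupProcessingSettings {
  indentationSpacesCountInOutputCode: number;
  mustExecuteHTML5_Validation: boolean;
  mustExecuteCodeQualityInspection: boolean;
  mustExecuteAccessibilityInspection: boolean;
}

const defaultSettings: MarkupProcessingSettings = {
  indentationSpacesCountInOutputCode: 2,
  mustExecuteHTML5_Validation: true,
  mustExecuteCodeQualityInspection: true,
  mustExecuteAccessibilityInspection: true
};

// Whatever the user did specify in the project config file wins; everything else falls back.
function resolveSettings(
  userSettings: Partial<MarkupProcessingSettings>
): MarkupProcessingSettings {
  return { ...defaultSettings, ...userSettings };
}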

Maybe I can call these “business rules”, but they include, for example, the project-relative paths which must be watched to provide browser live reloading for a specific framework:

{
  laravelBasedProjectRelativePathsWhichWillBeWatched: {
    includedDirectoriesRelativePaths: [
      "app",
      "bootstrap",
      "config",
      "database",
      "public",
      "resources",
      "routes",
      "storage",
      "tests"
    ]
  }
}

In Clean Architecture, the business rules must not know anything about frameworks.
I also need to put the default settings for specific utilities (e.g. Webpack) somewhere.

The InternalSettings (I call them that because the user cannot override them) are currently restrictions of my library, for example the supported file name extensions:

export default {
  supportedSourceFileNameExtensionsWithoutDots: [ "mjs", "js", "ts" ],
  supportedOutputFileNameExtensionsWithoutDots: [ "js" ]
};

I am not sure I can call these “business rules”.

If default and internal settings are not business rules, what are they?

hyper v – Storage Spaces Direct and Virtualization Architecture

I’m trying to figure out the best way to architect a 2-node failover cluster with the Hyper-V role installed. I could really use some input and suggestions from others who have already been down this road.

All in, I have 4 physical machines with Datacenter 2019 installed on each. On machines 1 and 2 I have installed one VM each and clustered them together as a network load balancer. This works great, no problems here.

On machines 3 and 4 I want to create a Storage Spaces Direct failover cluster. On these 2 machines I also want to virtualize many services in VMs: SQL Server, a file server, an email server, etc.

What I am not grasping is this: should I create the Storage Spaces Direct failover cluster at the host level or at the Hyper-V VM level? Obviously I need the data replicated across both machines should one machine go down.

I am not sure what the best approach here is.

Thanks in advance.

architecture – Instantiating GameObjects in Custom Game Engine

So, I’m having a bit of an issue with instantiating my game objects into the universe (my Scene object).

I can create an empty object from scratch and populate it from there with ease; my problem starts when I try to clone the game object which I just created.

My steps are:

  1. I create an empty game object (GAMEOBJECT A)
  2. I add a component to it (COMPONENT A)
  3. Now I create a game object B from A
  4. GameObject B is created, and component A is in the component list of game object B
  5. I try to get the component using inheritance, but a nullptr returns

It seems that when I try to clone components, only the base class pointer is returned. I then try to recreate the derived class and add it to the new game object’s component list, but obviously I fail.

  • If I can’t solve this with inheritance, how can I solve it?
  • Is there a better way to implement such a design?
  • Or am I just failing to use inheritance in C++? If so, can you point out what I am doing wrong here?

This is my GameObject AKA RGameElement

[code screenshot of the RGameElement class]

This is my Component AKA RElementComponent

[code screenshot of the RElementComponent class]

and this is my Instantiate-via-gameobject-ref function

[code screenshot of the Instantiate function]