docker – What is / are the best threat modeling method(s) for container security?

I am currently researching threat modeling for container security, and I am wondering which methods are best suited to it. So far I have concluded that STRIDE is the most widely used, and that it is also applied to container security because it is easy to understand and each threat category is easy to map to the CVE database.

Maybe there are people in the community with experience in the field who can advise, or share which threat modeling method they consider best for container security and why.

Thanks.

Database design – best method for modeling a string matrix (PostgreSQL)

My goal is to make this as convenient as possible (i.e. the fewest extra tables in the application) and efficient (the fewest redundant indexes).

I have to model data that would ideally look like this without optimization:

.______________________________.
|   en-US   |  pt-BR   | zn-CN | ...
|===========|==========|=======|
|university | academia |  大学 |
| self-made |  NULL    |  自制 |
|  NULL     | saudade  |  思   |
'-----------'----------'-------'

The main use of the application is to look up a word in one locale and find its counterpart in another locale.

The caveats are:

  • I don't know the number of locales in advance. Ideally they can be created by the application at runtime, but it's acceptable for adding a new locale to be an Ops event that requires downtime and table changes.
  • There is no "main" locale.
  • If we factor the terms out into a (locale, word) table, there is the A,B == B,A problem, but we can work around it with application logic that normalizes B,A to A,B so that every pair is stored only once. It would be nicer if the schema didn't have this problem at all.
  • Bonus: I want to avoid repeating the words/indexes everywhere, as happens with GNU gettext data files.

Ideally, queries would be as convenient as select "en-US" from words where "zn-CN" = 'nihao'

I played with some alternatives on db<>fiddle (https://dbfiddle.uk/?rdbms=postgres_12&fiddle=b22dddd5e7b70cdf9f61500ef076cda9), but none is anywhere near optimal.

create table A (
  locale text NOT NULL,
  word text NOT NULL,
  primary key (locale, word)
);
insert into A values
('en-US', 'hi'),
('zn-CN', 'nihao'),
('pt-BR', 'oi');

select * from A limit 5;
3 rows affected

locale | word 
:----- | :----
en-US  | hi   
zn-CN  | nihao
pt-BR  | oi   
create table B (
  locale text NOT NULL,
  word text NOT NULL,
  locale2 text NOT NULL,
  word2 text NOT NULL,
  PRIMARY KEY(locale, word, locale2, word2),
  FOREIGN KEY(locale,word) REFERENCES A(locale,word),
  FOREIGN KEY(locale2,word2) REFERENCES A(locale,word)
);

Table B would be noisy, but would allow queries close to the ideal. I haven't checked at all, though, whether the engine's indexing would be optimal with just these keys.

-- only 3 words is already a mess without application normalization logic
insert into B values
('en-US', 'hi', 'pt-BR', 'oi'),
('pt-BR', 'oi', 'en-US', 'hi'),
('en-US', 'hi', 'zn-CN', 'nihao'),
('pt-BR', 'oi', 'zn-CN', 'nihao'),
('zn-CN', 'nihao', 'en-US', 'hi'),
('zn-CN', 'nihao', 'pt-BR', 'oi');

select * from B limit 10;
6 rows affected

locale | word  | locale2 | word2
:----- | :---- | :------ | :----
en-US  | hi    | pt-BR   | oi   
pt-BR  | oi    | en-US   | hi   
en-US  | hi    | zn-CN   | nihao
pt-BR  | oi    | zn-CN   | nihao
zn-CN  | nihao | en-US   | hi   
zn-CN  | nihao | pt-BR   | oi   
create table C (
  locale text NOT NULL,
  word text NOT NULL,
  locale2 text NOT NULL,
  word2 text NOT NULL,
  FOREIGN KEY(locale,word) REFERENCES A(locale,word),
  FOREIGN KEY(locale2,word2) REFERENCES A(locale,word)
);
-- we can probably normalize somehow by choosing a precedence for locales, e.g. en-US < pt-BR < zn-CN
insert into C values
('en-US', 'hi', 'pt-BR', 'oi'),
('en-US', 'hi', 'zn-CN', 'nihao'),
('pt-BR', 'oi', 'zn-CN', 'nihao');
3 rows affected
select word2 as translation from C where locale='en-US' and word='hi' and locale2='zn-CN'; 
| translation |
| :---------- |
| nihao       |
-- application must search on other side if locale normalization says so
select word as translation from C where locale2='zn-CN' and word2='nihao' and locale ='en-US';
| translation |
| :---------- |
| hi          |

... and I have not even started to check whether the engine handles the text keys efficiently when they are repeated in every relationship table.

create table A (
  id SERIAL PRIMARY KEY,
  locale text NOT NULL,
  word text NOT NULL,
  UNIQUE  (locale, word)
);
insert into A (locale, word) values 
('en-US', 'hi'),
('zn-CN', 'nihao'),
('pt-BR', 'oi');
select * from A limit 5;
id    locale  word
1     en-US   hi
2     zn-CN   nihao
3     pt-BR   oi
create table B (
  id serial PRIMARY KEY,
  word1 integer NOT NULL references A(id),
  word2 integer NOT NULL references A(id),
  UNIQUE (word1, word2),
  UNIQUE (word2, word1)
);

insert into B (word1, word2) values
(1, 2),
(1, 3);
select * from B;
id    word1   word2
1     1       2
2     1       3

But the queries are not very convenient:

select word from A where locale='en-US' and id in (
   select word1 from B where word2 = (
      select id from A where locale='zn-CN' and word='nihao')
  UNION
   select word2 from B where word1 = (
      select id from A where locale='zn-CN' and word='nihao')
);

Index misses everywhere.

https://dbfiddle.uk/?rdbms=postgres_12&fiddle=1e83c68b2f90f883340b1d24bb1e2a36
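For comparison, one design that sidesteps the A,B == B,A problem entirely is to introduce a synthetic "concept" key and store one row per locale for each concept; translations are simply rows that share a concept id, and adding a locale is just inserting rows, with no schema change. This is only a sketch under my own naming (the concept and word tables and their column names are invented), shown with Python's built-in sqlite3 so it runs as-is; the DDL carries over directly to PostgreSQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE concept (id INTEGER PRIMARY KEY);
CREATE TABLE word (
  concept_id INTEGER NOT NULL REFERENCES concept(id),
  locale     TEXT    NOT NULL,
  word       TEXT    NOT NULL,
  PRIMARY KEY (concept_id, locale),  -- one word per locale per concept
  UNIQUE (locale, word)              -- serves lookups on the WHERE side
);
INSERT INTO concept (id) VALUES (1);
INSERT INTO word VALUES
  (1, 'en-US', 'hi'),
  (1, 'zn-CN', 'nihao'),
  (1, 'pt-BR', 'oi');
""")

# The ideal query "select en-US from words where zn-CN = 'nihao'"
# becomes a self-join through the shared concept id:
row = conn.execute("""
    SELECT tgt.word
    FROM word AS src
    JOIN word AS tgt ON tgt.concept_id = src.concept_id
    WHERE src.locale = 'zn-CN' AND src.word = 'nihao'
      AND tgt.locale = 'en-US'
""").fetchone()
print(row[0])  # hi
```

If the same spelling can belong to several concepts (homographs), drop the UNIQUE (locale, word) constraint and index those columns non-uniquely instead; the self-join stays the same, returning one row per matching concept.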

SQL Server – best practice for modeling data that is both general (standard) and entity-specific

I have already tried to find good guidance on this, but without much luck, so apologies in advance if this is a duplicate.

In short, we pay external contractors to handle cases for our clients. We already have tables with contractor and client information in our SQL Server database. In the future we would also like to store accounting and billing information there. The pay rates can differ for each client and contractor, but usually each client has a general "standard" rate that applies to most contractors.

The original suggestion was to create a new table with the following basic design:

clientContractorPay

  • clientID – Foreign key to the client table
  • contractorID – Foreign key to the contractor table
  • basePay – Pay rate for this client-contractor combination
  • ... – Several other columns (10+, and probably growing) with additional pay-rate information
  • A unique index to optimize lookups and prevent multiple rows for a given client-contractor combination.

Contractor-specific rates would of course be tied to the respective contractor (and client). The general (standard) rate for a client is stored in a row where contractorID is NULL. This avoids duplicating the same standard rate for every contractor to whom no special exception applies.

However, one of our senior developers has strong reservations about this design. Their main argument is that using NULL in the contractorID column to mean "this is the row with the standard rate" is unintuitive and/or confusing; in other words, that it is bad to assign meaning to NULL values.

Their counter-proposal was to duplicate these new pay-rate columns in the client table. The values stored there would indicate the standard rate for each client, while contractor-specific exceptions would still live in the new table above.

It seems clear that both suggestions would work, but I have my own reservations about the second. Mainly, it feels wrong to store the same kind of data (client-contractor pay information) in multiple places, not to mention the more complex read/write logic. I also dislike duplicating these new columns in both tables, since it would force us to add any future pay-rate columns to both.

Still, I can see my colleague's point about possible misuse of NULL here. At the very least, it is not immediately obvious that rows with a NULL contractorID contain standard rates.

It's been too long since my database courses, so I'm not sure what current best practice for this kind of entity relationship is. I'm open to whatever works best in the long term and would appreciate any expert guidance, especially links to additional resources.

Thanks in advance!
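To make the first option concrete: the "NULL contractorID means standard rate" rule usually reduces to a single fallback query. Here is a minimal sketch of that lookup, with invented table contents and a helper function of my own, using Python's sqlite3 as a stand-in for SQL Server (the SQL itself is portable):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE clientContractorPay (
  clientID     INTEGER NOT NULL,
  contractorID INTEGER,           -- NULL marks the client's standard rate
  basePay      REAL    NOT NULL,
  UNIQUE (clientID, contractorID)
);
INSERT INTO clientContractorPay VALUES
  (1, NULL, 50.0),  -- standard rate for client 1
  (1, 7,    65.0);  -- exception for contractor 7
""")

def rate_for(client_id, contractor_id):
    # Prefer the contractor-specific row; otherwise fall back to the
    # standard row (contractorID IS NULL).
    row = conn.execute("""
        SELECT basePay
        FROM clientContractorPay
        WHERE clientID = ? AND (contractorID = ? OR contractorID IS NULL)
        ORDER BY contractorID IS NULL  -- specific row sorts before NULL row
        LIMIT 1
    """, (client_id, contractor_id)).fetchone()
    return row[0] if row else None

print(rate_for(1, 7))  # 65.0, the exception
print(rate_for(1, 9))  # 50.0, the standard fallback
```

One point in this design's favor: SQL Server's unique indexes treat NULLs as equal, so the unique index on (clientID, contractorID) also enforces at most one standard-rate row per client.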

Javascript – Mongoose schema modeling

I am currently building my first web app, with Node and MongoDB in the backend. I could use some opinions on the schemas/models I have set up.

I have three schemas: user, pet, food. Here is the general relationship between them:

Users have pets -> pets have a list of bad ingredients and a list of favorite foods -> foods have a list of ingredients.

Here is my user schema:

const mongoose = require('mongoose');
const validator = require('validator');

const userSchema = new mongoose.Schema({
  name: {
    type: String,
    required: true,
    trim: true,
  },
  password: {
    type: String,
    required: true,
    trim: true,
    minlength: 8,
    validate(value) {
      if (value.toLowerCase().includes('password')) {
        throw new Error('password contains password');
      }
    },
  },
  email: {
    type: String,
    unique: true,
    required: true,
    trim: true,
    lowercase: true,
    validate(value) {
      if (!validator.isEmail(value)) {
        throw new Error('Email is invalid');
      }
    },
  },
  tokens: [
    {
      token: {
        type: String,
        required: true,
      },
    },
  ],
});

userSchema.virtual('pets', {
  ref: 'Pet',
  localField: '_id', // associated with the _id of the user
  foreignField: 'owner', // the name of the field on the other object that creates the relationship, which we set to the owner
});

const User = mongoose.model('User', userSchema);

Here is my pet model:

const Pet = mongoose.model('Pet', {
  name: {
    type: String,
    required: true,
    trim: true,
  },
  badIngredients: [
    {
      Ingredient: {
        type: String,
      },
    },
  ],
  favoriteFoods: [{ type: mongoose.Schema.Types.ObjectId, ref: 'Food' }],
  owner: {
    type: mongoose.Schema.Types.ObjectId,
    required: true,
    ref: 'User', // reference to the User model
  },
});

and finally the food model:

const Food = mongoose.model('Food', {
  name: {
    type: String,
    required: true,
    trim: true,
  },
  brand: {
    type: String,
    required: true,
    trim: true,
  },
  flavor: {
    type: String,
    required: true,
    trim: true,
  },
  ingredients: [
    {
      type: String,
      required: true,
      trim: true,
    },
  ],
  imagePath: {
    type: String,
    required: true,
    trim: true,
  },
});

I'm not sure whether it would be a good idea to have a relationship between the pet's ingredient list and the food's ingredient list. The general idea is that a user can select ingredients their pet is allergic to and filter the food list down to foods that do NOT contain those ingredients. They can then assign favorite foods to their pet.

I could also use some advice on whether using the userSchema.virtual('pets') property is better than simply having a property on my user that is a list of pet references.

The food itself is static data that is stored somewhere.

mongodb – Modeling a database for a scripted dialogue editor

Suppose I want to model a database for a writing platform that writers of animated TV shows can use for scripting.

I want to store the dialogue in a database in which every phrase of dialogue has:

  • A field that identifies the speaking actor
  • A field that specifies the phrase start time
  • A field that specifies the phrase's end time
  • The sentence itself
    • and any formatting applied to it (e.g. heading, bold, italics)

A script would then be an ordered set of phrases:

({phrase1}, {phrase2}, ..., {phraseK})

The main challenge is: I want to support editing the phrases while preserving the timestamps and an undo history.

I was thinking of modeling this in a NoSQL database (e.g. Mongo) as a series of operational transformations. However, I am not sure whether this is the best approach.
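Before reaching for full operational transformation, it may be worth noting that "edit text, preserve timestamps, support undo" can be modeled by keeping the timing fields and the editable content side by side in each phrase document, with a per-phrase history stack. A minimal in-memory sketch (all field names are my own invention, not a fixed schema):

```python
import copy

# One "phrase" document: timing fields sit next to the editable content,
# and every text edit pushes the previous state onto a history stack.
phrase = {
    "speaker": "ACTOR_1",
    "start": 12.4,   # seconds; never touched by text edits
    "end": 15.0,
    "text": "Hello there",
    "format": ["bold"],
    "history": [],   # previous (text, format) states, for undo
}

def edit_text(p, new_text, new_format=None):
    p["history"].append((p["text"], copy.copy(p["format"])))
    p["text"] = new_text
    if new_format is not None:
        p["format"] = new_format

def undo(p):
    if p["history"]:
        p["text"], p["format"] = p["history"].pop()

edit_text(phrase, "Hello there, General", ["bold", "italic"])
undo(phrase)
print(phrase["text"])   # Hello there
print(phrase["start"])  # 12.4 -- timestamps survive both edit and undo
```

A script is then just an ordered list of phrase ids. Operational transformation mainly earns its complexity when several writers edit the same phrase concurrently; for single-writer undo, a per-phrase history stack like this is much simpler.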

Any advice would be greatly appreciated! Thanks a lot

mathematical modeling – How do I find the unknown variables of a logistic model?

I am trying to fit a function to the spread of an illness. In this case, the disease is swine flu. I eventually came across the logistic equation: $$y = \frac{C}{1 + Ae^{-Bx}}$$
I saw a video explaining how to find the variables when the $y$-intercept is larger than $0$, by substituting the $x$ and $y$ values. However, my initial value is $0$, and I don't really understand how to find the variables in that case. If I substitute the values $(0, 0)$ for $x$ and $y$, and $295446$ for $C$ (which is my limit), I get: $$0 = \frac{295446}{1 + A(1)}$$ and that has no solution for $A$. Any kind of explanation would be great! Thanks a lot.
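For what it's worth, the algebra only breaks because of the zero: substituting a positive intercept $y_0$ gives $y_0 = C/(1+A)$, hence $A = C/y_0 - 1$, while $y_0 = 0$ has no solution. A common workaround is to start the model at the first nonzero observation instead. A small sketch with made-up data points (only $C = 295446$ comes from the question; $y_0$, $x_1$, $y_1$ are hypothetical):

```python
import math

C = 295446        # the limit (carrying capacity) from the question
y0 = 6            # hypothetical first *nonzero* observation, taken as x = 0

# Substituting (0, y0) into y = C / (1 + A * e^(-B*x)) gives
# y0 = C / (1 + A), so:
A = C / y0 - 1
print(A)          # 49240.0

# A second point (x1, y1) then pins down B:
# y1 = C / (1 + A * e^(-B*x1))  =>  B = -ln((C/y1 - 1) / A) / x1
x1, y1 = 10, 500  # hypothetical second observation
B = -math.log((C / y1 - 1) / A) / x1

# Sanity check: the fitted curve reproduces the second observation.
print(round(C / (1 + A * math.exp(-B * x1))))  # 500
```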

Modeling – How do I create a database with metamodel transformations with real data?

I am currently working on metamodel transformation and the coevolution of models. I have surveyed the state of the art, and the next step in my research is to collect some (ideally many) metamodels together with their histories, to analyze how people typically transform them. The ultimate goal is to develop a solution that helps people evolve their metamodels and models more easily.

Currently, however, I'm stuck at the "collect metamodels and their histories" step.

My first idea was a Google search, which got me nowhere because I could not find any indication that such a database or collection exists. My second thought was to look on GitHub and perhaps write a short program to extract metamodels and their histories.
Although this sounds like a great idea (at least to me), I've run into some difficulties:

  • How to find the metamodels on GitHub
  • Most metamodels have no history (that is, their history is a single initial commit), so they are of no real use to my research
  • How to write an efficient program to download the metamodels and their histories (assuming the other two problems are solved).

Maybe I have overlooked an obvious solution to my problem, or there is an easier way to implement my "GitHub idea", but I'm currently stuck and any help would be appreciated. I should mention that, in the current situation, I don't have access to many resources other than the Internet.
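On the "how to find metamodels on GitHub" point: EMF metamodels are conventionally stored as .ecore files, and every Ecore file serializes an EPackage root, so GitHub's code search with an extension: qualifier is one way to locate candidates. A sketch that just builds the search URL, under my assumptions that GitHub's code-search API requires at least one plain search term alongside qualifiers, and that real use would still need authentication and paging:

```python
from urllib.parse import urlencode

# Ecore files contain an ecore:EPackage element, so "EPackage" is a safe
# search term to pair with the extension qualifier.
params = {"q": "EPackage extension:ecore", "per_page": 100}
url = "https://api.github.com/search/code?" + urlencode(params)
print(url)
```

From each hit, the repository's commit history for that file path (the commits API accepts a path parameter) gives exactly the "metamodel plus its history" pairs, and files whose history is a single initial commit can then be filtered out.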

Use case – modeling the requirements analysis

I collected data from end users about their requirements for a proposed information system.
I try to model these end-user requirements.
I did a gap analysis between the features of the current software they are using and their expectations of the new one.
I think that since I've collected data about end-user requirements, scenario-based modeling using use cases would be appropriate.
Please guide me in this regard: since the design of the system is outside the scope of my study, are use case models sufficient, or do I need something more?

3D Modeling – How to Create a Game Character


Modeling – Doubt – UML – Use Cases

Hello, I would like to express that for the use cases "view patient", "plan visit" and "send message", the user must be registered before they can be executed.
In any case, I have placed the include relationship arrow in two different directions, and I want to know whether one of the figures is correct, or whether there is another way to show this relationship.

Figure 01

Figure 02