azure ad – Access Denied error after migrating an on-prem local AD SharePoint database to a SharePoint farm joined to Azure AD DS

I have migrated a SharePoint 2013 farm joined to local AD DS to a SharePoint 2016 farm joined to Azure AD DS.

The client wants to make full use of Azure AD DS and migrate the SharePoint farm to Azure VMs connected to Azure AD DS.

I created a SharePoint 2016 farm, created a normal web application/site collection, and can access it as expected.

When I attach the SharePoint 2013 content database after the upgrade, the upgraded web applications give an Access Denied error.
Do I need to make any changes to the existing usernames, since they now resolve against Azure AD DS instead of the local AD?

database design – Best practices when designing SQL DB with “redundant” tables

I have a design dilemma for a DB I’m creating for an e-commerce platform I want to develop.

I have 3 different actors interacting with the website:

  • Customer
  • Manager
  • Supplier

Those 3 actors have the same table structure: email, username, address…

My initial design for the DB was to create a single table (StoreUser), with an additional field to distinguish between the 3 different types of actors.

The issue I see with this design: when the “Order” table references a Customer, for instance, it would be technically possible to assign a “Manager” or a “Supplier” to the order, even though that isn’t wanted. The same goes for the “Product” table, which should hold a Supplier’s foreign key; a “StoreUser” FK would not distinguish between the three actors.

On the other hand, creating 3 tables containing the exact same data fields seems really redundant, especially from the code perspective (I’m using Django for the website, and I really don’t like the idea of having 3 different classes with the same structure).

Which one seems the most logical to you? What’s the best practice here?
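
For what it’s worth, a third option is the supertype/subtype pattern: keep a single StoreUser table for the shared fields and add one narrow table per role whose primary key is also a foreign key to StoreUser. A minimal sketch, where all table and column names are my own illustration rather than anything from the question:

CREATE TABLE StoreUser (
    id       INT PRIMARY KEY,
    email    VARCHAR(255) NOT NULL,
    username VARCHAR(100) NOT NULL,
    address  VARCHAR(255)
);

-- One narrow table per role; the PK doubles as an FK to the supertype,
-- so each role row corresponds to exactly one StoreUser row.
CREATE TABLE Customer (id INT PRIMARY KEY REFERENCES StoreUser(id));
CREATE TABLE Manager  (id INT PRIMARY KEY REFERENCES StoreUser(id));
CREATE TABLE Supplier (id INT PRIMARY KEY REFERENCES StoreUser(id));

-- Orders can now only reference Customers, and Products only Suppliers.
CREATE TABLE Orders (              -- plural to avoid the reserved word ORDER
    id          INT PRIMARY KEY,
    customer_id INT NOT NULL REFERENCES Customer(id)
);

CREATE TABLE Product (
    id          INT PRIMARY KEY,
    supplier_id INT NOT NULL REFERENCES Supplier(id)
);

In Django this shape corresponds to multi-table inheritance (Customer, Manager and Supplier subclassing a concrete StoreUser model), so the shared fields are still declared only once.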

Simple database front-end for simple small business

I run a small language school and I have a database with language students’ details: names, contact details, ages, lesson times, dob, fees, fee due dates and the like.

I set up a MySQL online database years ago, wrote PHP/HTML pages (from a book) and uploaded them to the server. I use these to input/edit student details, move leavers to another table (I realise now that isn’t necessary), get various lists (next month’s birthdays, fees, lesson registers, etc.) and show reports such as fees due each month, students graduating from their normal schools next April, etc. It also archives student numbers by school grade at the end of each month (that’s the 3rd table in my set-up).

So basically the equivalent of spreadsheets with various reports available at the click of a button.

However, the latest PHP update has me giving up trying to learn how to update all those files, and I’m resigned to letting it all go once my hosting company switches off the current PHP version. So I’m looking for a simple front-end to replace this. I looked at LibreOffice Base and it just seems overkill and a time sink for what I want. I was suicidal after getting through the first chapter of the manual. I’m busy running the school and don’t have the time, intelligence or inclination to be a programmer, too…

So does anyone have any suggestions as to what my best plan of action would be? Two of us use the db, not always from the same place, so I want to keep everything online. Oh, and we use Macs.

Optional filters in relational database

I’m trying to create a procedure in MySQL/MariaDB with optional filters for each participating table. Assuming the tables are $A, B, C, \ldots$, each table may or may not be filtered, and the result is that of each table having its respective optional filter applied. $A$ and $B$ have the relation A (1) -> (*) B, and similarly B (1) -> (*) C, and so on.

With that, I’ve thought of the following function to filter & join each pair of tables:

/*
  F(X): apply X's optional filter (a no-op when the filter is NULL)
  L: left-hand table
  R: right-hand table
*/

if (L.filter is NULL and R.filter is NULL) {
  return fullJoin(F(L), F(R))    // neither side filtered: keep all rows of both
} else if (L.filter is not NULL and R.filter is NULL) {
  return leftJoin(F(L), F(R))    // only L filtered: keep every surviving L row
} else if (L.filter is NULL and R.filter is not NULL) {
  return rightJoin(F(L), F(R))   // only R filtered: keep every surviving R row
} else { // both filters not NULL
  return innerJoin(F(L), F(R))   // both filtered: keep matching pairs only
}

The filter function F() would be something like SELECT * FROM table WHERE input IS NULL OR input = column. The function is first applied to 4 tables as follows:

fn(fn(fn(A,B),C),D)

where the result of said function fn is treated as having a non-NULL filter.

Additionally, I think the function fn can be packed into the following:

SELECT * FROM L FULL JOIN R
WHERE (R.input IS NULL OR R.input = R.column)
  AND (L.input IS NULL OR L.input = L.column)

Is my function correct?
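
For comparison, here is how a single optional filter per table is often written in MySQL/MariaDB, which (worth noting) does not support FULL JOIN at all. This sketch keeps only matched pairs, i.e. it corresponds to the innerJoin branch above; p_a and p_b are hypothetical procedure parameters, with NULL meaning “no filter on that table”:

SELECT *
FROM A
JOIN B ON B.a_id = A.id          -- assumes the (1) -> (*) FK is B.a_id
WHERE (p_a IS NULL OR A.col = p_a)
  AND (p_b IS NULL OR B.col = p_b);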

database design – Getting sums of multiple leveled relations efficiently

I’m currently building an API and a web app for an internal warehouse system using .NET Core.
I have the core entity structure, which goes like this:

“Material” has many “MaterialSubtypes” has many “MaterialClasses” has many “Packs”.

Now I need to create a list representing a single sale. It can include many packs of different materials. The user should be able to add packs to or remove packs from a sale as it is being prepared.
The problem is that I also need to show the user the full hierarchy of materials that the sale contains, as well as the sum of the “Quantity” field of all packs at each sublevel. These quantities are supposed to update dynamically in the client app.

What is the most efficient way to do this? Should I just add all packs to a sale and then, on every GET request, Include everything and recalculate all sums via foreach loops? Or should I create separate entities for Material->MaterialSubtype->MaterialClass within the Sale and update them each time a Pack is added?
None of that seems optimal, but I can’t think of anything else.
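
One database-side alternative is to let SQL compute the per-level subtotals with ROLLUP instead of recalculating them in foreach loops. A sketch under assumed names, not your actual schema: Packs(id, material_class_id, quantity), MaterialClasses(id, material_subtype_id), MaterialSubtypes(id, material_id), and a SalePacks(sale_id, pack_id) link table.

-- Detail rows per class, subtotal rows per subtype and per material,
-- plus a grand total, all for one sale (? is the sale id parameter).
SELECT ms.material_id,
       mc.material_subtype_id,
       p.material_class_id,
       SUM(p.quantity) AS total_quantity
FROM SalePacks sp
JOIN Packs p             ON p.id  = sp.pack_id
JOIN MaterialClasses mc  ON mc.id = p.material_class_id
JOIN MaterialSubtypes ms ON ms.id = mc.material_subtype_id
WHERE sp.sale_id = ?
GROUP BY ROLLUP (ms.material_id, mc.material_subtype_id, p.material_class_id);

(SQL Server and PostgreSQL use GROUP BY ROLLUP (...); MySQL spells it GROUP BY ... WITH ROLLUP.)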

Best database design for performing binary analysis

I have some binary data that I would like to do some statistical analysis on. Let’s say I have a few thousand 32-bit values. I want to be able to efficiently do things such as calculate parity bits or look at how often a certain bit is 1 or 0. My first thought is to put the data into a relational DB such as Postgres. However, I’m not sure what the best schema for this type of data would be. I could:

a] store each bit as a column of type boolean
b] store the 32 bit value as one column of type bit string
c] store the 32 bit value as a hex encoded string

Option a seems like it would let me run direct SQL queries easily, such as counting how many times bit 23 is 1. However, is this an inefficient way to store the data, since each bit takes one byte of space in the DB? What is the best data type for storing this binary data and performing these types of calculations on it most efficiently?
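
For what it’s worth, option b can still be queried per bit in PostgreSQL. A minimal sketch, assuming a hypothetical table named samples:

-- One fixed-width bit string per row (option b).
CREATE TABLE samples (
    id    serial PRIMARY KEY,
    value bit(32) NOT NULL
);

-- How often is bit 23 set? substring on a bit string is 1-based from the left.
SELECT count(*) FILTER (WHERE substring(value FROM 23 FOR 1) = B'1') AS ones,
       count(*) AS total
FROM samples;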

testing – How to simulate test data to a database?

The best way to get test data in is the same way it will get in in production, because that is the most realistic.

So the question here is: how exactly does this third-party software get its data in? Inserts, a stored procedure, an API, SSIS? Use the same method to insert your test data.

If you don’t know or can’t tell, you could run the tool and check the database log, or monitor the network traffic. Perhaps you can even rerun the transactions.
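
For example, if the database happens to be MySQL, the general query log will capture every statement the tool executes (the log file path here is just an example):

-- Turn the general query log on around a run of the tool, then read the file.
SET GLOBAL general_log_file = '/tmp/tool-capture.log';
SET GLOBAL general_log = 'ON';
-- ... run the third-party tool ...
SET GLOBAL general_log = 'OFF';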

8 – When should old field revisions be deleted from the database?

I have a question about old field revisions in Drupal 8. For example, I have a “Full Title” field in several paragraph types, and I deleted this field from one of them, say the “Test Paragraph”. In the database there are two tables: paragraph__field_section_full_title and paragraph_revision__field_section_full_title. In both tables, the deleted column changed from 0 to 1 for all records of the test_paragraph bundle.

After field_cron() ran (which should delete these records), all of the records in paragraph__field_section_full_title were deleted, as expected.

But in the paragraph_revision__field_section_full_title table, the old revisions for the test_paragraph bundle with deleted = 1 are not deleted. When should they be deleted? Is it an automatic process or not? Where do I need to look to check whether anything is broken?
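
For reference, this is how I count the rows that are still waiting to be purged (the table name is from above; bundle and deleted are standard columns of Drupal’s field storage tables):

SELECT COUNT(*)
FROM paragraph_revision__field_section_full_title
WHERE bundle = 'test_paragraph'
  AND deleted = 1;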

Thanks.

[Screenshot: the paragraph_revision__field_section_full_title table, showing the old revisions that were not deleted.]

database – Magento 2.4.1 EE | Customer custom attribute created from admin but not saving value in DB

I am facing an issue with a Magento 2 EE website (Magento ver. 2.4.1) concerning a customer custom attribute.

I found a similar question here as well, but without any solution.

When I create a new customer attribute in the admin, the attribute is created and visible in the admin area, but its value is not saved. I noticed that the attribute data is not getting saved in the eav_entity_attribute table.

How can I fix this? Please suggest.
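
For diagnosis, a sketch of a query to check whether the attribute row exists and is linked to any attribute set (eav_attribute and eav_entity_attribute are standard Magento tables; my_custom_attribute is a placeholder for the real attribute code):

SELECT a.attribute_id, a.attribute_code, ea.attribute_set_id
FROM eav_attribute a
LEFT JOIN eav_entity_attribute ea ON ea.attribute_id = a.attribute_id
WHERE a.attribute_code = 'my_custom_attribute';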