relational theory – design problem with transitive dependencies between tables

This is a fictional scenario, but I imagine it may be of general interest, so I'll post it here. Imagine the following business rule:

  1. A course has a grading scale that determines what grades a student can receive for their diploma

How would this BR be implemented (without procedural logic) while preserving 3NF? The DBMS does not support general expressions in CHECK constraints, so we are limited to simple row-level expressions.

A naive approach would be something like:

create table gradesscales
( gradescale_id int not null primary key );

create table grades
( grade char(1) not null primary key );

create table grades_in_gradescales
( gradescale_id int not null
    references gradesscales (gradescale_id)
, grade char(1) not null
    references grades (grade)
, primary key (gradescale_id, grade)
);

create table courses
( course_code char(5) not null primary key
, gradescale_id int not null
    references gradesscales (gradescale_id)
);

create table diplomas
( student_no int not null
, course_code char(5) not null
    references courses (course_code)
, grade char(1)
    references grades (grade)
, primary key (student_no, course_code)
);


insert into gradesscales (gradescale_id) values (1),(2);

insert into grades (grade) values ('1'),('2'),('3'),('A'),('B');

insert into grades_in_gradescales (gradescale_id, grade)
values (1,'1'),(2,'2'),(1,'3'),(2,'A'),(2,'B');

insert into courses (course_code, gradescale_id)
values ('MA101', 1),('FY201', 2);

So far so good, but nothing prevents us from recording a grade from a grading scale that is not associated with the course:

insert into diplomas (student_no, course_code, grade)
values (1,'MA101','B');

A pragmatic approach is to add gradescale_id to diplomas and reference grades_in_gradescales instead of grades:

alter table diplomas
    add column gradescale_id int not null;
-- drop the foreign key against grades, then:
alter table diplomas
    add foreign key (gradescale_id, grade)
          references grades_in_gradescales (gradescale_id, grade);

but I'm not very happy about that. Other thoughts?
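For what it's worth, the pragmatic approach can be made fully declarative if courses also exposes the superkey (course_code, gradescale_id): diplomas then carries gradescale_id and two composite foreign keys, so the grade must belong to the scale *and* the scale must be the course's scale. A minimal sketch, using SQLite from Python only as a stand-in for the real DBMS (the extra UNIQUE constraint and the second composite FK are the additions):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.executescript("""
create table gradesscales (gradescale_id int not null primary key);
create table grades (grade char(1) not null primary key);
create table grades_in_gradescales
( gradescale_id int not null references gradesscales (gradescale_id)
, grade char(1) not null references grades (grade)
, primary key (gradescale_id, grade)
);
create table courses
( course_code char(5) not null primary key
, gradescale_id int not null references gradesscales (gradescale_id)
, unique (course_code, gradescale_id)   -- superkey referenced below
);
create table diplomas
( student_no int not null
, course_code char(5) not null
, gradescale_id int not null
, grade char(1) not null
, primary key (student_no, course_code)
, foreign key (course_code, gradescale_id)
      references courses (course_code, gradescale_id)
, foreign key (gradescale_id, grade)
      references grades_in_gradescales (gradescale_id, grade)
);
insert into gradesscales values (1),(2);
insert into grades values ('1'),('2'),('3'),('A'),('B');
insert into grades_in_gradescales values (1,'1'),(2,'2'),(1,'3'),(2,'A'),(2,'B');
insert into courses values ('MA101',1),('FY201',2);
""")

# A grade from MA101's own scale (scale 1) is accepted...
con.execute("insert into diplomas values (1,'MA101',1,'3')")

# ...but a grade from the wrong scale is rejected, whichever FK it violates.
try:
    con.execute("insert into diplomas values (1,'FY201',1,'B')")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
```

The price is the redundant gradescale_id in diplomas, but both FKs are plain declarative constraints, so no procedural logic is needed.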

(I could not tag this as sql, so I used relational-theory instead.)

wordpress – Looking for advice on completing a custom MySQL query for 3 different tables

I want to create a query to dynamically populate a table by querying the 3 tables that contain the relevant information about each entry:

wp_gf_entry, wp_gf_entry_meta and (ideally) wp_posts

The query works with the wp_gf_entry and wp_gf_entry_meta tables.

SELECT 
  t2.id, 

  MAX(CASE WHEN t1.`meta_key` = '1.3' THEN t1.meta_value END) firstname,
  MAX(CASE WHEN t1.`meta_key` = '1.6' THEN t1.meta_value END) lastname,
  MAX(CASE WHEN t1.`meta_key` = '5.4' THEN t1.meta_value END) state,
  MAX(CASE WHEN t1.`meta_key` LIKE '%45.%' THEN t1.meta_value END) division

FROM
  `wp_gf_entry_meta` t1 
  LEFT JOIN `wp_gf_entry` t2
    ON (
      t1.`entry_id` = t2.`id` 
    )
WHERE (
    t2.`form_id` = 71
    AND t2.`is_read` = 1 AND t2.`status` = 'active' 
        AND t1.`meta_key` IN ('1.3', '1.6', '5.4', '45.1', '45.2', '45.3', '45.4', '45.5', '45.6')
  )
  GROUP BY t2.`id`
ORDER BY 
    t1.`id` ASC,
    t1.`entry_id` ASC

This works fine IF the entry has been marked as 'read' and the entry has not been 'discarded'.

+-----+-----------+----------+-------+------------------------------+
| id  | firstname | lastname | state |           division           |
+-----+-----------+----------+-------+------------------------------+
| 166 | Steve     | Jobs     | QLD   | (null)                       |
| 253 | Bill      | Clinton  | NSW   | (null)                       |
| 427 | Maria     | McOldguy | VIC   | Grand Master's (65 and over) |
| 447 | Some      | Bloke    | NSW   | Master's (60-64)             |
+-----+-----------+----------+-------+------------------------------+

However, this query still requires a human to interact with the site several times a day to process entries (marking them as read), since entries are written to the database regardless of whether they were successfully paid or not.

Instead, I would like to query the wp_posts table using the woocommerce_order_number key/value pair that is stored in wp_gf_entry_meta for each entry.

So I changed the query so that woocommerce_order_number is pulled into the result set, which I thought could then be filtered on:

SELECT 
  t2.id, 

  MAX(CASE WHEN t1.`meta_key` = 'woocommerce_order_number' THEN t1.meta_value END) wcID,
  MAX(CASE WHEN t1.`meta_key` = '1.3' THEN t1.meta_value END) firstname,
  MAX(CASE WHEN t1.`meta_key` = '1.6' THEN t1.meta_value END) lastname,
  MAX(CASE WHEN t1.`meta_key` = '5.4' THEN t1.meta_value END) state,
  MAX(CASE WHEN t1.`meta_key` LIKE '%45.%' THEN t1.meta_value END) division

FROM
  `wp_gf_entry_meta` t1 
  LEFT JOIN `wp_gf_entry` t2
    ON (
      t1.`entry_id` = t2.`id` 
    )
WHERE (
t2.`form_id` = 71
    AND t2.`is_read` = 1 AND t2.`status` = 'active'
        AND t1.`meta_key` IN ('1.3', '1.6', '5.4', '45.1', '45.2', '45.3', '45.4', '45.5', '45.6', 'woocommerce_order_number')

  )
  GROUP BY t2.`id`
ORDER BY 
    t1.`id` ASC,
    t1.`entry_id` ASC

The above query gives me a column with the Woocommerce Order ID:

+-----+------+-----------+----------+-------+------------------------------+
| id  | wcID | firstname | lastname | state |           division           |
+-----+------+-----------+----------+-------+------------------------------+
| 166 |    1 | Steve     | Jobs     | QLD   | (null)                       |
| 253 |    2 | Bill      | Clinton  | NSW   | (null)                       |
| 427 |    3 | Maria     | McOldguy | VIC   | Grand Master's (65 and over) |
| 447 |    4 | Some      | Bloke    | NSW   | Master's (60-64)             |
+-----+------+-----------+----------+-------+------------------------------+

However, I want to eliminate the human factor by adding the following to the WHERE clause:

SELECT 
  t2.id, 

  MAX(CASE WHEN t1.`meta_key` = 'woocommerce_order_number' THEN t1.meta_value END) wcID,
  MAX(CASE WHEN t1.`meta_key` = '1.3' THEN t1.meta_value END) firstname,
  MAX(CASE WHEN t1.`meta_key` = '1.6' THEN t1.meta_value END) lastname,
  MAX(CASE WHEN t1.`meta_key` = '5.4' THEN t1.meta_value END) state,
  MAX(CASE WHEN t1.`meta_key` LIKE '%45.%' THEN t1.meta_value END) division

FROM
  `wp_gf_entry_meta` t1 
  LEFT JOIN `wp_gf_entry` t2
    ON (
      t1.`entry_id` = t2.`id` 
    )
WHERE (
t2.`form_id` = 71
        AND t1.`meta_key` IN ('1.3', '1.6', '5.4', '45.1', '45.2', '45.3', '45.4', '45.5', '45.6', 'woocommerce_order_number')

        AND t1.`meta_value` IN (SELECT t3.ID FROM wp_posts t3 
                WHERE t3.ID = (wcID)
                    AND t3.`post_status` IN ('wc-processing', 'wc-completed', 'wc-free-of-charge') 
                    AND t3.`post_type` = 'shop_order' )
  )
  GROUP BY t2.`id`
ORDER BY 
    t1.`id` ASC,
    t1.`entry_id` ASC

Running this produces the error message "Warning: #1292 Truncated incorrect DOUBLE value: 'wcID'".

And I'm not sure how to refine this final AND statement.

SQL Fiddle location

Is there a way to query the wp_posts table, matching its ID column against the woocommerce_order_number value aliased as wcID?
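A column alias such as wcID is not visible in the WHERE clause at the same query level, which is why MySQL ends up comparing against the literal string 'wcID' (hence the DOUBLE warning). One common fix is to wrap the pivot in a derived table and join that against wp_posts. A toy sketch, using SQLite from Python as a stand-in for MySQL (tables cut down to the relevant columns; the CAST mirrors the fact that meta_value is a string):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
create table wp_gf_entry (id int, form_id int, status text);
create table wp_gf_entry_meta (entry_id int, meta_key text, meta_value text);
create table wp_posts (ID int, post_type text, post_status text);
insert into wp_gf_entry values (166, 71, 'active'), (253, 71, 'active');
insert into wp_gf_entry_meta values
  (166, '1.3', 'Steve'), (166, 'woocommerce_order_number', '1'),
  (253, '1.3', 'Bill'),  (253, 'woocommerce_order_number', '2');
-- order 1 has been paid, order 2 has not
insert into wp_posts values (1, 'shop_order', 'wc-completed'),
                            (2, 'shop_order', 'wc-pending');
""")

# Wrap the pivot in a derived table so the wcID alias exists,
# then filter it against wp_posts instead of relying on is_read.
rows = con.execute("""
select p.id, p.firstname
from (
    select t2.id,
           max(case when t1.meta_key = '1.3'
                    then t1.meta_value end) as firstname,
           max(case when t1.meta_key = 'woocommerce_order_number'
                    then t1.meta_value end) as wcID
    from wp_gf_entry_meta t1
    join wp_gf_entry t2 on t1.entry_id = t2.id
    where t2.form_id = 71 and t2.status = 'active'
    group by t2.id
) p
join wp_posts t3
  on t3.ID = cast(p.wcID as integer)   -- meta_value is stored as a string
 and t3.post_type = 'shop_order'
 and t3.post_status in ('wc-processing', 'wc-completed', 'wc-free-of-charge')
order by p.id
""").fetchall()
print(rows)
```

Only the paid entry (166) survives the filter. In MySQL, the same wrapping works verbatim with the original wp_* tables; alternatively, a HAVING clause can see the alias, but the derived table keeps the wp_posts conditions readable.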

mysql – What is a good way to clone a database with only specific tables?

My situation:

I have a MySQL database, DB1, which is fed new data hourly via a PERL script. I want to create a second database, DB2, which is a copy of DB1 except that it contains only one of DB1's tables. DB2 must be updated either hourly or every time DB1 is updated (whichever is easier to implement).

What would be a good way to do this? I'm mainly looking for simple ideas that do not take too long to implement, but I'm open to all your ideas.

Additional information:
– DB2 is used to connect to a Tableau dashboard.
– DB1 performs terribly and will be scrapped in the near future.
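Given that only one table is needed, one low-effort option is an hourly cron job that pipes a mysqldump of just that table straight into DB2. A sketch that only assembles the command (database and table names are placeholders; run the result on the real host):

```python
# Hypothetical names; substitute the real database and table.
SRC_DB, DST_DB, TABLE = "DB1", "DB2", "measurements"

def clone_table_cmd(src_db: str, dst_db: str, table: str) -> str:
    # Dump only the one table (consistently for InnoDB, via a single
    # transaction) and pipe it straight into the target database.
    return (f"mysqldump --single-transaction {src_db} {table}"
            f" | mysql {dst_db}")

cmd = clone_table_cmd(SRC_DB, DST_DB, TABLE)
print(cmd)
# To automate: schedule `0 * * * * <the command above>` in crontab, or call
# it at the end of the same PERL script that feeds DB1, so DB2 refreshes
# on every load.
```

This replaces the table wholesale each run, which is usually acceptable for a reporting copy feeding a Tableau dashboard; incremental replication (replicate-do-table) is the heavier alternative.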

mysql – Group results from two different tables by the same hour

I have two simple tables:

indoor:

id | timestamp | temp | humi

outdoor:

id | timestamp | temp

and two SELECTs that group the last 24 hours by hour and average the temperature:

SELECT DATE_FORMAT(timestamp, '%H:00') AS time, round(avg(temp), 1) as avg_out_temp
FROM outdoor
WHERE timestamp >= now() - INTERVAL 1 DAY
GROUP BY DATE_FORMAT(timestamp, '%Y-%m-%d %H')
ORDER BY timestamp ASC;

SELECT DATE_FORMAT(timestamp, '%H:00') AS time, round(avg(temp), 1) as avg_in_temp
FROM indoor
WHERE timestamp >= now() - INTERVAL 1 DAY
GROUP BY DATE_FORMAT(timestamp, '%Y-%m-%d %H')
ORDER BY timestamp ASC;

and now I need to combine these two results on the same hour, keeping in mind that there may be no records in the indoor or the outdoor table for a whole hour, so I need to get:

time | avg_out_temp | avg_in_temp
11:00 | 12.5 | 21.4
12:00 | 13.9 | null
13:00 | null | 22.4
14:00 | 14.0 | 22.5
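MySQL 5.x has no FULL OUTER JOIN, so in pure SQL you would UNION a LEFT JOIN with a RIGHT JOIN of the two aggregates. If the rows end up in application code anyway, the same result is a one-liner with an outer merge in pandas; a toy sketch with made-up sample values:

```python
import pandas as pd

# Hourly averages as produced by the two SELECTs (made-up sample values).
outdoor = pd.DataFrame({"time": ["11:00", "12:00", "14:00"],
                        "avg_out_temp": [12.5, 13.9, 14.0]})
indoor = pd.DataFrame({"time": ["11:00", "13:00", "14:00"],
                       "avg_in_temp": [21.4, 22.4, 22.5]})

# An outer merge keeps hours present in only one of the tables, with
# NaN (i.e. null) standing in for the missing side.
combined = outdoor.merge(indoor, on="time", how="outer").sort_values("time")
print(combined)
```

For the real data, merging on the full '%Y-%m-%d %H' key before formatting avoids mixing today's 11:00 with yesterday's when the 24-hour window spans midnight.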

How to do a cross-join between tables in Python

I have 2 files that I have joined based on different combinations of keys. For example, suppose table A has 10 columns and table B has 5 columns. I want to create a match so that I get all the columns from table A and only one specific column from table B. That way, when I do all my different joins later, I can combine the different data frames into one.

Match file A to file B using left joins:

  1. Join: first name, last name, date of birth
  2. Join: first name, last name, last4SSN
  3. Join: full SSN

I saved the results of these merges in three different pandas DataFrames: final1, final2, and final3. If I could somehow get only certain columns from file B in my left joins, my job would be easier, because right now all the columns from both file A and file B are returned when merging.

If I can somehow achieve something like what we do in SQL:

select a.*, b."MRN", b."ABC"
from table1 a
left join table2 b
  on (a."FirstName" = b."FirstName" and a."LastName" = b."LastName" and a."DOB" = b."DOB")
  or (a."FirstName" = b."FirstName" and a."LastName" = b."LastName" and a."Last4SSN" = b."Last4SSN")
  or (a."SSN" = b."SSN")

Any help in fixing or improving my code would be greatly appreciated as I start with Python. Many thanks.

**CODE**

# Importing libraries and loading csv data
import time
import numpy as np
import pandas as pd

start = time.perf_counter()

filea = pd.read_csv("filea.csv", na_filter=False)
filea = filea.replace('', "NULLS")
print(filea)

fileb = pd.read_csv("fileb.csv", na_filter=False)
fileb = fileb.replace("NULL", '')
fileb = fileb.replace('', "EMPTY")

# Trimming Data
# fileb['SSN'] = fileb['SSN'].fillna(7777)
# fileb['SSN'] = fileb['SSN'].round(0).astype(int)
# fileb['SSN'] = fileb['SSN'].astype(str)
# fileb['SSNnew'] = fileb['SSN'].replace('.0', '')

filea['SSN'] = filea['SSN'].str.strip()
filea['Recipient First Name (from Eligibility)'] = filea['Recipient First Name (from Eligibility)'].str.strip()
filea['Recipient Last Name (from Eligibility)'] = filea['Recipient Last Name (from Eligibility)'].str.strip()
filea['DOB'] = filea['DOB'].str.strip()
fileb['SSN'] = fileb['SSN'].str.strip()
fileb['FirstName'] = fileb['FirstName'].str.strip()
fileb['LastName'] = fileb['LastName'].str.strip()
fileb['DOB'] = fileb['DOB'].str.strip()

# Creating a new SSN after removing bad SSNs

invalid_ssns = {'999999999', '977000000', '988888888', '899999999', '000000000'}
filea['SSNNEW'] = ''
for i in range(len(filea)):
    if filea['SSN'][i] in invalid_ssns:
        filea.loc[i, 'SSNNEW'] = "InvalidSSN"
    else:
        filea.loc[i, 'SSNNEW'] = filea['SSN'][i]

# New column for the last 4 digits of the SSN

filea['last4ssn'] = filea['SSNNEW'].str[-4:]
fileb['last4ssn'] = fileb['SSN'].str[-4:]
filea['newssn'] = "NULL"

# Cleaning and Trimming Data

# For filea, keep only SSNs longer than 7 digits
for i in range(len(filea)):
    if len(filea['SSNNEW'][i]) > 7:
        filea.loc[i, 'newssn'] = filea['SSNNEW'][i]
    else:
        filea.loc[i, 'newssn'] = "SSN<8"

# For filea, keep only last4ssn values of at least 3 digits
for i in range(len(filea)):
    if len(filea['last4ssn'][i]) < 3:
        filea.loc[i, 'last4ssn'] = "SSN<3"

filea = filea.rename(columns={'SSN': 'SSNOLD'})
filea = filea.rename(columns={'newssn': 'SSN'})
filea = filea.rename(columns={'Recipient First Name (from Eligibility)': 'FirstName'})
filea = filea.rename(columns={'Recipient Last Name (from Eligibility)': 'LastName'})

filea['FN'] = filea['FirstName'].str[0:4]
fileb['FN'] = fileb['FirstName'].str[0:4]
filea['LN'] = filea['LastName'].str[0:4]
fileb['LN'] = fileb['LastName'].str[0:4]

filea['New_ID'] = filea.index

merge1 = filea.merge(fileb, how='inner', on=['FN', 'last4ssn', 'LN'])
merge2 = filea.merge(fileb, how='inner', on=['FN', 'DOB', 'LN'])
merge3 = filea.merge(fileb, how='inner', on=['SSN'])

merge1 = merge1.drop_duplicates()
final1 = merge1.drop_duplicates(subset=['New_ID'], keep='first')

merge2 = merge2.drop_duplicates()
final2 = merge2.drop_duplicates(subset=['New_ID'], keep='first')

merge3 = merge3.drop_duplicates()
final3 = merge3.drop_duplicates(subset=['New_ID'], keep='first')

matchedfilea = filea.merge(final1, how='left', on='New_ID')

matchedfileb = matchedfilea.merge(final2, how='left', on='New_ID')

matchedfilec = matchedfileb.merge(final3, how='left', on='New_ID')

finalMatchOne = matchedfilec[['New_ID', 'Pt MRN_x', 'Pt MRN_y', 'Pt MRN', 'Attribution Category_x',
       'Recipient Medicaid ID (Original)_x',
       'Recipient Medicaid ID (Current)_x', 'Patient Account Number_x',
       'SSNOLD_x', 'LastName', 'FirstName',
       'Recipient Middle Initial (from Eligibility)_x', 'DOB_x_x',
       'Gender (from Eligibility)_x', 'SSN (from Claims)_x',
       'Recipient Full Name (from Claims) _x',
       'Preliminary Prospective Indicator_x', 'SSNNEW_x']]

print(finalMatchOne)

finalMatchOne = finalMatchOne.replace(np.nan, 'NULL', regex=True)
print(finalMatchOne)

I have tried to do this to the best of my knowledge, but I seem to be lost because I cannot combine the results the way I would in SQL.
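One way to get the SQL-like behavior is to subset fileb down to the join keys plus the columns you actually want before each merge, and then stitch the passes together with combine_first, which fills the gaps left by the earlier pass. A toy sketch with hypothetical data and column names (MRN here stands for the one column wanted from file B):

```python
import pandas as pd

# Toy frames; the real files have many more columns.
filea = pd.DataFrame({"FirstName": ["ann", "bob", "cat"],
                      "LastName": ["lee", "roy", "fox"],
                      "DOB": ["1990", "1985", "1970"],
                      "SSN": ["111", "222", "333"],
                      "Extra": ["x", "y", "z"]})
fileb = pd.DataFrame({"FirstName": ["ann", "zed"],
                      "LastName": ["lee", "zed"],
                      "DOB": ["1990", "2000"],
                      "SSN": ["999", "333"],
                      "MRN": ["M1", "M2"]})

filea["New_ID"] = filea.index

def left_match(keys, wanted=("MRN",)):
    # Keep only the join keys plus the columns we actually want from file B,
    # so the merge result is not flooded with every fileb column.
    b = fileb[list(keys) + list(wanted)]
    m = filea.merge(b, how="left", on=list(keys))
    return m.drop_duplicates(subset=["New_ID"], keep="first")

final1 = left_match(["FirstName", "LastName", "DOB"])  # first join rule
final2 = left_match(["SSN"])                           # fallback join rule

# Emulate the SQL "join on cond1 OR cond2": take pass 1 and fill its
# gaps from pass 2, keyed on New_ID.
combined = final1.set_index("New_ID").combine_first(final2.set_index("New_ID"))
print(combined)
```

Each extra join rule becomes one more left_match pass chained with combine_first, so the result keeps every filea row, every filea column, and only the requested fileb columns.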

Architecture for business functions that affects multiple data objects / database tables

I am trying to create a sample web API project to see how "cleanly" I can re-create the Delphi (Pascal) API that we develop at my job.

I have created a solution which now contains 3 different projects.

  • WebApi (the main application logic interface)
  • Object Library (Models)
  • DataAccess (repository style data access layer)

If I want to keep my business logic as separate as possible, where should I put the following logic?

  • A Person can exist with or without an Employment
  • A Person can have 0..n Employments
  • If a Person is employed, this can affect other business objects,
    such as an OvertimeAccount, VacationAccount, etc., which should be
    created and maintained throughout the employment.

I could use some kind of PersonDataController, but that would leave me with one DataController tightly coupled to the Person, Employment, and VacationAccount objects. Besides, it would mean that my PersonRepository becomes too dependent on my EmploymentRepository and possibly others.

Another approach (and possibly the simplest) is to keep the business logic in the Person data object so I could call Person.Hire(), which makes the most sense to me. But the problem remains that my Hire function must depend on the Employment object, and my PersonRepository would depend on my EmploymentRepository.

Question

Where would I put the business function Hire(Person) in a way that avoids tight coupling of my data objects and repositories?
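One common answer is a dedicated application service that owns the Hire use case and coordinates the repositories, so neither Person nor PersonRepository has to know about Employment or the account objects. An illustrative sketch (Python standing in for C#/Delphi; all names are made up, not from your project):

```python
from dataclasses import dataclass, field

@dataclass
class Person:
    person_id: int
    employments: list = field(default_factory=list)

@dataclass
class Employment:
    person_id: int
    employer: str

class EmploymentService:
    """Application service owning the Hire use case.

    Only this class depends on all three repositories; Person stays
    ignorant of OvertimeAccount/VacationAccount.
    """
    def __init__(self, person_repo, employment_repo, account_repo):
        self.person_repo = person_repo
        self.employment_repo = employment_repo
        self.account_repo = account_repo

    def hire(self, person_id: int, employer: str) -> Employment:
        person = self.person_repo.get(person_id)
        employment = Employment(person.person_id, employer)
        person.employments.append(employment)
        self.employment_repo.add(employment)
        # Side effects on other business objects live here, not in Person:
        self.account_repo.create_accounts_for(employment)
        return employment

# Minimal in-memory fakes, just to show the wiring.
class FakePersons:
    def __init__(self, people): self.people = people
    def get(self, pid): return self.people[pid]

class FakeEmployments:
    def __init__(self): self.rows = []
    def add(self, e): self.rows.append(e)

class FakeAccounts:
    def __init__(self): self.created = []
    def create_accounts_for(self, e):
        self.created.append(("OvertimeAccount", "VacationAccount", e.person_id))

svc = EmploymentService(FakePersons({1: Person(1)}), FakeEmployments(), FakeAccounts())
emp = svc.hire(1, "ACME")
```

The trade-off: the coupling does not disappear, it is concentrated in one place (the service), which keeps the data objects and the individual repositories free of cross-dependencies.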

Export data from DHIS2 with link between parent and child tables?

We are trying to export a "parent table" together with its "child table", if we can call them that.
Both are linked by a relationship type. When data is added to the parent table, the user can switch to the Relationship tab and add more data, which works very well.

The problem is with the export: the data is exported separately, without any link or key between the two. We use a hosted DHIS2 server, so we do not have direct access to the database structure.

If this issue is resolved, we can use DHIS2 to reach the final implementation level, and the link between these two tables is essential to that.

How can data from both tables be exported with a join / key connection between both tables?

postgresql – Tables change when inheritance changes

I have two tables, 'jobs' and 'jobs_history', and for some reason 'jobs' inherits from 'jobs_history'. I want to change the inheritance so that 'jobs_history' inherits from 'jobs'. Because the columns are the same, there are only some triggers and constraints that are not inherited.

ALTER TABLE jobs NO INHERIT jobs_history;
ALTER TABLE jobs_history INHERIT jobs;

and this works flawlessly. The only problem is that the data appears to have moved too: all the data from jobs_history is now in jobs and vice versa.

Does anyone have an idea what's going on?

dhis2 – Parent and child tables are exported without a join key, although there is a relationship.

We are trying to export a "parent table" together with its "child table", if we can call them that.
Both are linked by a relationship type. When data is added to the parent table, the user can switch to the Relationship tab and add more data, which works very well.

The main problem is the export: the data is exported separately, without any link or key between the two. We use a hosted DHIS2 server, so we do not have direct access to the database structure.

If this problem is resolved, we are ready to use DHIS2 for the final phase of the implementation, and the link between these two tables is essential to that.

How can data from both tables be exported with a join / key connection between both tables?

PHP – How can I combine 3 tables (inputs, outputs, returns) to show a summary of the movements of a product?

I'm trying to create an inventory system with Laravel 5.8.
These are some of the tables:

products, entries, exits, returns.

What I want to do is a sort of summary table (in a view) where all the movements of a particular product (entries, exits, and returns) are sorted by date.

I'd like to create a UNION between the tables (entries, exits, and returns), but I do not know how to get the name of the source table so I can tell where each record comes from, i.e. whether it is an entry, an exit, or a return, to fill in the Operation column.
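The source table's name can simply be selected as a literal in each branch of the UNION, which also fills the Operation column directly. A toy sketch, using SQLite from Python as a stand-in for MySQL/Laravel (table and column names are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
create table entries (product_id int, qty int, moved_at text);
create table exits   (product_id int, qty int, moved_at text);
create table returns (product_id int, qty int, moved_at text);
insert into entries values (1, 10, '2019-06-01');
insert into exits   values (1,  4, '2019-06-02');
insert into returns values (1,  1, '2019-06-03');
""")

# Each branch tags its rows with a literal, so after the UNION you still
# know which table (and thus which operation) every movement came from.
rows = con.execute("""
select 'entry'  as operation, product_id, qty, moved_at from entries
union all
select 'exit'   as operation, product_id, qty, moved_at from exits
union all
select 'return' as operation, product_id, qty, moved_at from returns
order by moved_at
""").fetchall()
print(rows)
```

In Laravel this maps onto query-builder unions with selectRaw for the literal column, so a separate Movements table is not strictly necessary unless the movement list itself needs to be queried heavily.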

Maybe I should create a new 'Movements' table in the schema?

Thanks in advance for your help.