oracle – How to increase the value of a column in one table by a constant, based on another table?

I have created the database for a college as follows:

create table depts(
deptcode char(3) primary key,
deptname char(70) not null);

create table students(
rollno number(2) primary key,
name char(50),
bdate date check(bdate < TO_DATE('2004-01-01','YYYY-MM-DD')),
deptcode char(3) references depts(deptcode)
on delete cascade,
hostel number check(hostel<20),
parent_inc number(8,1));

create table faculty(
fac_code char(2) primary key,
fac_name char(50) not null,
fac_dept char(3) references depts(deptcode)
on delete cascade);


create table crs_offrd(
crs_code char(5) primary key,
crs_name char(35) not null,
crs_credits number(2,1),
crs_fac_cd char(2) references faculty(fac_code)
on delete cascade);

create table crs_regd(
crs_rollno number(2) references students(rollno) on delete cascade,
crs_cd char(5) references crs_offrd(crs_code)
on delete cascade,
marks number(5,2),
primary key(crs_rollno,crs_cd));

Now I would like to give the students who have scored less than 50 marks in the subject "DBMS" an extra 5 marks in that subject.

Accordingly, I wrote the query as:

update r
set r.marks=r.marks+5
from crs_regd r
inner join crs_offrd o
on r.crs_cd=o.crs_code
where o.crs_name='DBMS' and r.marks<50;

but it fails with the error "SQL command not properly ended" at line 3. I am using Oracle SQL. What is the problem with this query?
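
For reference, Oracle does not support the UPDATE ... FROM join syntax used above (that is SQL Server syntax); the usual rewrites are a correlated subquery or a MERGE. A minimal sketch of the MERGE form against the schema above:

merge into crs_regd r
using (select crs_code from crs_offrd where crs_name = 'DBMS') o
on (r.crs_cd = o.crs_code)
when matched then
  update set r.marks = r.marks + 5
  where r.marks < 50;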

pandas – How to fill in the missing values of a column in which both the first and the last value are missing?

I've tried to fill the missing values with bfill/ffill: when I use ffill, the first row stays missing, and when I use bfill, the last values stay missing. How can I fill all missing values with bfill/ffill?
Thank you in advance.

import numpy as np
import pandas as pd

df = pd.DataFrame((('Eagle River', np.nan, 'light'),
                   ('Ybor', 'Red', 'oval'),
                   ('Holyoke', 'blue', 'oval'),
                   ('Abilene', np.nan, 'disk')),
                  columns=('City', 'Colors', 'Shape'))
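
A minimal sketch of one way to cover both ends, assuming the missing entries above are real np.nan values: forward-fill first, then backward-fill whatever is still missing at the top.

# ffill leaves the leading NaN and bfill leaves the trailing one;
# chaining the two fills both ends of the column.
df['Colors'] = df['Colors'].ffill().bfill()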

Java – LazyInitializationException with 2 entities sharing the same OneToMany lazily loaded collection in the session

I get a LazyInitializationException for a @OneToMany association in Hibernate. It is not the usual session-not-found problem, nor an eager-vs-lazy issue. Please go through the scenario below.

I have a base entity class with a discriminator column specified on it. There are two child entity classes for different discriminator values.

Base class:

@Entity
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
@DiscriminatorColumn(name = "cust_type")
@Table(name = "CUSTOMER_PROFILE")
public class Customer {

    @OneToMany(fetch = FetchType.LAZY, cascade = CascadeType.ALL)
    @JoinColumn(name = "common_id", referencedColumnName = "common_id")
    private Set CommonIdentifier = new HashSet();
}

Child class 1 with discriminator value TYPE1:

@Entity(name="CustomerType1") 
@DiscriminatorValue("TYPE1") 
CustomerType1 extends Customer
{
@ManyToOne(optional = true) @JoinColumn(name = "type2_id") private CustomerType2Entity customerType2; // fetch eager
}

Child class 2 with discriminator value TYPE2:

@Entity(name="CustomerType2") 
@DiscriminatorValue("TYPE2") 
CustomerType2 extends Customer
{


}

Here the customer with discriminator TYPE1 has a join column type2_id that references the same table, but points to a customer with a different discriminator value.

The base class also has the @OneToMany join column common_id.

Now there is a scenario in which the TYPE1 customer and the TYPE2 customer it maps to both have the same common_id, which backs the lazy collection. When I fetch the TYPE1 customer, the TYPE2 customer is automatically fetched as well, through the eager @ManyToOne.

Everything works fine if the two customers have different common_id values. But if both have the same common_id and I lazily load Type1.CommonIdentifier, I get a LazyInitializationException, because there are then two entities holding the same collection with the same ID in the Hibernate session.

How can I solve this problem? As this is a legacy application, neither redesigning the table nor making CommonIdentifier eager is possible.

Any advice is appreciated.
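
For context, one workaround often suggested for a LazyInitializationException in general is to force the collection to load while the session is still open; whether it also sidesteps the shared-collection situation described above is untested. A sketch, with getCommonIdentifier as a hypothetical accessor:

// Sketch: initialize the lazy collection inside the open session
// (e.g. in the DAO) instead of touching it after the session closes.
CustomerType1 c1 = session.get(CustomerType1.class, id);
Hibernate.initialize(c1.getCommonIdentifier()); // hypothetical getter name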

The Google Sheets data feed has additional column headings in JSON

I have a customer who uses Google Sheets as a data feed. When using the correctly structured URL for the sheet, blank columns appear in the target table. When I look in the code inspector, I see gsx column-heading properties that do not exactly match the column headings.

For example, in the table I'm looking at, the first column is Case Number. This column is empty in the destination table, even though it should be filled with case numbers. When I look in the code inspector (Google Chrome) and open the first row of data in the JSON object, I see a key named 'gsx$casenumber'. This key has the property 'undefined'.

In the Google Sheet itself, the case numbers fill this column.

Strangely, the key 'gsx$casenumber' also has a $t property of 'undefined' for a new table. The column heading in the Google Sheet is "Case Number".

The next key in the Google Sheets JSON object is gsx$caseprefixtoeachcaseisd202cv. This is the key whose $t properties hold the case numbers that I should see in the rendered table. The same happens in other columns.

Is there a way to either remove the first Google Sheets key from the JSON, or bind the key holding the case numbers to the appropriate column heading in the rendered table?

Any input or help would be greatly appreciated. Thank you in advance!

Screenshot with details of the rendered table and the JSON object

SQL Server – Check the BIT column

I have the following table:

  • Id – a unique number for each user. There will not be more than 2^63-1 users. (Auto-incremented)
  • Username – the user's unique name, which cannot be longer than 30 characters (non-Unicode). (Required)
  • Password – must not be longer than 26 characters (non-Unicode). (Required)
  • ProfilePicture – an image with a size of up to 900 KB.
  • LastLoginTime
  • IsDeleted – indicates whether the user has deleted his/her profile. Possible states are true or false.

This is my SQL query:

CREATE TABLE Users (
Id INT PRIMARY KEY IDENTITY,
Username VARCHAR(30) NOT NULL,
Password VARCHAR(26) NOT NULL,
ProfilePicture VARBINARY(MAX) CHECK (DATALENGTH(ProfilePicture) <= 900000),
LastLoginTime DATETIME,
IsDeleted BIT
)

Is there any way to check (validate) the column 'IsDeleted', i.e. whether it is true or false?
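
For what it's worth, a BIT column already only admits 0, 1, or NULL, so an explicit CHECK constraint is usually redundant; requiring a value typically means NOT NULL plus, optionally, a default. A sketch, assuming the table holds no NULLs yet (ALTER COLUMN ... NOT NULL fails otherwise):

ALTER TABLE Users
ADD CONSTRAINT DF_Users_IsDeleted DEFAULT (0) FOR IsDeleted;

ALTER TABLE Users
ALTER COLUMN IsDeleted BIT NOT NULL;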

List Manipulation – What is the best way to join two tables of different lengths on matching values in a shared column?

I think my problem is pretty simple, and in SQL this would be trivial. I have two tables:

TableOne = {{a, x1}, {b, x2}, {c, x3}};
TableTwo = {{a, y1}, {c, y2}, {a, y3}, {a, y4}, {b, y5}, {c, y6}, {c, y7}};

I want to be able to join these two tables on matching values of column 1, so that:

DesiredResult = {{a, x1, a, y1}, {c, x3, c, y2}, {a, x1, a, y3}, {a, x1, a, y4}, {b, x2, b, y5}, {c, x3, c, y6}, {c, x3, c, y7}}

I tried Select[] statements within a Table[] structure and also looked into JoinAcross[], but have not been able to achieve the desired effect. In SQL, it would be as easy as something like:

SELECT Table1.Col1, Table1.Col2, Table2.Col1, Table2.Col2 FROM Table2 INNER JOIN Table1 ON Table1.Col1 = Table2.Col1

Or something similar.
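
A minimal sketch of one way to get this inner join via pattern matching, assuming every key in TableTwo also occurs in TableOne (the name joined is arbitrary):

(* For each row of TableTwo, look up the matching TableOne row
   by its first element and prepend it. *)
joined = Table[Join[First@Cases[TableOne, {row[[1]], _}], row], {row, TableTwo}]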

sharepoint online – The default value in the mandatory column is not displayed

I created a content type in a library, and created a template from the library because hundreds of other libraries need to be created. There is one field in the content type that is mandatory, and its default value must be changed library by library.

In Word, the properties panel at the side of the document does not display this default value, whereas clicking "File" (the Backstage view of the document) does. In fact, the document can be saved even though the value is missing from the front properties panel.

This has been bothering me for weeks, as I have already checked the column in the content type in the library, and the default value is displayed there. I checked the settings for the column default values and they are there. To be 200% sure, I even added them again.

I cannot put the default value in the original content type (site column), because it changes library by library and is set when the library is created.
I even checked the default Word template (.dotx) in Forms, and this template contains the default value.

There is no problem with Excel and PowerPoint, because those two do not have the properties panel on the page; instead they take you to the "File" view, where the default value exists.

Is there anything else I can try?

python – storing numeric data arrays (np.ndarrays, lists) in a SQLite database column using SQLAlchemy

I want to store numpy.ndarrays and lists of float values in a relational database created using SQLAlchemy with SQLite as the dialect. What is the best way to do this without breaking up the array, converting it to text (as in "Python inserts a numpy array into a sqlite3 database"), or using a similar technique that does not allow these arrays to be queried directly?

My original idea was to create a one-to-many relationship as follows:

from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base, relationship

base_object = declarative_base()

class Aerosol_Grand_Class(base_object):
    __tablename__ = 'Aerosol_Grand_Table'

    Date = Column(String, primary_key=True)
    AE_Profile = relationship('AE_Class', back_populates='AE_relation_def_param')
    AOD_Profile = relationship('AOD_Class', back_populates='AOD_relation_def_param')

    def __repr__(self):
        # format string reconstructed; the original was lost in rendering
        return "<Aerosol %s %s %s>" % (self.num_shot, self.time, self.param)

class AE_Class(base_object):
    __tablename__ = 'AE_Table'

    ID = Column(Integer, primary_key=True)
    Date = Column(String, ForeignKey('Aerosol_Grand_Table.Date'))
    Time = Column(Integer)
    Altitude = Column(Integer)
    AE = Column(Integer)

    AE_relation_def_param = relationship('Aerosol_Grand_Class', back_populates='AE_Profile')

    def __repr__(self):
        # format string reconstructed; the original was lost in rendering
        return "<AE %s %s %s>" % (self.Date, self.Time, self.Altitude)

where a "date" in the Aerosol_Grand_Table corresponds to many records in the AE_Table, Now I have integer lists / numpy.ndarray s for saving AE_Tablein each case the columns time, height and AE. I wanted to do something like:

import numpy as np

time_array = np.arange(0, 20)
altitude_array = np.arange(0, 1000, 10)

ae_objects_to_add = []
ae_objects_to_add += [AE_Class(Time=int(i)) for i in time_array]
ae_objects_to_add += [AE_Class(Altitude=int(i)) for i in altitude_array]

etc. Then I would add them via session.add_all(ae_objects_to_add). I want to be able to retrieve those arrays from the database and restore them exactly as they existed in Python before they were persisted.

I suspect that this is not the best (or even a good) way to store the arrays through SQLAlchemy with SQLite. If not, what should I do? I think creating a custom data type class in which the arrays can be stored is the best route. "Saving lists/tuples to a SQLite database with SQLAlchemy" may be a good source for a custom column data type for something like this, but I do not know how to adapt it to my case.
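
A minimal sketch of the custom-type route, for what it's worth: a TypeDecorator over LargeBinary that serializes with numpy's own binary format. The class name NumpyArray is arbitrary, and the stored blobs are, as noted above, not directly queryable in SQL:

import io

import numpy as np
from sqlalchemy.types import LargeBinary, TypeDecorator

class NumpyArray(TypeDecorator):
    """Store a numpy array (or list of floats) as a binary blob."""
    impl = LargeBinary
    cache_ok = True

    def process_bind_param(self, value, dialect):
        # serialize on the way into the database
        if value is None:
            return None
        buf = io.BytesIO()
        np.save(buf, np.asarray(value))
        return buf.getvalue()

    def process_result_value(self, value, dialect):
        # deserialize on the way out, restoring the ndarray
        if value is None:
            return None
        return np.load(io.BytesIO(value))

A column would then be declared as, e.g., AE = Column(NumpyArray) instead of Column(Integer).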