oracle – Why am I getting a “missing right parenthesis” error when I try to LOG ERRORS while loading from an external table?

I’ve successfully created an error logging table

BEGIN
    DBMS_ERRLOG.create_error_log(
    dml_table_name  => 'enzyme',
    skip_unsupported => TRUE);
END;
/

desc ERR$_ENZYME;
Name            Null? Type           
--------------- ----- -------------- 
ORA_ERR_NUMBER$       NUMBER         
ORA_ERR_MESG$         VARCHAR2(2000) 
ORA_ERR_ROWID$        UROWID         
ORA_ERR_OPTYP$        VARCHAR2(2)    
ORA_ERR_TAG$          VARCHAR2(2000) 
ENZ_NAME              VARCHAR2(4000) 

But I get an error when I try to run this query:

insert /*+ ignore_row_on_dupkey_index ( enzyme ( enz_name ) ) */
into enzyme
SELECT enz_name FROM EXTERNAL ((
  construct_id NUMBER(10),
  n_term VARCHAR2 (50),
  enz_name VARCHAR2 (3),
  c_term VARCHAR2 (50),
  cpp VARCHAR2 (50),
  mutations VARCHAR2 (50),
  mw_kda NUMBER (7, 3))

    TYPE ORACLE_LOADER
    DEFAULT DIRECTORY data_to_input
    ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        skip 1
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        MISSING FIELD VALUES ARE NULL 
        ) 
    LOCATION ('CONSTRUCT.CSV')
    LOG ERRORS INTO ERR$_ENZYME ('INSERT') REJECT LIMIT UNLIMITED) ext
    where not exists (
        select * from enzyme e
        where e.enz_name = ext.enz_name
    );
Error at Command Line : 79 Column : 5
Error report -
SQL Error: ORA-00907: missing right parenthesis
00907. 00000 -  "missing right parenthesis"
*Cause:    
*Action:

Line 79 is the LOG ERRORS INTO line.

If I delete the LOG ERRORS INTO ERR$_ENZYME ('INSERT') part, this command functions perfectly.
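For reference, the DML error logging clause is documented as part of the INSERT statement itself, not of the inline external table definition (which takes its own REJECT LIMIT for load rejects). A sketch of the statement with the clause moved to the end of the INSERT — untested, so treat it as an assumption:

    insert /*+ ignore_row_on_dupkey_index ( enzyme ( enz_name ) ) */
    into enzyme
    SELECT enz_name FROM EXTERNAL ((
      construct_id NUMBER(10),
      n_term VARCHAR2 (50),
      enz_name VARCHAR2 (3),
      c_term VARCHAR2 (50),
      cpp VARCHAR2 (50),
      mutations VARCHAR2 (50),
      mw_kda NUMBER (7, 3))
        TYPE ORACLE_LOADER
        DEFAULT DIRECTORY data_to_input
        ACCESS PARAMETERS (
            RECORDS DELIMITED BY NEWLINE
            skip 1
            FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
            MISSING FIELD VALUES ARE NULL
            )
        LOCATION ('CONSTRUCT.CSV')
        REJECT LIMIT UNLIMITED) ext       -- external table's own reject limit stays inside
        where not exists (
            select * from enzyme e
            where e.enz_name = ext.enz_name
        )
    LOG ERRORS INTO ERR$_ENZYME ('INSERT') REJECT LIMIT UNLIMITED;  -- DML error logging goes here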

postgresql – Delete rows on a table with cascading foreign keys

First, it depends on how your foreign keys are declared. Assuming tables like:

CREATE TABLE parent
( pid ... not null primary key
, ...
);

CREATE TABLE child
( ...
, pid ... not null
    references parent (pid)
        on delete <action>
        on update ...
...
);

action can be any of:

  • NO ACTION
    Produce an error indicating that the deletion or update would create a foreign
    key constraint violation. If the constraint is deferred, this error will be
    produced at constraint check time if there still exist any referencing rows.
    This is the default action.

  • RESTRICT
    Produce an error indicating that the deletion or update would create a foreign key constraint violation. This is the same as NO ACTION except that the check is not deferrable.

  • CASCADE
    Delete any rows referencing the deleted row, or update the value of the referencing column to the new value of the referenced column, respectively.

  • SET NULL
    Set the referencing column(s) to null.

  • SET DEFAULT
    Set the referencing column(s) to their default values.

See https://www.postgresql.org/docs/9.2/sql-createtable.html

If your foreign keys are declared as “on delete cascade” it is – in theory – sufficient to delete the root node. In practice, there may be physical limitations that restrict the total number of rows that can be deleted in one transaction.
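To make that concrete, a minimal sketch (hypothetical pid/cid columns) showing the cascade:

    CREATE TABLE parent
    ( pid integer NOT NULL PRIMARY KEY );

    CREATE TABLE child
    ( cid integer NOT NULL PRIMARY KEY
    , pid integer NOT NULL
        REFERENCES parent (pid)
            ON DELETE CASCADE
    );

    INSERT INTO parent VALUES (1);
    INSERT INTO child VALUES (10, 1), (11, 1);

    -- Deleting the root row also deletes both referencing child rows
    DELETE FROM parent WHERE pid = 1;

    SELECT count(*) FROM child;  -- returns 0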

If you want to experiment with the different actions you can use Fiddle; 9.5 is the oldest version available there. If you are still on 9.2, consider upgrading to something more modern.

t-sql – Space Calculation for SQL Server Table: What’s wrong with my query?

I’m using the following query to calculate some space-related measurements on a particular table in my SQL Server database:

SELECT 
    t.name AS TableName, 
    p.rows,
    (sum(a.total_pages) * 8) as reserved,
    (sum(a.data_pages) * 8) as data,
    N'Not Needed' as index_size,
    (sum(a.total_pages) * 8) -  (sum(a.used_pages) * 8) as unused
FROM 
    sys.tables t
INNER JOIN      
    sys.indexes i ON t.object_id = i.object_id
INNER JOIN 
    sys.partitions p ON i.object_id = p.object_id AND i.index_id = p.index_id
INNER JOIN 
    sys.allocation_units a ON p.partition_id = a.container_id
WHERE t.name = N'XYZ' 
GROUP BY t.name, p.rows
ORDER BY 
   1 Desc;

It gives me correct results for the number of rows, reserved space, and unused space. However, when I compare its output with the output of the sp_spaceused stored procedure, I observe a different value for the space used by data:

(screenshot: sp_spaceused output vs. the query’s output)

How can I fix it?
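For reference, a sketch of computing the data figure the way sp_spaceused does: it counts used_pages for LOB and row-overflow allocation units, and data_pages only for the heap or clustered index itself (nonclustered index pages are excluded). This is an adaptation of the query above, not verified against every edge case:

    SELECT
        t.name AS TableName,
        p.rows,
        (SUM(a.total_pages) * 8) AS reserved,
        -- sp_spaceused's "data": used_pages for LOB/row-overflow units,
        -- data_pages only for the heap (index_id = 0) or clustered index (index_id = 1)
        SUM(CASE
                WHEN a.type <> 1 THEN a.used_pages
                WHEN p.index_id < 2 THEN a.data_pages
                ELSE 0
            END) * 8 AS data,
        (SUM(a.total_pages) - SUM(a.used_pages)) * 8 AS unused
    FROM sys.tables t
    INNER JOIN sys.indexes i ON t.object_id = i.object_id
    INNER JOIN sys.partitions p ON i.object_id = p.object_id AND i.index_id = p.index_id
    INNER JOIN sys.allocation_units a ON p.partition_id = a.container_id
    WHERE t.name = N'XYZ'
    GROUP BY t.name, p.rows;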

How do I prevent text from a different table from appearing on top of a picture in HTML email code?

I placed a picture in the first table with position: absolute so that text appears above it, but the problem I’m having now is that the text from the table below is also appearing on the image. How do I prevent this from happening?

        <tr>
          <td>
            <table width="100%" cellspacing="0" cellpadding="0" border="0" style="">
              <tr>
                <td>
                  <img src="https://stackoverflow.com/img/suit1.jpeg" width="590px;" height="500px;" style="position:absolute">
                  <h1>each</h1>
                  <button>SHOP Now</button>
                </td>
              </tr>
            </table>
          </td>
        </tr>
        <!-- end of row 3 -->
        <!-- start of row 4-->
        <tr>
          <td>
            <table width="100%" cellspacing="0" cellpadding="0" border="0" style="">
              <tr>
                <td>
                  <h1>hello</h1>
                </td>
              </tr>
            </table>
          </td>
        </tr>
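For what it’s worth, position: absolute is unreliable in most email clients, which is likely why the next row’s text flows over the image. A sketch of row 3 using the image as a td background instead, so the text and button stay inside the normal table flow (duplicating the URL in both the background attribute and the inline style is a common compatibility habit; unverified across all clients):

    <tr>
      <td>
        <table width="100%" cellspacing="0" cellpadding="0" border="0">
          <tr>
            <!-- Background image instead of an absolutely positioned <img> -->
            <td background="https://stackoverflow.com/img/suit1.jpeg"
                width="590" height="500"
                style="background: url('https://stackoverflow.com/img/suit1.jpeg') no-repeat; background-size: cover;">
              <h1>each</h1>
              <button>SHOP Now</button>
            </td>
          </tr>
        </table>
      </td>
    </tr>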

MariaDB subqueries to same table and column resulting in several columns

I have a table and want to pick monthly minute data and compare it column-wise, on 10.3.13-MariaDB.

I have tested different approaches for hours and hours without success; one example is below. Some attempts don’t fail syntactically but take forever, and some complain about unrecognized column names. Each subquery, tested separately, returns the same number of records, each in one column.

SELECT RD, OT1, OT2, OT3 FROM
(SELECT rdate from OO where month(rdate) = 7 and year(rdate) = 2006) AS RD,
(SELECT ot from OO where month(rdate) = 7 and year(rdate) = 2006) AS OT1,
(SELECT ot from OO where month(rdate) = 7 and year(rdate) = 2007) AS OT2,
(SELECT ot from OO where month(rdate) = 7 and year(rdate) = 2008) AS OT3;

The result should be something like:

RD                   OT1     OT2     OT3
2006-07-01 00:00:00  1.2345  2.1234  1.543
…                    …       …       …
2006-07-31 23:59:00  3.456   3.234   2.234

And no, I don’t want to use UNION, because then the values will still follow one after the other…

Any thoughts?!
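One direction that might work, sketched under the assumption that OO has columns (rdate, ot) and that every minute of July exists in all three years (otherwise switch to LEFT JOINs): join the table to itself, shifting rdate by whole years so the rows line up minute by minute.

    SELECT o6.rdate AS RD,
           o6.ot    AS OT1,
           o7.ot    AS OT2,
           o8.ot    AS OT3
    FROM OO o6
    JOIN OO o7 ON o7.rdate = o6.rdate + INTERVAL 1 YEAR   -- same minute in 2007
    JOIN OO o8 ON o8.rdate = o6.rdate + INTERVAL 2 YEAR   -- same minute in 2008
    WHERE o6.rdate >= '2006-07-01'
      AND o6.rdate <  '2006-08-01';

An index on OO (rdate) should keep the self-joins from “taking forever”.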

magento2 – Auto Increment column do not have index. Column – “value_id”, table – “catalog_product_entity_text”

I’m getting the following error after upgrading Magento to version 2.3.5, when I try to run php bin/magento setup:upgrade:

Auto Increment column do not have index. Column – “value_id”, table –
“catalog_product_entity_text”

I tried to add an index to the catalog_product_entity_text table as follows, but I still get the same error message.

Keyname   Type   Unique  Packed  Column    Cardinality  Collation  Null
value_id  BTREE  Yes     No      value_id  1524         A          No
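For reference, one commonly suggested check, offered as an assumption rather than a verified fix: the setup validation expects the auto-increment column to be covered by a key, and in a stock installation value_id is the table’s primary key. So inspect the indexes and, only if the table has somehow lost its primary key, restore it:

    SHOW INDEX FROM catalog_product_entity_text;

    -- Only if the table no longer has a primary key on value_id:
    ALTER TABLE catalog_product_entity_text
        ADD PRIMARY KEY (value_id);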

magento2 – Render sales email items table into variable

I need to pass the content of the order items table from the new order email to an external service. In Magento 1 I used this approach and it worked fine:

      $appEmulation = Mage::getSingleton('core/app_emulation');
      $initialEnvironmentInfo = $appEmulation->startEnvironmentEmulation($order->getStoreId());
      $layout = Mage::getModel('core/layout');
      $layoutUpdate = $layout->getUpdate();
      $layoutUpdate->load('sales_email_order_items');
      $layout->generateXml();
      $layout->generateBlocks();
      $items = $layout->getBlock('items');
      $items->setOrder($order);
      $orderItemsHtml = $items->toHtml();
      $appEmulation->stopEnvironmentEmulation($initialEnvironmentInfo);

      return $orderItemsHtml;

I use this approach instead of rendering the block directly because various extensions extend the layout that is used to create that table.

I’m trying to port this to M2 and am struggling to get access to the order items block.

I tried various versions, but the layout never seems to be loaded. For example:

public function __construct(
    \Magento\Framework\View\Result\PageFactory $pageFactory
)
{
    $this->pageFactory = $pageFactory;
}

...

protected function getOrderItemsHtml($order)
{
    /** @var \Magento\Framework\View\Result\Page $page */
    $page = $this->pageFactory->create(\Magento\Framework\Controller\ResultFactory::TYPE_PAGE);
    $page->addHandle('sales_email_order_items');
    $blocks = $page->getLayout()->getAllBlocks();
    var_dump(array_keys($blocks)); die;
}

This will output:

array (size=1)
  0 => string 'messages' (length=8)

If anyone has any idea what I’m missing or if anyone can point me to an alternative approach for this I would be very grateful. Thanks!
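For reference, a sketch of the direct-layout approach, closer to the M1 code above, built on the generic layout API rather than the Page result. Assumptions: \Magento\Framework\View\LayoutFactory $layoutFactory and \Magento\Store\Model\App\Emulation $appEmulation are constructor-injected, and ‘items’ is the block name declared by the sales_email_order_items handle; unverified.

    protected function getOrderItemsHtml($order)
    {
        // Emulate the order's store so the block renders with the right theme/locale
        $this->appEmulation->startEnvironmentEmulation(
            $order->getStoreId(),
            \Magento\Framework\App\Area::AREA_FRONTEND,
            true
        );

        /** @var \Magento\Framework\View\Layout $layout */
        $layout = $this->layoutFactory->create();
        $layout->getUpdate()->load(['sales_email_order_items']);
        $layout->generateXml();
        $layout->generateElements();

        $block = $layout->getBlock('items');
        $block->setOrder($order);
        $html = $block->toHtml();

        $this->appEmulation->stopEnvironmentEmulation();

        return $html;
    }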

express edition – SQL Server 2016: 313 MB available in the database but table size cannot grow

Yesterday, the below error was reported:

Could not allocate space for object ‘dbo.X’.’Y’ in database ‘Z’ because the ‘PRIMARY’ filegroup is full

After an engineer deleted some records from the table, the error cleared. I couldn’t check the details yesterday, as I didn’t have admin access at the time; my access was sorted out later on. When I checked the DB size today, I observed the following:

(screenshot: database size and available free space)

There is 312.63 MB of free space available, meaning 312.63 MB of space is allocated to the database but not yet allocated to any page or object (please correct me if I’m wrong). I don’t expect yesterday’s delete operation to have released any pages/space. So why wasn’t the database able to use this space, which was readily available and already allocated to the database?
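A quick way to double-check this per data file, sketched here for reference (size and the ‘SpaceUsed’ counter are in 8 KB pages, hence the division by 128):

    SELECT name,
           size / 128.0 AS SizeMB,
           FILEPROPERTY(name, 'SpaceUsed') / 128.0 AS UsedMB,
           (size - FILEPROPERTY(name, 'SpaceUsed')) / 128.0 AS FreeMB
    FROM sys.database_files
    WHERE type_desc = 'ROWS';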

I rule out the possibility that the file has grown any further since yesterday, because there was ample disk space available when the incident occurred. This is SQL Server 2016 SP1 Express Edition, with Autogrowth enabled, a file growth increment of 64 MB, and the maximum size set to Unlimited. Considering:

  10184 MB + 64 MB = 10248 MB > 10240 MB (= 10 GB = maximum allowed DB size in Express Edition)

It’s obvious the file couldn’t (and can’t) grow any further.

While the database couldn’t grow the file, it still could have used whatever space was available to it. So why didn’t that happen?

Could it be that some objects were dropped after the delete was performed yesterday?

postgresql – Would this kind of table structure be wasteful and stupid, or superior to my current way of doing it?

I currently have this table structure:

    CREATE TABLE people
    (
        id              bigserial,
        timestamp       timestamptz DEFAULT now() NOT NULL,
        PRIMARY KEY     (id)
    );

    CREATE TABLE "personal information"
    (
        id                  bigserial,
        "person id"         bigint NOT NULL,
        timestamp           timestamptz DEFAULT now() NOT NULL,
        "data's timestamp"  timestamptz,
        "field"             text NOT NULL,
        "value"             text,
        PRIMARY KEY         (id),
        FOREIGN KEY         ("person id") REFERENCES people (id) ON UPDATE CASCADE ON DELETE CASCADE
    );

I was criticized for this because the “value” column in the “personal information” table now has to hold all kinds of different types, as text, which is not good for optimization/query planning and is also problematic in various ways. (How should a boolean be stored? As textual ‘0’s and ‘1’s? As ‘true’ and ‘false’ strings? Etc.)

Somebody suggested that I use JSON to store the data, but that somehow doesn’t “seem right” to me either, even though PG now has native support for JSON and “understands” it. I can’t articulate why exactly it seems wrong to me.

This made me rethink this in my head, and I instead came up with this (rather obvious) alternative:

    CREATE TABLE people
    (
        id              bigserial,
        timestamp       timestamptz DEFAULT now() NOT NULL,
        PRIMARY KEY     (id)
    );

    CREATE TABLE "personal information"
    (
        id                  bigserial,
        "person id"         bigint NOT NULL,
        timestamp           timestamptz DEFAULT now() NOT NULL,
        "data's timestamp"  timestamptz,
        "field 1"           text,
        "field 2"           boolean,
        "field 3"           integer,
        (...)
        "field 99"          numeric,
        PRIMARY KEY         (id),
        FOREIGN KEY         ("person id") REFERENCES people (id) ON UPDATE CASCADE ON DELETE CASCADE
    );

(The column names aren’t literally “field X”. This is just an example.)

Now I have one column for each of the possible “fields” of personal data, rather than stuffing them all into “field” (for the name) and “value” (for the value, as a text representation). I also don’t “abstract it away” from normal SQL by storing everything as JSON blobs.

The problems/fears I have with this are:

  1. That it will waste a ton of storage space. Each row of personal information, even if it only has the “first name” and “e-mail address” fields filled in, will now (if I understand things correctly) take up a “wide” area of storage, due to all the null columns with no values. Whereas in my old structure, all actual columns are utilized, meaning less storage waste. Maybe it won’t matter for 10 or 100 or even 1,000 rows, and I have no real grasp of how big the difference is, but eventually it might stack up, and then my “proper” structure falls apart? Or is PG smart enough about this that it internally doesn’t store a “null” symbol for all the fields, but has some way of “skipping over” them in its data structure?
  2. It will perpetually require me to modify the table by adding more columns whenever new “personal information” fields are required. This problem doesn’t exist with my current approach or the JSON method (which are basically the same thing).

What do you have to say about this? Is it “worth it” to do it this “proper” way rather than stuffing them into the generic field/value columns (whether those generic columns are of JSON or text type)?
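As a way to test the storage fear in point 1: PostgreSQL tracks NULLs in a per-row null bitmap, roughly one bit per column, so unset columns should cost almost nothing on disk. A sketch with hypothetical tables to measure it:

    CREATE TABLE narrow (a integer);
    CREATE TABLE wide   (a integer, b text, c boolean, d integer, e numeric);

    INSERT INTO narrow VALUES (1);
    INSERT INTO wide (a) VALUES (1);  -- b..e are left NULL

    -- Compare the on-disk size of a single row from each table
    SELECT pg_column_size(n.*) AS narrow_bytes FROM narrow n;
    SELECT pg_column_size(w.*) AS wide_bytes   FROM wide w;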

oracle – Insert/Update table on Pentaho when the table has a TYPE defined by user

I’m trying to make a remote connection from computer A to computer B. They have the same database, but computer B doesn’t have data in the table PERSONA_Y_ESTADOS. The goal is to get 3 rows from the table on computer A (which is populated) into the table on computer B, which is empty. However, it’s giving me these errors:

2020/08/02 18:37:10 - Spoon - Running transformation using the Kettle execution engine
2020/08/02 18:37:10 - Spoon - Transformation opened.
2020/08/02 18:37:10 - Spoon - Launching transformation (pupu)...
2020/08/02 18:37:10 - Spoon - Started the transformation execution.
2020/08/02 18:37:10 - Insert / update.0 - ERROR (version 9.0.0.0-423, build 9.0.0.0-423 from 2020-01-31 04.53.04 by buildguy) : Error in step, asking everyone to stop because of:
2020/08/02 18:37:10 - Insert / update.0 - ERROR (version 9.0.0.0-423, build 9.0.0.0-423 from 2020-01-31 04.53.04 by buildguy) : org.pentaho.di.core.exception.KettleDatabaseException: 
2020/08/02 18:37:10 - Insert / update.0 - Error looking up row in database
2020/08/02 18:37:10 - Insert / update.0 - ORA-00904: "FECHAS_INICIO_FIN.FECHA_FIN": invalid identifier

2020/08/02 18:37:10 - Insert / update.0 - 
2020/08/02 18:37:10 - Insert / update.0 -   at org.pentaho.di.core.database.Database.getLookup(Database.java:3108)
2020/08/02 18:37:10 - Insert / update.0 -   at org.pentaho.di.core.database.Database.getLookup(Database.java:3087)
2020/08/02 18:37:10 - Insert / update.0 -   at org.pentaho.di.core.database.Database.getLookup(Database.java:3083)
2020/08/02 18:37:10 - Insert / update.0 -   at org.pentaho.di.trans.steps.insertupdate.InsertUpdate.lookupValues(InsertUpdate.java:89)
2020/08/02 18:37:10 - Insert / update.0 -   at org.pentaho.di.trans.steps.insertupdate.InsertUpdate.processRow(InsertUpdate.java:299)
2020/08/02 18:37:10 - Insert / update.0 -   at org.pentaho.di.trans.step.RunThread.run(RunThread.java:62)
2020/08/02 18:37:10 - Insert / update.0 -   at java.lang.Thread.run(Thread.java:748)
2020/08/02 18:37:10 - Insert / update.0 - Caused by: java.sql.SQLSyntaxErrorException: ORA-00904: "FECHAS_INICIO_FIN.FECHA_FIN": invalid identifier

2020/08/02 18:37:10 - Insert / update.0 -   at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:447)
2020/08/02 18:37:10 - Insert / update.0 -   at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
2020/08/02 18:37:10 - Insert / update.0 -   at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:951)
2020/08/02 18:37:10 - Insert / update.0 -   at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:513)
2020/08/02 18:37:10 - Insert / update.0 -   at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:227)
2020/08/02 18:37:10 - Insert / update.0 -   at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:531)
2020/08/02 18:37:10 - Insert / update.0 -   at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:208)
2020/08/02 18:37:10 - Insert / update.0 -   at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:886)
2020/08/02 18:37:10 - Insert / update.0 -   at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1175)
2020/08/02 18:37:10 - Insert / update.0 -   at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1296)
2020/08/02 18:37:10 - Insert / update.0 -   at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3613)
2020/08/02 18:37:10 - Insert / update.0 -   at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3657)
2020/08/02 18:37:10 - Insert / update.0 -   at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeQuery(OraclePreparedStatementWrapper.java:1495)
2020/08/02 18:37:10 - Insert / update.0 -   at org.pentaho.di.core.database.Database.getLookup(Database.java:3093)
2020/08/02 18:37:10 - Insert / update.0 -   ... 6 more
2020/08/02 18:37:10 - pupu - Transformation detected one or more steps with errors.
2020/08/02 18:37:10 - pupu - Transformation is killing the other steps!
2020/08/02 18:37:10 - pupu - ERROR (version 9.0.0.0-423, build 9.0.0.0-423 from 2020-01-31 04.53.04 by buildguy) : Errors detected!
2020/08/02 18:37:10 - Spoon - The transformation has finished!!
2020/08/02 18:37:10 - pupu - ERROR (version 9.0.0.0-423, build 9.0.0.0-423 from 2020-01-31 04.53.04 by buildguy) : Errors detected!
2020/08/02 18:37:10 - pupu - ERROR (version 9.0.0.0-423, build 9.0.0.0-423 from 2020-01-31 04.53.04 by buildguy) : Errors detected!

I assume this is due to the TYPE I created for that table; however, I don’t know how to solve the problem without removing the TYPE (which I really don’t want to do).

Here’s the script for the creation of my table:

CREATE OR REPLACE TYPE Fechas_inicio_fin AS OBJECT(
fecha_inicio date,
fecha_fin date
);
/
CREATE TABLE Persona_y_Estado (
  id number primary key,
  fechas_inicio_fin Fechas_inicio_fin,
  fk_persona number,
  fk_estado number
);
/
insert into Persona_y_estado (fechas_inicio_fin, fk_persona, fk_estado) values (fechas_inicio_fin(TO_DATE('6/20/2020', 'mm/dd/yyyy'), TO_DATE('1/1/2050', 'mm/dd/yyyy')), 1, 1);
insert into Persona_y_estado (fechas_inicio_fin, fk_persona, fk_estado) values (fechas_inicio_fin(TO_DATE('6/20/2020', 'mm/dd/yyyy'), TO_DATE('1/1/2050', 'mm/dd/yyyy')), 2, 1);
insert into Persona_y_estado (fechas_inicio_fin, fk_persona, fk_estado) values (fechas_inicio_fin(TO_DATE('6/20/2020', 'mm/dd/yyyy'), TO_DATE('1/1/2050', 'mm/dd/yyyy')), 3, 1);
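One plausible explanation, offered as an assumption: Oracle only lets you reference object-type attributes through a table alias (e.g. p.fechas_inicio_fin.fecha_fin), and the lookup SQL that the Insert/Update step generates apparently doesn’t add one, hence ORA-00904. A sketch of a flattening view that the Pentaho step could target instead of the table, keeping the TYPE in place:

    -- The alias "p" is required to reach the object's attributes
    CREATE OR REPLACE VIEW persona_y_estado_flat AS
    SELECT p.id,
           p.fechas_inicio_fin.fecha_inicio AS fecha_inicio,
           p.fechas_inicio_fin.fecha_fin    AS fecha_fin,
           p.fk_persona,
           p.fk_estado
    FROM persona_y_estado p;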