magento2.3.5 – Basic GraphQL query throws an error

I’m having a problem with the new 2.3.5 version, but only on one of our websites.

I’ve been using this query for months and it was working well, but since we updated to 2.3.5 it no longer works.

The query:

{
  products(filter: { sku: { eq: "test_dev_product" } }) {
    items {
      tier_prices {
        qty
        value
      }
    }
  }
}

It’s a basic query, but with Postman it now throws this error:

  Uncaught Error: Cannot instantiate interface Magento\Framework\GraphQl\Query\ErrorHandlerInterface in magentoRoot/vendor/magento/framework/ObjectManager/Factory/Dynamic/Developer.php:50
Stack trace:
#0 magentoRoot/vendor/magento/framework/ObjectManager/ObjectManager.php(70): Magento\Framework\ObjectManager\Factory\Dynamic\Developer->create('Magento\Framewo...')
#1 magentoRoot/vendor/magento/framework/ObjectManager/Factory/AbstractFactory.php(167): Magento\Framework\ObjectManager\ObjectManager->get('Magento\Framewo...')
#2 magentoRoot/vendor/magento/framework/ObjectManager/Factory/AbstractFactory.php(273): Magento\Framework\ObjectManager\Factory\AbstractFactory->resolveArgument(Array, 'Magento\Framewo...', NULL, 'errorHandler', 'Magento\Framewo...')
#3 magentoRoot/vendor/magento/framework/ObjectManager/Factory/AbstractFactory.php(236): Magento\Framework\ObjectManager\Factory\AbstractFactory->getResolvedArgument('Magento\Framewo...', Array, Array)
#4 magentoRoot/vendor/magento in magentoRoot/vendor/magento/framework/ObjectManager/Factory/Dynamic/Developer.php

As the same query works fine on 2 other websites but not this one, maybe someone has found something? Some say SSL is the problem, but I can’t find a way to disable it for GraphQL only.
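For anyone trying to reproduce this outside Postman, the same request can be sent with curl. This is only a sketch: the store hostname below is a placeholder, and only the query itself is taken from above.

```shell
# GraphQL query collapsed to one line; inner quotes escaped for the JSON body.
PAYLOAD='{"query":"{ products(filter: { sku: { eq: \"test_dev_product\" } }) { items { tier_prices { qty value } } } }"}'
echo "$PAYLOAD"

# Hypothetical endpoint -- replace with the affected store's /graphql URL:
# curl -X POST https://store.example.com/graphql \
#      -H 'Content-Type: application/json' \
#      --data "$PAYLOAD"
```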

Thank you for your time,

sql server – Is there any way I can speed up this large full-table query?

I have a query that selects from a single table with one WHERE filter, yet it takes a very long time to execute and occasionally times out. That is likely because it filters about 4 million rows out of a 13-million-row table (the other 9 million records are older than 2019) and returns all 101 columns (a mix of datetime, varchar, and int). The table has two indexes: a clustered one on its primary key interaction_id, and a non-clustered index on interaction_date, the datetime column that is the main filter. This is the query:

SELECT *
FROM [Sales].[dbo].[Interaction]
WHERE year(Interaction_date) >= 2019

Is there anything obvious I can do to improve this query’s performance by adding or tweaking indexes, or by tweaking the query itself? Before I build an ETL process or push back on the group that needs this query (a Hadoop sqooping team who insist they need to sqoop all of these records, all the time, with all of the columns), I want to see if I can make it easier on everyone by doing something on my end as the DBA.

By default the query plan ignores my non-clustered index on the interaction_date column and still does a full clustered index scan. So I tried forcing it by adding WITH (INDEX(IX_Interaction_Interaction_Date)) to the select.
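Concretely, the hinted form of the statement looks like this:

```sql
SELECT *
FROM [Sales].[dbo].[Interaction] WITH (INDEX(IX_Interaction_Interaction_Date))
WHERE year(Interaction_date) >= 2019
```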

This forces a plan that starts with an index scan of the non-clustered index, with an estimate of 4 million rows but an estimated 13 million rows to be read. After a short time, it then spends the rest of the execution on key lookups against the clustered primary key index.

But ultimately, it doesn’t seem to speed up the query at all.
Any thoughts on how I can handle this? Thanks.

sql server – Improve SELECT query performance

We have the following properties.

Db : Microsoft SQL Server 2012 (SP3) (KB3072779) - 11.0.6020.0 (X64) 
    Oct 20 2015 15:36:27 
    Copyright (c) Microsoft Corporation
    Web Edition (64-bit) on Windows NT 6.3 <X64> (Build 14393: )

Machine capacity : 32 GB RAM
Processor        : Intel(R) Xeon(R) CPU E5530 @ 2.40 GHz

We have a table that contains 2 million records (32 columns in total, 1 composite index), used for OLTP file writes (INSERT) and reporting (SELECT).

Index Definition :

CREATE NONCLUSTERED INDEX [NonClusteredIndex-DSP_DTL] ON [dbo].[Dispatch_detail]
    (
        [DSPDTL_FIN_YEAR] ASC,
        [DSPDTL_DIV_ID] ASC,
        [DSPDTL_PROD_ID] ASC,
        [DSPDTL_DT] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]

Right now, if I fire a ‘select * from’ against the table, it takes 2 minutes and 45 seconds to fetch the data.
The host the database runs on has a good configuration; even a ‘select * from’ shouldn’t take this long.
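For reference, the statement being timed is just the bare scan (table name taken from the index definition above):

```sql
SELECT * FROM [dbo].[Dispatch_detail];
```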

How can I improve the performance of this query?
Please share your ideas.

c# – Convert LINQ query to a DataTable

I need to convert my LINQ query to a DataTable. I’m using the method below, but it doesn’t work; it gives me this error: ‘Cannot implicitly convert type ‘System.Collections.Generic.IEnumerable’ to ‘System.Collections.Generic.List’. An explicit conversion exists (are you missing a cast?)’
I need to convert it, and afterwards to a DataSet, to use in my Crystal Report.

var list = db.LIST_(53);
DataSet converted = list;
return converted;
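For context, the usual way to bridge a LINQ result to a DataTable is to build the table by hand. Below is a sketch, not a drop-in fix: the extension method is generic, and the commented usage assumes `db.LIST_(53)` returns an `IEnumerable` of some entity type.

```csharp
using System;
using System.Collections.Generic;
using System.Data;
using System.Linq;
using System.Reflection;

public static class DataTableExtensions
{
    // Copies the public properties of each element into a DataTable.
    public static DataTable ToDataTable<T>(this IEnumerable<T> source)
    {
        var table = new DataTable(typeof(T).Name);
        PropertyInfo[] props = typeof(T).GetProperties(BindingFlags.Public | BindingFlags.Instance);

        foreach (var prop in props)
            table.Columns.Add(prop.Name,
                Nullable.GetUnderlyingType(prop.PropertyType) ?? prop.PropertyType);

        foreach (var item in source)
            table.Rows.Add(props.Select(p => p.GetValue(item) ?? (object)DBNull.Value).ToArray());

        return table;
    }
}

// Hypothetical usage, wrapping the result in a DataSet for Crystal Reports:
// var list = db.LIST_(53);
// var ds = new DataSet();
// ds.Tables.Add(list.ToDataTable());
// return ds;
```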

Unexpected query execution plan on MySQL 8 when using Datetime range

I have a MySQL 8 database with about 900 million rows. When I run the following query, the response is very slow, because table B doesn’t seem to use the right index:

SELECT B.name
FROM A JOIN B ON A.key1 = B.key1 AND A.key2 = B.key2
WHERE
    A.active = TRUE
    AND B.datetime > '2020-05-28 00:00:00' AND B.datetime < '2020-05-29 00:00:00'
    AND A.type = 1

The query has the following execution plan (EXPLAIN FORMAT=JSON):

{
  "query_block": {
    "select_id": 1,
    "cost_info": {
      "query_cost": "10202799.44"
    },
    "nested_loop": [
      {
        "table": {
          "table_name": "A",
          "access_type": "ref",
          "possible_keys": [
            "ix_A_type_key1"
          ],
          "key": "ix_A_type_active",
          "used_key_parts": [
            "type",
            "active"
          ],
          "key_length": "5",
          "ref": [
            "const",
            "const"
          ],
          "rows_examined_per_scan": 62738,
          "rows_produced_per_join": 62738,
          "filtered": "100.00",
          "cost_info": {
            "read_cost": "62738.00",
            "eval_cost": "6273.80",
            "prefix_cost": "69011.80",
            "data_read_per_join": "1G"
          },
          "used_columns": [
            "active",
            "type",
            "key1",
            "key2"
          ]
        }
      },
      {
        "table": {
          "table_name": "B",
          "access_type": "ref",
          "possible_keys": [
            "ix_B_key1_datetime",
            "ix_B_key2_datetime"
          ],
          "key": "ix_B_key2_datetime",
          "used_key_parts": [
            "key2"
          ],
          "key_length": "258",
          "ref": [
            "A.key2"
          ],
          "rows_examined_per_scan": 147,
          "rows_produced_per_join": 3136,
          "filtered": "0.03",
          "cost_info": {
            "read_cost": "9211170.19",
            "eval_cost": "313.69",
            "prefix_cost": "10202799.44",
            "data_read_per_join": "96M"
          },
          "used_columns": [
            "key1",
            "key2",
            "datetime",
            "name"
          ],
          "attached_condition": "((`B`.`key1` = `A`.`key1`) and (`B`.`datetime` > TIMESTAMP'2020-05-28 00:00:00') and (`B`.`datetime` < TIMESTAMP'2020-05-29 00:00:00'))"
        }
      }
    ]
  }
}

In the used_key_parts for table B, only the column key2 is used instead of the full composite index key2 + datetime (ix_B_key2_datetime). Why is the datetime part of the composite index not used? Is there a way to speed up this query?

user interface – Tool/media/platform to host a query tool backed by Python

I am making a simple query form and am looking for a way to realize this idea.

(screenshot of the intended query form)

Basically,

  1. It lets users input a string.
  2. Clicking a button runs a Python script that checks whether the user’s input matches any key in a preset (Python) dictionary.
  3. If the input has a match, it returns the value; otherwise, an error message.
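For concreteness, the lookup logic in steps 1 to 3 is tiny; a sketch (the dictionary contents below are invented):

```python
# Minimal version of the lookup the form needs (dictionary contents are made up).
LOOKUP = {
    "alpha": "Value for alpha",
    "beta": "Value for beta",
}

def query(user_input: str) -> str:
    """Return the mapped value, or an error message if there is no match."""
    key = user_input.strip()
    if key in LOOKUP:
        return LOOKUP[key]
    return f"Error: no entry found for {key!r}"

print(query("alpha"))  # -> Value for alpha
```

Whatever hosting platform is chosen only needs to wrap this function behind a text box and a button.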

This tool will be used and circulated internally in a company.

Originally I wanted to build it in Excel, using VBA to invoke the Python script. However, the online version of Excel cannot run VBA, and users don’t have Python installed on their machines.

I don’t know what tool or platform could help me realize this, so I’m looking for alternatives (for example, something in the Microsoft Office family).

Can anybody please shed some light? Thank you.

mysql – SQL SELECT query is taking a long time on a new server

I am migrating our live website server from Rackspace to AWS. We have finished everything except the following issue:

A query that runs within a second on the live server takes around 5 minutes to execute on the AWS server.

I have already checked the MySQL configuration and it is the same; we kept the live configuration for MySQL. I have imported the database three times, but the problem persists.

Here are the MySQL versions of both servers:
Rackspace – 5.5.61
AWS – 5.7.29
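For what it’s worth, “same configuration” is worth verifying on the server side as well, since 5.7 changed many defaults (notably optimizer behavior) even when my.cnf is identical. A generic check, not specific to this setup:

```sql
-- Run on both servers and diff the output:
SHOW VARIABLES LIKE 'optimizer_switch';
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
```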

Here is that query:

SELECT DISTINCT(listname),
       tbcc_l_lists.id, tbcc_l_lists.reportcode, tbcc_l_lists.list_type, tbcc_l_lists.list_url,
       tbcc_l_lists.researchby, tbcc_l_lists.linkassigndate, tbcc_l_lists.startdate,
       tbcc_l_lists.completeddate, tbcc_l_lists.team, tbcc_l_lists.status, tbcc_l_lists.priority,
       tbcc_l_lists.list_url_path, tbcc_l_lists.comments, tbcc_l_lists.added_by,
       tbcc_l_categorygroup.name, tbcc_l_listtype.listtype_name,
       (SELECT COUNT(tbcc_l_contacts.primary_list) FROM tbcc_l_contacts
        WHERE tbcc_l_lists.id = tbcc_l_contacts.primary_list) AS contactcount,
       (SELECT COUNT(tbcc_l_contacts_rejects.primary_list) FROM tbcc_l_contacts_rejects
        WHERE tbcc_l_lists.id = tbcc_l_contacts_rejects.primary_list) AS rejectcount,
       tbcc_l_admins.username, tbcc_l_admins.fname, tbcc_l_admins.lname,
       admins.username AS team_username, admins.fname AS team_fname, admins.lname AS team_lname,
       admins1.username AS addedby_username, admins1.fname AS addedby_fname, admins1.lname AS addedby_lname
FROM tbcc_l_lists
LEFT JOIN tbcc_l_categorygroup ON tbcc_l_lists.categorygroup = tbcc_l_categorygroup.id
LEFT JOIN tbcc_l_listtype ON tbcc_l_lists.list_type = tbcc_l_listtype.listtype_id
LEFT JOIN tbcc_l_contacts ON tbcc_l_lists.id = tbcc_l_contacts.primary_list
LEFT JOIN tbcc_l_contacts_rejects ON tbcc_l_lists.id = tbcc_l_contacts_rejects.primary_list
LEFT JOIN tbcc_l_admins ON tbcc_l_admins.id = tbcc_l_lists.researchby
LEFT JOIN tbcc_l_admins AS admins ON admins.id = tbcc_l_lists.team
LEFT JOIN tbcc_l_admins AS admins1 ON admins1.id = tbcc_l_lists.added_by
WHERE (1=1)
LIMIT 200000;

I have been working on this issue for the last couple of days; any help would be highly appreciated.

mysql – SQL query to return result with different row

I am working with a database that has a table called ‘order’ that looks like the following.

id    day    item  
-------------------
11     1     apple    
11     3     banana  
11     4     berry  
22     1     coke  
22     3     pepsi  
33     2     chips  
33     4     salsa  

I want the MySQL query output to look like the following:

id    1       2       3       4
------------------------------------
11    apple   null    banana   berry  
22    coke    null    pepsi    null  
33    null    chips   null     salsa  

Is it possible to get such an output using MySQL?
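For reference, the usual pattern for this kind of pivot in MySQL is conditional aggregation. A sketch against the table shown above (`order` is backtick-quoted because it is a reserved word, and `day` is quoted to be safe):

```sql
SELECT id,
       MAX(CASE WHEN `day` = 1 THEN item END) AS `1`,
       MAX(CASE WHEN `day` = 2 THEN item END) AS `2`,
       MAX(CASE WHEN `day` = 3 THEN item END) AS `3`,
       MAX(CASE WHEN `day` = 4 THEN item END) AS `4`
FROM `order`
GROUP BY id;
```

Days with no row for an id naturally come out as NULL, matching the desired layout.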

query – How to reproduce an SQL injection problem by sending a single quote in MySQL?

This is Damn Vulnerable Web Application (DVWA), and it’s vulnerable to SQL injection (SQLi).

Let’s begin by sending a normal request:

http://127.0.0.1/dvwa/vulnerabilities/sqli/?id=1&Submit=Submit#

Output via browser

ID: 1
First name: admin
Surname: admin

This is what the request looks like in MySQL:

mysql> SELECT first_name, last_name FROM users WHERE user_id = '1';
+------------+-----------+
| first_name | last_name |
+------------+-----------+
| admin      | admin     |
+------------+-----------+
1 row in set (0.00 sec)

mysql> 

A common way to identify SQL injection is to send a single quote ' character in the parameter.

E.g. id='

Give it a try on the URL and it works:

http://127.0.0.1/dvwa/vulnerabilities/sqli/?id='&Submit=Submit#

The web browser displays an SQL error, indicating that the site is vulnerable to SQLi:

You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''''' at line 1

I don’t know what the query looks like in MySQL, so I tried SELECT first_name, last_name FROM users WHERE user_id = '''; but I didn’t get the same error.

Instead, I got the '> continuation prompt in the MySQL shell:

mysql> SELECT first_name, last_name FROM users WHERE user_id = ''';
    '> 
    '> 
    '> '
    -> 
    -> ;
Empty set (0.00 sec)

mysql> 

What is the right way to send the id=' (i.e. user_id = ', a single quote) request to MySQL?
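One detail worth noting: the interactive mysql prompt treats the trailing quote as the start of a new string literal, hence the '> continuation. Passing the statement non-interactively hands the server the unterminated literal as-is, which should reproduce the 1064 error (a sketch; user and database names are placeholders):

```shell
# What the shell passes to mysql: the statement ends in three literal single quotes,
# which is exactly what the vulnerable page builds when id=' is submitted.
STMT="SELECT first_name, last_name FROM users WHERE user_id = '''"
printf '%s\n' "$STMT"

# Sending it non-interactively (hypothetical credentials):
# mysql -u dvwa -p dvwa -e "$STMT"
```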

Group by in Google Sheets QUERY, selecting different columns

I want to get the results described in the screenshot. How can I achieve this? Thank you.

(screenshot of the sheet and desired results)

QUERY:

=QUERY(A2:F12,"SELECT A, MAX(B), MIN(B) group by A label A 'G V1', MAX(B) 'LAST V2', MIN(B) 'FIRST V2'")

I want to get the first and last V5 values in terms of the min and max dates in V2 (see K2 and L2 for the desired results). I can already get the first and last dates; now I want their corresponding V5 values.