postgresql – Queries on a large database kill the server connection, but work with LIMIT

I am trying to run queries against a large database without the connection to the server being killed.

I use Postgres 12.1 on a Mac with 16 GB of RAM and about 40 GB of free disk space. The database is 78 GB according to pg_database_size(), with the largest table at 20 GB according to pg_total_relation_size().

The error I get (from the log), regardless of which failing query I run, is:

server process (PID xxx) was terminated by signal 9: Killed: 9

In VS Code the error shown is "lost connection to server".

Two examples that don't work are:

UPDATE table
SET column = NULL
WHERE column = 0;

SELECT columnA
FROM table1
WHERE columnA NOT IN (
    SELECT columnB
    FROM table2
);

I can run some of the queries (for example, the UPDATE above) by adding a LIMIT, for example 1,000,000 rows at a time.
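A minimal sketch of that workaround for the UPDATE (table and column names are placeholders; PostgreSQL has no LIMIT clause on UPDATE, so I select the batch via ctid):

-- Update at most 1,000,000 rows per statement;
-- rerun until the statement reports "UPDATE 0".
UPDATE mytable
SET mycolumn = NULL
WHERE ctid IN (
    SELECT ctid
    FROM mytable
    WHERE mycolumn = 0
    LIMIT 1000000
);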

I suspected that I was running out of disk space due to temporary files, but in the log (with log_temp_files = 0) I can't see any temporary files being written.

I tried increasing and decreasing work_mem, maintenance_work_mem, shared_buffers, and temp_buffers. None of it helped; the behavior was about the same.

I tried dropping all the indexes, which reduced the "cost" of some of the queries, but the connection to the server was still lost.

What could my problem be, and how can I troubleshoot it further?

I also read that temporary files from spilled queries are stored in pgsql_tmp. I checked the folder and it doesn't contain any files of significant size. Could the temporary files be stored elsewhere?
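To double-check where temporary files would be written, I looked at the data directory and temp tablespaces (pgsql_tmp lives under base/ in the data directory, or under each tablespace listed in temp_tablespaces):

-- Temporary files default to <data_directory>/base/pgsql_tmp ...
SHOW data_directory;
-- ... or go to each tablespace named here, if any are set.
SHOW temp_tablespaces;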


The log output for a failed query looks like this:

2020-02-17 09:31:08.626 CET (94908) LOG:  server process (PID xxx) was terminated by signal 9: Killed: 9
2020-02-17 09:31:08.626 CET (94908) DETAIL:  Failed process was running: update table
        set columnname = NULL
        where columnname = 0;

2020-02-17 09:31:08.626 CET (94908) LOG:  terminating any other active server processes
2020-02-17 09:31:08.626 CET (94919) WARNING:  terminating connection because of crash of another server process
2020-02-17 09:31:08.626 CET (94919) DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exi$
2020-02-17 09:31:08.626 CET (94919) HINT:  In a moment you should be able to reconnect to the database and repeat your command.
2020-02-17 09:31:08.626 CET (94914) WARNING:  terminating connection because of crash of another server process
2020-02-17 09:31:08.626 CET (94914) DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exi$
2020-02-17 09:31:08.626 CET (94914) HINT:  In a moment you should be able to reconnect to the database and repeat your command.
2020-02-17 09:31:08.629 CET (94908) LOG:  all server processes terminated; reinitializing
2020-02-17 09:31:08.698 CET (94927) LOG:  database system was interrupted; last known up at 2020-02-17 09:30:57 CET
2020-02-17 09:31:08.901 CET (94927) LOG:  database system was not properly shut down; automatic recovery in progress
2020-02-17 09:31:08.906 CET (94927) LOG:  invalid record length at 17/894C438: wanted 24, got 0
2020-02-17 09:31:08.906 CET (94927) LOG:  redo is not required

java – Parse a graph file, convert it to adjacency lists and CSR, and run connectivity queries using bidirectional BFS

I was assigned a university project in which I have to parse directed graph files from SNAP, convert them to CSR (Compressed Sparse Row) format, and then let the client perform connectivity queries between any two vertices using bidirectional BFS.
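For context, the input is a whitespace-separated SNAP edge list; a small sketch of the format my parser assumes (not a real dataset):

# Directed graph: one "FromNodeId ToNodeId" pair per line
# FromNodeId    ToNodeId
0    1
0    2
1    2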

Here is my implementation:

Main.java

public class Main
{
    public static void main(String[] args)
    {
        InputInterface input = null;
        switch (args.length)
        {
            case 1:
                input = new KeyboardHandler();
                break;
            case 3:
                if(args[1].equalsIgnoreCase("-f"))
                    input = new QueryFileHandler(args[2]);
                else
                {
                    System.out.println("Usage: program  (-f  ) ");
                    System.exit(1);
                }
                break;
            default:
                System.out.println("Usage: program  (-f  ) ");
                System.exit(1);
                break;

        }

        try
        {
            AdjacencyListGraph s = new AdjacencyListGraph(args[0]);
            CSRGraph normalCSR = new CSRGraph(s,GraphType.NORMAL);
            CSRGraph invertedCSR = new CSRGraph(s,GraphType.INVERTED);

            BidirectionalBFS bfs = new BidirectionalBFS(normalCSR,invertedCSR);

            input.processQueries(bfs);
        }
        catch(AdjacencyListGraphNotCompletedException ex)
        {
            System.out.println(ex.getMessage() + "\nTerminating application...");
            System.exit(2);
        }
        catch(CSRGraphNotCompletedException ex)
        {
            System.out.println(ex.getMessage() + "\nTerminating application...");
            System.exit(3);
        }

    }
}

AdjacencyListGraph.java

import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.time.Duration;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.regex.Pattern;

public class AdjacencyListGraph
{
    private final HashMap<Long, LinkedList<Long>> normalGraph = new HashMap<>();
    private final HashMap<Long, LinkedList<Long>> invertedGraph = new HashMap<>();
    private int totalEdges;

    //Constructor
    public AdjacencyListGraph(String path) throws AdjacencyListGraphNotCompletedException
    {
        System.out.println("Parsing file " + path);
        long tStart = System.nanoTime();
        try
        {
            File file = new File(path);
            FileInputStream fis = new FileInputStream(file);
            InputStreamReader isr = new InputStreamReader(fis, StandardCharsets.US_ASCII);
            BufferedReader br = new BufferedReader(isr);

            Pattern pattern = Pattern.compile("\\s");

            int lineCounter = 0;
            String line;
            while ((line = br.readLine()) != null)
            {
                lineCounter++;

                if (line.charAt(0) != '#') //Discarding any comments
                {
                    try
                    {
                        //Tokenize the line
                        final String[] tokens = pattern.split(line);
                        final long source = Long.parseLong(tokens[0]);
                        final long target = Long.parseLong(tokens[1]);

                        //Add the edge to the 2 Maps
                        addTargetToSource(normalGraph,source,target);
                        addTargetToSource(invertedGraph,target,source);

                        this.totalEdges++;
                    }
                    catch(NumberFormatException ex)
                    {
                        System.out.println("Error at line " + lineCounter + " : Could not parse. String was: " + line + ". Skipping line ");
                    }
                }
            }
            br.close();
            long tEnd = System.nanoTime();
            System.out.println("Loaded " + totalEdges + " edges in " + Duration.ofNanos(tEnd - tStart).toMillis() + "ms");
        }
        catch (FileNotFoundException ex)
        {
            throw new AdjacencyListGraphNotCompletedException("Invalid file path given.");
        }
        catch (IOException ex)
        {
            throw new AdjacencyListGraphNotCompletedException("I/0 error occurred.");
        }
    }

    //Getters
    public HashMap<Long, LinkedList<Long>> getNormalGraph()
    {
        return this.normalGraph;
    }
    public HashMap<Long, LinkedList<Long>> getInvertedGraph()
    {
        return invertedGraph;
    }
    public int getTotalEdges()
    {
        return this.totalEdges;
    }

    private void addTargetToSource(HashMap<Long, LinkedList<Long>> map, Long source, Long target)
    {
        map.putIfAbsent(source, new LinkedList<>());
        map.get(source).add(target);
    }
}   

AdjacencyListGraphNotCompletedException.java

public class AdjacencyListGraphNotCompletedException extends Exception
{
    public AdjacencyListGraphNotCompletedException(String message)
    {
        super(message);
    }
}

CSRGraph.java

import java.time.Duration;
import java.util.Arrays;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;

enum GraphType
{
    NORMAL("Normal"),
    INVERTED("Inverted");

    final String name;

    GraphType(String name)
    {
        this.name = name;
    }
}

public class CSRGraph
{
    private HashMap<Long, Integer> idMap;
    private int[] IA;
    private long[] JA;

    public CSRGraph(AdjacencyListGraph graph, GraphType type) throws CSRGraphNotCompletedException
    {
        long tStart = System.nanoTime();

        HashMap<Long, LinkedList<Long>> graphMap;

        switch (type)
        {
            case NORMAL:
                graphMap = graph.getNormalGraph();
                break;
            case INVERTED:
                graphMap = graph.getInvertedGraph();
                break;
            default:
                throw new CSRGraphNotCompletedException("Implementation error.");
        }

        if(graphMap.isEmpty())
            throw new CSRGraphNotCompletedException("Error in the creation of CSR: No edges exist.");

        this.idMap = new HashMap<>(graphMap.size());

        this.IA = new int[graphMap.size() + 1];
        this.IA[0] = 0;

        this.JA = new long[graph.getTotalEdges()];

        int IA_Index = 0;
        int JA_Index = 0;
        //Iterate through every HashMap entry
        for(Map.Entry<Long, LinkedList<Long>> entry : graphMap.entrySet())
        {
            long key = entry.getKey();
            LinkedList<Long> values = entry.getValue();

            for (long target : values)
                this.JA[JA_Index++] = target;

            this.idMap.put(key, IA_Index);
            this.IA[IA_Index + 1] = IA[IA_Index] + values.size();

            IA_Index++;
        }

        long tEnd = System.nanoTime();
        System.out.println(type.name + " graph conversion to CSR took " + Duration.ofNanos(tEnd - tStart).toMillis() + "ms");
    }

    public LinkedList<Long> getNeighbors(long id)
    {
        LinkedList<Long> children = new LinkedList<>();

        if(!idMap.containsKey(id))
            return children;

        int id_Index = idMap.get(id);

        long[] children_arr = Arrays.copyOfRange(JA, IA[id_Index], IA[id_Index + 1]);

        for(long target : children_arr)
            children.add(target);

        return children;
    }

    public boolean vertexExists(long id)
    {
        return idMap.containsKey(id);
    }
}

CSRGraphNotCompletedException.java

public class CSRGraphNotCompletedException extends Exception
{
    public CSRGraphNotCompletedException(String message)
    {
        super(message);
    }
}

BidirectionalBFS.java

import java.time.Duration;
import java.util.HashMap;
import java.util.LinkedList;

public class BidirectionalBFS
{
    private final CSRGraph normalGraph;
    private final CSRGraph invertedGraph;

    public BidirectionalBFS(CSRGraph normalGraph, CSRGraph invertedGraph)
    {
        this.normalGraph = normalGraph;
        this.invertedGraph = invertedGraph;
    }

    public BFSResult connectionQuery(VertexPair query)
    {
        long tStart = System.nanoTime();

        long source = query.getSourceNode();
        long target = query.getTargetNode();


        /* Check that the source and target nodes even exist in the graph;
           skipping doomed BFS queries saves a lot of time. */
        boolean source_exists = normalGraph.vertexExists(source);
        boolean target_exists = invertedGraph.vertexExists(target);

        if(!source_exists || !target_exists)
            return new BFSResult(source,source_exists,target,target_exists,false,null,Duration.ofNanos(System.nanoTime() - tStart));
        else if(source == target)
            return new BFSResult(source, true, target, true,true, 0, Duration.ofNanos(System.nanoTime() - tStart));


        LinkedList<Long> queueNormal = new LinkedList<>();
        LinkedList<Long> queueInverted = new LinkedList<>();

        HashMap<Long, Integer> nodeInfoNormal = new HashMap<>();
        HashMap<Long, Integer> nodeInfoInverted = new HashMap<>();

        nodeInfoNormal.put(source,0);
        nodeInfoInverted.put(target,0);

        queueNormal.add(source);
        queueInverted.add(target);

        while (!queueNormal.isEmpty() || !queueInverted.isEmpty())
        {
            Intersection intersection;
            if ((intersection = graphBFS(normalGraph,queueNormal, nodeInfoNormal, nodeInfoInverted)).intersectionExists() ||
                (intersection = graphBFS(invertedGraph,queueInverted, nodeInfoInverted, nodeInfoNormal)).intersectionExists())
            {
               int normalDistance = nodeInfoNormal.get(intersection.getIntersectNode());
               int invertedDistance = nodeInfoInverted.get(intersection.getIntersectNode());

               int totalDistance = normalDistance + invertedDistance;

               return new BFSResult(source, true, target, true, true, totalDistance, Duration.ofNanos(System.nanoTime() - tStart));
            }
        }
        return new BFSResult(source,true,target,true,false,null,Duration.ofNanos(System.nanoTime() - tStart));
    }

    private Intersection graphBFS(CSRGraph graph,
                                  LinkedList<Long> queue,
                                  HashMap<Long, Integer> nodeInfoThisGraph,
                                  HashMap<Long, Integer> nodeInfoOtherGraph)
    {
        if (!queue.isEmpty())
        {
            long current_node = queue.remove();

            LinkedList<Long> adjacentNodes = graph.getNeighbors(current_node);

            while (!adjacentNodes.isEmpty())
            {
                long adjacent = adjacentNodes.poll();

                if (nodeInfoOtherGraph.containsKey(adjacent))
                {
                    nodeInfoThisGraph.put(adjacent,nodeInfoThisGraph.get(current_node)+1);
                    return new Intersection(true,adjacent);
                }
                else if(!nodeInfoThisGraph.containsKey(adjacent))
                {
                    nodeInfoThisGraph.put(adjacent,nodeInfoThisGraph.get(current_node)+1);
                    queue.add(adjacent);
                }
            }
        }
        return new Intersection(false,null);
    }

    public void areConnected(VertexPair query)
    {
        BFSResult res = connectionQuery(query);
        System.out.println(res.toString());
    }
}

final class BFSResult {
    private final long source_id;
    private final boolean source_exists;
    private final long target_id;
    private final boolean target_exists;
    private final boolean areConnected;
    private final Integer distance;
    private final Duration timeElapsed;

    public BFSResult(long source_id, boolean source_exists, long target_id, boolean target_exists, boolean areConnected, Integer distance, Duration timeElapsed) {
        this.source_id = source_id;
        this.source_exists = source_exists;
        this.target_id = target_id;
        this.target_exists = target_exists;
        this.areConnected = areConnected;
        this.distance = distance;
        this.timeElapsed = timeElapsed;
    }

    public long getSource_id()
    {
        return source_id;
    }
    public boolean isSource_exists()
    {
        return source_exists;
    }
    public long getTarget_id()
    {
        return target_id;
    }
    public boolean isTarget_exists()
    {
        return target_exists;
    }
    public boolean isAreConnected()
    {
        return areConnected;
    }
    public Integer getDistance()
    {
        return distance;
    }
    public Duration getTimeElapsed()
    {
        return timeElapsed;
    }

    @Override
    public String toString() {
        return "BFSResult{" +
                "source_id=" + source_id +
                ", source_exists=" + source_exists +
                ", target_id=" + target_id +
                ", target_exists=" + target_exists +
                ", areConnected=" + areConnected +
                ", distance=" + distance +
                ", timeElapsed=" + timeElapsed.toMillis() + "ms" +
                '}';
    }
}


final class Intersection
{
    private final boolean intersectionExists;
    private final Long intersectNode;

    public Intersection(boolean intersectionExists,Long intersectNode)
    {
        this.intersectionExists = intersectionExists;
        this.intersectNode = intersectNode;
    }

    public boolean intersectionExists()
    {
        return intersectionExists;
    }

    public Long getIntersectNode()
    {
        return intersectNode;
    }
}

InputInterface.java

public interface InputInterface
{
    void processQueries(BidirectionalBFS bfs);
}

KeyboardHandler.java

import java.util.Scanner;

public class KeyboardHandler implements InputInterface
{
    private Scanner scan = new Scanner(System.in);

    public void processQueries(BidirectionalBFS bfs)
    {
        System.out.println("Using keyboard input.");
        while(true)
        {
            System.out.println("Give a node pair (source,target)");
            VertexPair query = new VertexPair(longPositiveZero(),longPositiveZero());
            bfs.areConnected(query);

            System.out.println("Would you like an another query? (Yes/No)");
            String answer = stringEqualsIgnoreCase(new String(){"Yes","No"});

            if(answer.equalsIgnoreCase("No"))
            {
                System.out.println("Thank you for using our software!");
                System.exit(0);
            }

        }
    }

    public long longPositiveZero()
    {
        long value;

        while(true)
        {
            while (!scan.hasNextLong())
            {
                System.out.println("Expected an Integer. Please type again.");
                scan.next();
            }
            value = scan.nextLong();
            if(value >= 0)
                return value;
            else
                System.out.println("Field cannot be negative. Please type again.");
        }

    }

    public String stringEqualsIgnoreCase(String[] args)
    {
        while(true)
        {
            String value = scan.next();
            for(String i : args)
            {
                if(value.equalsIgnoreCase(i))
                    return i;
            }
            System.out.println("Invalid value. Please type again.");
        }
    }
}

QueryFileHandler.java

import java.io.*;
import java.nio.charset.StandardCharsets;
import java.time.Duration;
import java.util.LinkedList;
import java.util.regex.Pattern;

public class QueryFileHandler implements InputInterface
{
    private String path;

    public QueryFileHandler(String path)
    {
        this.path = path;
    }

    public void processQueries(BidirectionalBFS bfs)
    {
        System.out.println("Using file input mode.");

        LinkedList<VertexPair> queries = null;
        try
        {
            queries = parseQueries();
        }
        catch (QueryFileHandlerException ex)
        {
            System.out.println(ex.getMessage() + "\nTerminating application...");
            System.exit(3);
        }

        while(queries.size() != 0)
        {
            VertexPair query = queries.poll();
            bfs.areConnected(query);
        }

        System.out.println("Thank you for using our software!");
    }

    private LinkedList<VertexPair> parseQueries() throws QueryFileHandlerException
    {
        long totalQueries = 0;
        LinkedList queries = new LinkedList<>();

        System.out.println("Parsing query file " + path);
        long tStart = System.nanoTime();
        try
        {
            File file = new File(path);
            FileInputStream fis = new FileInputStream(file);
            InputStreamReader isr = new InputStreamReader(fis, StandardCharsets.US_ASCII);
            BufferedReader br = new BufferedReader(isr);

            Pattern pattern = Pattern.compile("\\s");

            int lineCounter = 0;
            String line;
            while ((line = br.readLine()) != null)
            {
                lineCounter++;

                if (line.charAt(0) != '#') //Discarding any comments
                {
                    try
                    {
                        //Tokenize the line
                        final String[] tokens = pattern.split(line);
                        final long source = Long.parseLong(tokens[0]);
                        final long target = Long.parseLong(tokens[1]);

                        queries.add(new VertexPair(source,target));
                        totalQueries++;

                    }
                    catch(NumberFormatException ex)
                    {
                        System.out.println("Error at line " + lineCounter + " : Could not parse. String was: " + line + ". Skipping line ");
                    }
                }
            }
            br.close();
            long tEnd = System.nanoTime();
            System.out.println("Loaded " + totalQueries + " queries in " + Duration.ofNanos(tEnd - tStart).toMillis() + "ms");

            if(totalQueries == 0)
                throw new QueryFileHandlerException("File does not contain any pair of nodes.");

            return queries;
        }
        catch (FileNotFoundException ex)
        {
            throw new QueryFileHandlerException("Invalid file path given.");
        }
        catch (IOException ex)
        {
            throw new QueryFileHandlerException("I/0 error occurred.");
        }
    }
}

QueryFileHandlerException.java

public class QueryFileHandlerException extends Exception
{
    public QueryFileHandlerException(String message)
    {
        super(message);
    }
}

VertexPair.java

public class VertexPair
{
    private long sourceNode;
    private long targetNode;

    public VertexPair(long sourceNode, long targetNode)
    {
        this.sourceNode = sourceNode;
        this.targetNode = targetNode;
    }

    public long getSourceNode()
    {
        return sourceNode;
    }

    public long getTargetNode()
    {
        return targetNode;
    }
}

What is your opinion on this implementation? Please suggest possible improvements.

Architecture – Queries as a service for other applications (neo4j)

I am new to neo4j. The graph I am designing will be used by third-party applications for some fixed (Cypher) queries – think of the classic "Who is a friend of Alice?" question.

I want these queries to be easy for the other applications – which may be developed in different languages – to call, without having to re-implement each query per client or inserting too many layers between the client application and the neo4j engine (that would mean slow responses for me).

Of course, these questions are fully answerable with Cypher queries; in any case, only resources from the neo4j database itself are used.

So the graph should serve as a service for other applications: how can such functionality best be provided? I am aiming for two goals: a) centralize the query logic; b) minimize the overhead of the layers.

A few ideas came to my mind:

  1. Add a service layer in front of the graph database, written in a single language, that serves the clients' requests
    • The path would be client (e.g. Node.js) -> (REST, maybe?) -> service layer (Python?) -> Cypher (via Bolt) -> neo4j
  2. neo4j UDFs (user-defined functions / procedures)
    • here the path is client (e.g. Node.js) -> function call via Bolt -> neo4j
    • I did not understand whether these behave like native functions
  3. client (e.g. Node.js) -> Cypher over Bolt -> neo4j
    • Of course, this duplicates the query logic in each client and invites errors
  4. other options that I haven't thought of

If this were an SQL database, I would have written stored functions for everything and clients could simply SELECT fx(data) – but since this is neo4j, I would like to hear some advice.

Option 2 seems the best to me – I don't mind that it still lets clients run arbitrary queries; maybe I can block that with permissions.
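To make option 2 concrete, here is a minimal sketch of how I imagine such a user-defined procedure (all names are illustrative; I am assuming the Neo4j 4.x procedure API and a plugin JAR built against the matching org.neo4j:neo4j dependency):

import java.util.Map;
import java.util.stream.Stream;
import org.neo4j.graphdb.Transaction;
import org.neo4j.procedure.Context;
import org.neo4j.procedure.Description;
import org.neo4j.procedure.Mode;
import org.neo4j.procedure.Name;
import org.neo4j.procedure.Procedure;

public class FriendProcedures
{
    //Injected by the server for the duration of the call.
    @Context
    public Transaction tx;

    //The fixed query lives here, once, next to the database.
    @Procedure(name = "app.friendsOf", mode = Mode.READ)
    @Description("Returns the names of all friends of the given person.")
    public Stream<NameResult> friendsOf(@Name("name") String name)
    {
        return tx.execute(
                "MATCH (p:Person {name: $name})-[:FRIEND_OF]-(f:Person) RETURN f.name AS name",
                Map.of("name", name))
            .stream()
            .map(row -> new NameResult((String) row.get("name")));
    }

    //Procedure results are exposed through public fields.
    public static class NameResult
    {
        public String name;

        public NameResult(String name)
        {
            this.name = name;
        }
    }
}

Any client, in any language, would then just run CALL app.friendsOf('Alice') over Bolt.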

If another graph database offers better support for my requirements today, I will of course evaluate that as well.

Incorrect results for join queries on some versions of MariaDB

I have found that some versions of the MariaDB server return incorrect results for some simple queries over certain data.

I have 2 tables:

CREATE TABLE `tab_parent` (
  `id` bigint(12) unsigned NOT NULL AUTO_INCREMENT,
  `f_date` date DEFAULT NULL,
  `f_value` int(11) NOT NULL DEFAULT 0,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB;
CREATE TABLE `tab_child` (
  `id` bigint(12) unsigned NOT NULL AUTO_INCREMENT,
  `parent_id` bigint(12) unsigned DEFAULT NULL,
  PRIMARY KEY (`id`),KEY `parent_id` (`parent_id`),
  CONSTRAINT `fk_2` FOREIGN KEY (`parent_id`) REFERENCES `tab_parent` (`id`) ON DELETE SET NULL
) ENGINE=InnoDB;

And a view that counts how many children each parent has:

CREATE VIEW `tab_parent_view` AS 
  SELECT `tab_parent`.*, 
  COUNT(`tab_child`.`id`) AS `child_count` 
  FROM (`tab_parent` LEFT JOIN `tab_child` ON(`tab_child`.`parent_id` = `tab_parent`.`id`))
GROUP BY `tab_parent`.`id`;

With example data like here: https://dbfiddle.uk/?rdbms=mariadb_10.2&fiddle=0a89cb2a4bec635766313c3c59cd923c, the two queries below do not return the same number of records on MariaDB versions 10.2.27 and 10.4.8. However, the results are correct on versions 10.1.43, 10.3.15, 10.3.16 and 10.4.12.

SELECT * FROM tab_parent_view WHERE f_date>='2020-02-07' AND f_date<'2020-02-08';
SELECT * FROM tab_parent      WHERE f_date>='2020-02-07' AND f_date<'2020-02-08'; 

Only 2 records are returned for the view, but all 4 for the table:

(screenshot of the query results)

Is this a known bug in MariaDB? I would like to know what the cause is and in which versions it is fixed. I have not seen this problem with MySQL.
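As a cross-check, the same logic can be spelled out without the view as a derived table (a sketch; I assume the same SQL mode the view was created under):

SELECT v.*
FROM (
  SELECT `tab_parent`.*,
  COUNT(`tab_child`.`id`) AS `child_count`
  FROM (`tab_parent` LEFT JOIN `tab_child` ON(`tab_child`.`parent_id` = `tab_parent`.`id`))
  GROUP BY `tab_parent`.`id`
) AS v
WHERE v.`f_date` >= '2020-02-07' AND v.`f_date` < '2020-02-08';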

Is it bad practice to use raw SQL when Laravel Eloquent offers query-builder alternatives?

I am new to using PHP frameworks and decided to try Laravel. In my project, I had to write a search function that searches several entity types for a few keywords, UNIONs the results, and returns them. The SQL looks something like this:

SELECT pages.id,pages.updated_at,pages.created_at,page_translations.title,page_translations.description
FROM pages
INNER JOIN page_translations ON page_translations.page_id = pages.id
WHERE pages.deleted_at IS NULL
AND page_translations.deleted_at IS NULL
AND pages.published = 1
AND page_translations.locale = @lang
AND page_translations.active = 1
AND ( page_translations.title LIKE '%@keyword%'
    OR page_translations.description LIKE '%@keyword%'
    OR pages.id IN (
        SELECT blockable_id
        FROM blocks
        WHERE blockable_type = 'App\\Models\\Pages'
        AND content LIKE '%@keyword%'
    )
)
UNION
SELECT articles.id,articles.updated_at,articles.created_at,article_translations.title,article_translations.description
FROM articles
INNER JOIN article_translations ON article_translations.article_id = articles.id
WHERE articles.deleted_at IS NULL
AND article_translations.deleted_at IS NULL
AND articles.published = 1
AND article_translations.locale = @lang
AND article_translations.active = 1
AND ( article_translations.title LIKE '%@keyword%'
    OR article_translations.description LIKE '%@keyword%'
    OR articles.id IN (
        SELECT blockable_id
        FROM blocks
        WHERE blockable_type = 'App\\Models\\Articles'
        AND content LIKE '%@keyword%'
    )
)
ORDER BY updated_at DESC

I translated this query to Laravel's query builder, and it looks something like this:

$pages = DB::table('pages')
  ->select(explode(',','pages.id,pages.updated_at,pages.created_at,page_translations.title,page_translations.description'))
  ->selectSub(function($query){
    $query->selectRaw("'pages'");
  },'content_type')
  ->join('page_translations','page_translations.page_id','=','pages.id')
  ->whereNull('pages.deleted_at')
  ->whereNull('page_translations.deleted_at')
  ->where([
    ['pages.published', '=', 1],
    ['page_translations.locale', '=', $lang],
    ['page_translations.active', '=', 1],
  ])
  ->where(function($query) use ($keywords) {
    $query->where('page_translations.title','LIKE','%'.$keywords.'%')
      ->orWhere('page_translations.description','LIKE','%'.$keywords.'%')
      ->orWhereIn('pages.id',function($subquery) use ($keywords) {
        $subquery->select('blockable_id')
          ->from('blocks')
          ->where('blockable_type','=','App\\Models\\Page')
          ->where(function($blockquery) use ($keywords) {
            $blockquery->where('content','LIKE','%'.$keywords.'%');
          });
      });
  });
$articles = DB::table('articles')
  ->select(explode(',','articles.id,articles.updated_at,articles.created_at,article_translations.title,article_translations.description'))
  ->selectSub(function($query){
    $query->selectRaw("'articles'");
  },'content_type')
  ->join('article_translations','article_translations.article_id','=','articles.id')
  ->whereNull('articles.deleted_at')
  ->whereNull('article_translations.deleted_at')
  ->where([
    ['articles.published', '=', 1],
    ['article_translations.locale', '=', $lang],
    ['article_translations.active', '=', 1],
  ])
  ->where(function($query) use ($keywords) {
    $query->where('article_translations.title','LIKE','%'.$keywords.'%')
      ->orWhere('article_translations.description','LIKE','%'.$keywords.'%')
      ->orWhereIn('articles.id',function($subquery) use ($keywords) {
        $subquery->select('blockable_id')
          ->from('blocks')
          ->where('blockable_type','=','App\\Models\\Article')
          ->where(function($blockquery) use ($keywords) {
            $blockquery->where('content','LIKE','%'.$keywords.'%');
          });
      });
  })
  ->union($pages)
  ->orderBy('content_type','desc')
  ->orderBy('updated_at','desc')
  ->get();

For me, the raw SQL approach is much more readable. And even if my query had a few more subqueries, the SQL version would still be easy for me to read.
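For completeness, the raw variant can still be parameter-bound. A minimal sketch, assuming $sql holds the UNION query above with ? placeholders in place of @lang and the inlined %keyword% literals:

// Bindings are applied in the order the ? placeholders appear in $sql:
// locale, title LIKE, description LIKE, block content LIKE - per SELECT.
$like = '%'.$keywords.'%';
$results = DB::select($sql, [$lang, $like, $like, $like,
                             $lang, $like, $like, $like]);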

So my question: when developing with Laravel in a small team (2 other backend developers), is it recommended to never write raw SQL? Is it Laravel convention to always use the Eloquent / query builder methods unless there are exceptional circumstances, such as bugs in Eloquent, performance issues, etc.?

oauth2 – Can an authentication / authorization mechanism prevent DDoS or spam queries?

There is a public query form in my web app, and the architecture team is asking for authentication with OAuth2, but I don't think this will help.

They refer to this approach: https://docs.microsoft.com/de-de/azure/api-management/api-management-wie-das-backend-mit-aad-protect

I think the best way to protect the form is reCAPTCHA.

So: can we prevent DDoS or spam queries through an authentication / authorization mechanism?

Improve the efficiency of MySQL queries on the first row of a group

I wrote the following query in MySQL to get the top 10 landing pages across all browser sessions.

Reading other similar posts about accessing the first row of a group, the suggested solution seemed to be:

SELECT MIN(`created_at`) AS `created_at`, `session_token`, `url`
FROM `session`
GROUP BY `session_token`;

This led to incorrect results: MIN() only applies to the specified column, and the other selected columns can come from different rows of the group.

I've changed the query to the following, which gives the correct result:

SELECT `b`.`created_at`, `b`.`session_token`, `b`.`url` 
FROM (
    SELECT MIN(`created_at`) AS `created_at`, `session_token`, `url` 
    FROM `session` 
    GROUP BY `session_token`
) a
INNER JOIN `session` b USING (`session_token`, `created_at`);

Building on that, I created the solution below, which gives the correct results. However, it now uses two subqueries.

SELECT `c`.`url`, COUNT(*) AS `hits` 
FROM (
    SELECT `b`.`created_at`, `b`.`session_token`, `b`.`url` 
    FROM (
        SELECT MIN(`created_at`) AS `created_at`, `session_token`, `url` 
        FROM `session` 
        GROUP BY `session_token`
    ) `a`
    INNER JOIN `session` `b` USING (`session_token`, `created_at`)
) AS `c`
GROUP BY `c`.`url`
ORDER BY `hits` DESC
LIMIT 10;

I have only tested it on a small data set, and it doesn't seem particularly fast. Could it be made more efficient?
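One alternative I am considering, assuming MySQL 8.0+ (window functions), avoids the self-join entirely; a sketch:

-- Pick each session's chronologically first row with ROW_NUMBER(),
-- then count landing-page hits over just those rows.
SELECT `url`, COUNT(*) AS `hits`
FROM (
    SELECT `url`,
           ROW_NUMBER() OVER (PARTITION BY `session_token` ORDER BY `created_at`) AS `rn`
    FROM `session`
) AS `first_rows`
WHERE `rn` = 1
GROUP BY `url`
ORDER BY `hits` DESC
LIMIT 10;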

postgresql – Deadlock debugging with Node: log the responsible queries

I have a system with many Node scripts that constantly and automatically read from and/or write to a Postgres database. One of these scripts randomly runs into a deadlock. I would like to debug this, but the problem is that I don't know which other query is causing the deadlock. (I am using pg.)

So my question is:

If I catch an error like this:

{ error: deadlock detected
at Connection.parseE (/data/jenkins/workspace/03-10-Lotti-watcher-lotti-mod/import/node_modules/pg/lib/connection.js:604:13)
at Connection.parseMessage (/data/jenkins/workspace/03-10-Lotti-watcher-lotti-mod/import/node_modules/pg/lib/connection.js:403:19)
at Socket.<anonymous> (/data/jenkins/workspace/03-10-Lotti-watcher-lotti-mod/import/node_modules/pg/lib/connection.js:123:22)
at Socket.emit (events.js:197:13)
at addChunk (_stream_readable.js:288:12)
at readableAddChunk (_stream_readable.js:269:11)
at Socket.Readable.push (_stream_readable.js:224:10)
at TCP.onStreamRead [as onread] (internal/stream_base_commons.js:150:17)
  name: 'error',
  length: 336,
  severity: 'ERROR',
  code: '40P01',
  detail:
   'Process 2376 waits for ShareLock on transaction 55837412; blocked by process 22585.\nProcess 22585 waits for ShareLock on transaction 55837411; blocked by process 2376.',
  hint: 'See server log for query details.',
  position: undefined,
  internalPosition: undefined,
  internalQuery: undefined,
  where: 'while locking tuple (226684,50) in relation "lotti"',
  schema: undefined,
  table: undefined,
  column: undefined,
  dataType: undefined,
  constraint: undefined,
  file: 'deadlock.c',
  line: '1146',
  routine: 'DeadLockReport' }

can I get the query and/or the user of the other process, so that I can isolate and identify the problem? Or anything else that would help.
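What I can already do while a script is still blocked (before the deadlock detector cancels one side) is inspect the competing sessions using the PIDs from the error detail; a sketch:

-- Run from a separate connection while the lock wait is ongoing.
-- Setting log_lock_waits = on additionally logs lock waits server-side.
SELECT pid, usename, state, query
FROM pg_stat_activity
WHERE pid IN (2376, 22585);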

mysql – Need help with indexes for 2 slow WordPress queries

Here are my 2 slow queries that I want to improve.

SELECT object_id, term_taxonomy_id
FROM wp_term_relationships
INNER JOIN wp_posts
ON object_id = ID
WHERE term_taxonomy_id IN (525627,516360,525519,535782,517555,525186,517572,549564,1,517754,541497,541472,525476,549563,517633,524859,702393,541604,543483,524646,525001,550518,541516,525244,549565,517376,535783,524642,25,533395,533537,525475,2,705306,524684,525065,939122,541603,525523,533491,541590,702713,550724,525243,533634,525122,541498,549586,546982,21,524643,541478,525435,535784,541471,516611,535781,541638,516142,533416,546984,524999,533453,524682,704994,516579,516189,524644,517378,525185,541508,517634,705305,524858,517632,541637,517699,525064,517573,772367,516609,517375,525474,507436,524918,517635,541929,22,54,53,705119,524685,524683,516577,536343,191228,524915,524917,516298,541573,546983,515904,541601,56,517377,524645,517707,515905,516297,515903,517708,533635,516296,516578,517750,517554,516016,525123,533538,541625,525187,705307,55,191226,19,24,516299,541466,524916,772366,555654,516612,541503,191227,550302,991853,920642,191229,535829,525582,525524,524919,524720,525841,517636,541504,525184,525520,541562,525433,541563,516610)
AND post_type IN ('post')
AND post_status = 'publish'

(slow-query report for this statement: caller _pad_term_counts(), component Theme, 259514 rows, 2.0440 s)

SELECT wp_posts.ID
FROM wp_posts
LEFT JOIN wp_term_relationships
ON (wp_posts.ID = wp_term_relationships.object_id)
WHERE 1=1
AND wp_posts.ID NOT IN (391534)
AND ( wp_term_relationships.term_taxonomy_id IN (2,516296,517375,517376,517377,517378,517554,517555,517572,517573,517632,517633,517634,517635,517636,517699,517707,517708,517750,517754,524858,524859,524915,524916,524917,524918,524919,524999,525001,525064,525065,525185,525186,525187,525519,525520,525523,525524,525582,525841,533395,533416,533453,535782,535783,535784,535829,536343,549563,549564,549565,549586,550302,550518,550724,555654,702393,702713,704994,705119,705305,705306,705307,772366,772367,920642,939122,991853) )
AND wp_posts.post_type = 'post'
AND ((wp_posts.post_status = 'publish'))
GROUP BY wp_posts.ID
ORDER BY wp_posts.post_date DESC
LIMIT 0, 6

So I thought that adding indexes on wp_term_relationships (object_id, term_taxonomy_id) and wp_posts (post_type, post_status, ID, post_date) could improve this, but how exactly?

Do you have any idea how to go about it?
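A sketch of the composite indexes I have in mind (index names are illustrative; a stock install already has the wp_term_relationships primary key on (object_id, term_taxonomy_id)):

-- Note: recent WordPress versions already ship a similar
-- type_status_date index on wp_posts, so check SHOW INDEX first.
ALTER TABLE wp_posts
  ADD INDEX idx_type_status_date_id (post_type, post_status, post_date, ID);
ALTER TABLE wp_term_relationships
  ADD INDEX idx_ttid_object (term_taxonomy_id, object_id);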