Data Compression – How to compress a table, knowing which queries will be run on it?

There is a table with four fields: Location, Category, Type, Price. There are hundreds of locations and categories, but only four types and six hundred prices. A single query is used against this table: find the type and price for a specified location and category.

The original table occupies about ten megabytes, and I want to compress the data. I could represent location and category as a single number, where location takes the first i bits and category the last f bits, and a dictionary of the distinct (type, price) pairs would not take up much space. So the original problem could be restated as a set of equations of the form f(x) = N, where x is the packed (location, category) key and N identifies its (type, price) pair. But I am not sure that a function found with interpolation techniques would be smaller than the original data.
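
For illustration, here is a minimal sketch of the packed-key idea in JavaScript (the bit width and all names are hypothetical, chosen only to match the description above): the key packs location and category into one integer, and a small dictionary maps each distinct (type, price) pair to an id.

// Pack (location, category) into a single integer key.
// Assumption: 10 bits are enough for the category id (up to 1024 categories).
const CATEGORY_BITS = 10;
const packKey = (location, category) => (location << CATEGORY_BITS) | category;

// Dictionary of distinct (type, price) pairs: with only four types and
// six hundred prices there are at most 2400 distinct pairs, so the pair
// id stays small no matter how many rows the table has.
const pairs = [];            // id -> { type, price }
const pairIds = new Map();   // "type,price" -> id
const table = new Map();     // packed key -> pair id

function insertRow(location, category, type, price) {
  const k = type + ',' + price;
  if (!pairIds.has(k)) {
    pairIds.set(k, pairs.length);
    pairs.push({ type, price });
  }
  table.set(packKey(location, category), pairIds.get(k));
}

// The single supported query: (location, category) -> (type, price).
const lookup = (location, category) =>
  pairs[table.get(packKey(location, category))];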

My question is: what are the known ways to solve these kinds of problems?
Is interpolation a step in the right direction?

I also thought the original data might be rendered as an image, but I do not understand image compression techniques well enough to find one that can retrieve the value of a single pixel without decompressing the entire image.

It is also necessary to find the simplest solution, one that I could port to four different programming languages in a reasonable amount of time.

Need help with SharePoint REST API search queries

Using the SharePoint REST API with a query like https://{tenant}.sharepoint.com/api/v2.0/sites/{site-id}/drives/root/search(q='{text}'), I can only get results from one location/drive at a time. There is also no way to filter directly by file type, modification date, or any other property.

Using the SharePoint built-in search endpoint https://{tenant}.sharepoint.com/_api/search/query?querytext='sharepoint', I get the response below:

"{" error_description ":" Invalid issuer or invalid signature. "}"

and the error message says: Method failed: (/_api/search/query) with code 401 – invalid username/password combination, and the status code is 401.

My OAuth credentials are fine and work for the individual site/drive APIs, but not for the global search.

Note: after reviewing the documentation, I found that I need to add the full Sites.FullControl.All permission, but that did not help either.
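
For reference, a minimal sketch of the failing call in JavaScript (fetch; the tenant name is a placeholder and token acquisition is not shown). One common cause of a 401 "Invalid issuer or invalid signature" is a bearer token issued for a different audience than https://{tenant}.sharepoint.com, so it may be worth checking which resource the token was requested for:

// Hypothetical tenant; accessToken must be acquired separately and be
// issued for the SharePoint resource itself, not for another audience.
async function searchTenant(accessToken, text) {
  const url = "https://contoso.sharepoint.com/_api/search/query" +
              "?querytext='" + encodeURIComponent(text) + "'";
  const res = await fetch(url, {
    headers: {
      Authorization: 'Bearer ' + accessToken,
      Accept: 'application/json;odata=verbose',
    },
  });
  console.log(res.status);        // 401 here reproduces the error above
  console.log(await res.text());
}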

postgresql – array containment queries in postgres

A question about GIN indexes on arrays.

I have 2 million rows in a work table, and I have to find the work that a user can do based on their skills. A user can have one or more skills.

I started with the standard RDBMS approach, but query performance was bad. While searching for other options I found that Postgres supports array containment queries, and that arrays can also be indexed (the containment semantics is sketched below).
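
For clarity, the <@ operator used in the queries below asks whether every skill required by a work item appears in the given user's skill set; an illustrative JavaScript sketch of that semantics:

// work.skills <@ user.skills : every required skill is in the user's set.
const canDo = (workSkills, userSkills) => {
  const owned = new Set(userSkills);
  return workSkills.every((skill) => owned.has(skill));
};

canDo([213, 311], [213, 311, 374, 554]); // true
canDo([213, 999], [213, 311, 374, 554]); // false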

Table:

CREATE TABLE
    work
    (
        work_id TEXT DEFAULT nextval('work_id_seq'::regclass) NOT NULL,
        priority_score BIGINT NOT NULL,
        work_data JSONB,
        created_date TIMESTAMP(6) WITHOUT TIME ZONE NOT NULL,
        current_status CHARACTER VARYING,
        PRIMARY KEY (work_id)
    );

Index:

CREATE INDEX test_gin_1 ON work USING gin (jsonarray2intarray(work_data ->> 'skills'));

Function (it converts the JSON array text, e.g. '[213, 311]', into a native integer array such as {213,311}):

CREATE OR REPLACE FUNCTION jsonarray2intarray(text) RETURNS integer[]
  IMMUTABLE
AS $body$
SELECT translate($1, '[]', '{}')::integer[]
$body$ LANGUAGE sql;

Sample data:

282941  1564  {"skills": [213, 311, 374, 554]}

The response is slow with the following query; there is only one record with {254, 336, 391, 485} as its skill array:

with t as (
SELECT   work_id,
         priority_score,
         current_status,
         work_data
FROM     work
WHERE    jsonarray2intarray(work.work_data ->> 'skills') <@ '{254,336,391,485}'
AND      work.current_status = 'ASSIGNABLE'
ORDER BY priority_score DESC, created_date )
select * from t LIMIT 1 FOR UPDATE SKIP LOCKED;

Limit  (cost=45095.54..45095.56 rows=1 width=296) (actual time=3776.169..3776.170 rows=1 loops=1)
  Output: t.work_id, t.priority_score, t.current_status, t.work_data
  CTE t
    ->  Sort  (cost=45059.29..45095.54 rows=14503 width=325) (actual time=3776.166..3776.166 rows=1 loops=1)
          Output: work.work_id, work.priority_score, work.current_status, work.work_data
          Sort Key: work.priority_score DESC, work.created_date
          Sort Method: quicksort  Memory: 25kB
          ->  Bitmap Heap Scan on work  (cost=524.44..41872.83 rows=14503 width=325) (actual time=37.718..3776.159 rows=1 loops=1)
                Output: work.work_id, work.priority_score, work.current_status, work.work_data
                Recheck Cond: (jsonarray2intarray((work.work_data ->> 'skills'::text)) <@ '{254,336,391,485}'::integer[])
                Rows Removed by Index Recheck: 1072296
                Filter: ((work.current_status)::text = 'ASSIGNABLE'::text)
                Heap Blocks: exact=41243 lossy=26451
                ->  Bitmap Index Scan on test_gin_1  (cost=0.00..520.81 rows=14509 width=0) (actual time=30.699..30.699 rows=154888 loops=1)
                      Index Cond: (jsonarray2intarray((work.work_data ->> 'skills'::text)) <@ '{254,336,391,485}'::integer[])
  ->  CTE Scan on t  (cost=0.00..290.06 rows=14503 width=296) (actual time=3776.168..3776.168 rows=1 loops=1)
        Output: t.work_id, t.priority_score, t.current_status, t.work_data
Planning time: 0.161 ms
Execution time: 3776.202 ms

The same query with different inputs is fast; there are approximately 26,000 records with the skills 101 and 103:

with t as (
SELECT   work_id,
         priority_score,
         current_status,
         work_data
FROM     work
WHERE    jsonarray2intarray(work.work_data ->> 'skills') <@ '{101,103}'
AND      work.current_status = 'ASSIGNABLE'
ORDER BY priority_score DESC, created_date )
select * from t LIMIT 1 FOR UPDATE SKIP LOCKED;

Limit  (cost=45076.55..45076.57 rows=1 width=296) (actual time=116.185..116.186 rows=1 loops=1)
  Output: t.work_id, t.priority_score, t.current_status, t.work_data
  CTE t
    ->  Sort  (cost=45040.26..45076.55 rows=14513 width=325) (actual time=116.182..116.182 rows=1 loops=1)
          Output: work.work_id, work.priority_score, work.current_status, work.work_data
          Sort Key: work.priority_score DESC, work.created_date
          Sort Method: external merge  Disk: 8088kB
          ->  Bitmap Heap Scan on work  (cost=476.52..41853.05 rows=14513 width=325) (actual time=9.223..94.591 rows=26301 loops=1)
                Output: work.work_id, work.priority_score, work.current_status, work.work_data
                Recheck Cond: (jsonarray2intarray((work.work_data ->> 'skills'::text)) <@ '{101,103}'::integer[])
                Filter: ((work.current_status)::text = 'ASSIGNABLE'::text)
                Rows Removed by Filter: 1357
                Heap Blocks: exact=2317
                ->  Bitmap Index Scan on test_gin_1  (cost=0.00..472.89 rows=14519 width=0) (actual time=4.638..4.638 rows=39871 loops=1)
                      Index Cond: (jsonarray2intarray((work.work_data ->> 'skills'::text)) <@ '{101,103}'::integer[])
  ->  CTE Scan on t  (cost=0.00..290.26 rows=14513 width=296) (actual time=116.184..116.184 rows=1 loops=1)
        Output: t.work_id, t.priority_score, t.current_status, t.work_data
Planning time: 0.160 ms
Execution time: 117.278 ms

I am looking for suggestions to get consistent response times.

NOTE:
For comparison, an approach that is not postgres-specific; its query takes about 40 to 50 seconds, which is very bad. I used two tables:

CREATE TABLE public.work
(
    id integer NOT NULL DEFAULT nextval('work_id_seq'::regclass),
    priority_score BIGINT NOT NULL,
    work_data JSONB,
    created_date TIMESTAMP(6) WITHOUT TIME ZONE NOT NULL,
    current_status CHARACTER VARYING,
    PRIMARY KEY (id)
);

CREATE TABLE public.work_data
(
    skill_id bigint,
    work_id bigint
);

Query (the HAVING clause counts a work item's skills that fall outside the user's list; a sum of zero means the user has every required skill):

select work.id
from work
    inner join work_data on (work.id = work_data.work_id)
group by work.id
having sum(case when work_data.skill_id in (2269,3805,828,9127) then 0 else 1 end) = 0

magento 2.1 – Problem with Magento2 queries

I see queries for each attribute at the bottom of my pages; there were more than 11,000 queries on my Magento 2 pages. Can someone help me?

SELECT eav_attribute.*
FROM eav_attribute
WHERE (eav_attribute.attribute_id = '70');

SELECT eav_entity_type.additional_attribute_table
FROM eav_entity_type
WHERE (entity_type_id = :entity_type_id);

SELECT catalog_eav_attribute.*
FROM catalog_eav_attribute
WHERE (attribute_id = :attribute_id);

Docker – DNS appending the local domain to random queries

    Aug  9 23:14:45 dnsmasq(11657): reply registry-1.docker.io is 54.88.231.116
    Aug  9 23:14:45 dnsmasq(11657): reply registry-1.docker.io is 100.24.246.89
    Aug  9 23:14:45 dnsmasq(11657): reply registry-1.docker.io is 34.197.189.129
    Aug  9 23:14:45 dnsmasq(11657): reply registry-1.docker.io is 3.221.133.86
    Aug  9 23:14:45 dnsmasq(11657): reply registry-1.docker.io is 3.224.11.4
    Aug  9 23:14:45 dnsmasq(11657): reply registry-1.docker.io is 54.210.105.17
    Aug  9 23:14:50 dnsmasq(11657): query(A) gitlab.mydomain.com.home from 192.168.1.20
    Aug  9 23:14:50 dnsmasq(11657): forwarded gitlab.mydomain.com.home to 192.168.1.2
    Aug  9 23:14:50 dnsmasq(11657): reply gitlab.mydomain.com.home is NXDOMAIN
    Aug  9 23:14:50 dnsmasq(11657): query(AAAA) gitlab.mydomain.com.home from 192.168.1.20
    Aug  9 23:14:50 dnsmasq(11657): forwarded gitlab.mydomain.com.home to 192.168.1.2
    Aug  9 23:14:50 dnsmasq(11657): reply gitlab.mydomain.com.home is NODATA-IPv6
    Aug  9 23:14:51 dnsmasq(11657): query(A) registry.mydomain.com.home from 192.168.1.20
    Aug  9 23:14:51 dnsmasq(11657): forwarded registry.mydomain.com.home to 192.168.1.2
    Aug  9 23:14:51 dnsmasq(11657): query(AAAA) registry.mydomain.com.home from 192.168.1.20
    Aug  9 23:14:51 dnsmasq(11657): forwarded registry.mydomain.com.home to 192.168.1.2
    Aug  9 23:14:51 dnsmasq(11657): reply registry.mydomain.com.home is NXDOMAIN
    Aug  9 23:14:51 dnsmasq(11657): reply registry.mydomain.com.home is NODATA-IPv6
    Aug  9 23:14:51 dnsmasq(11657): query(AAAA) registry.mydomain.com.home from 192.168.1.21
    Aug  9 23:14:51 dnsmasq(11657): cached registry.mydomain.com.home is NODATA-IPv6
    Aug  9 23:14:51 dnsmasq(11657): query(A) gitlab.mydomain.com.home from 192.168.1.21
    Aug  9 23:14:51 dnsmasq(11657): cached gitlab.mydomain.com.home is NXDOMAIN
    Aug  9 23:14:52 dnsmasq(11657): query(A) registry.mydomain.com.home from 192.168.1.21
    Aug  9 23:14:52 dnsmasq(11657): cached registry.mydomain.com.home is NXDOMAIN
    Aug  9 23:14:52 dnsmasq(11657): query(A) registry-1.docker.io.home from 192.168.1.21
    Aug  9 23:14:52 dnsmasq(11657): forwarded registry-1.docker.io.home to 192.168.1.2
    Aug  9 23:14:52 dnsmasq(11657): query(AAAA) registry-1.docker.io.home from 192.168.1.20
    Aug  9 23:14:52 dnsmasq(11657): forwarded registry-1.docker.io.home to 192.168.1.2
    Aug  9 23:14:52 dnsmasq(11657): reply registry-1.docker.io.home is NXDOMAIN
    Aug  9 23:14:52 dnsmasq(11657): reply registry-1.docker.io.home is NODATA-IPv6

These requests come from a Kubernetes pod. Inside the pod, the resolver configuration is:

bash-4.4$ cat /etc/resolv.conf
nameserver 10.96.0.10
search gitlab-managed-apps.svc.cluster.local svc.cluster.local cluster.local home
options ndots:5
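
For context, a sketch of how a glibc-style resolver expands a name against this configuration (illustrative JavaScript; the real resolver logic has more cases): with ndots:5, any name containing fewer than five dots is first tried with each search suffix appended, which is where extra lookups ending in .home can originate.

// Names a resolver with `search <list>` and `options ndots:<n>` will try.
function candidates(name, searchList, ndots) {
  const dots = (name.match(/\./g) || []).length;
  const suffixed = searchList.map((s) => name + '.' + s);
  // Fewer dots than ndots: try the search suffixes first, then the name as-is.
  return dots < ndots ? [...suffixed, name] : [name, ...suffixed];
}

candidates('gitlab.mydomain.com',
           ['gitlab-managed-apps.svc.cluster.local', 'svc.cluster.local',
            'cluster.local', 'home'],
           5);
// -> gitlab.mydomain.com.gitlab-managed-apps.svc.cluster.local, ...,
//    gitlab.mydomain.com.home, and finally gitlab.mydomain.com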

When I do an nslookup, it seems to work:

bash-4.4$ nslookup registry.mydomain.com
nslookup: can't resolve '(null)': Name does not resolve

Name:      registry.mydomain.com
Address 1: 104.18.61.234
Address 2: 104.18.60.234
Address 3: 2606:4700:30::6812:3dea
Address 4: 2606:4700:30::6812:3cea
bash-4.4$

but .home still gets appended:

Aug  9 23:44:13 dnsmasq(11657): query(AAAA) gitlab.mydomain.com.home from 192.168.1.20
Aug  9 23:44:13 dnsmasq(11657): cached gitlab.mydomain.com.home is NXDOMAIN
Aug  9 23:44:13 dnsmasq(11657): query(A) gitlab.mydomain.com.home from 192.168.1.21
Aug  9 23:44:13 dnsmasq(11657): cached gitlab.mydomain.com.home is NODATA-IPv4

The resolv.conf of the Kubernetes host is:

root@node-a:/etc$ cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
#     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
# 127.0.0.53 is the systemd-resolved stub resolver.
# run "systemd-resolve --status" to see details about the actual nameservers.

nameserver 127.0.0.53
search home

I use CoreDNS with the following configuration:

apiVersion: v1
data:
  Corefile: |
    mydomain.com {
        log
        forward . 1.1.1.1 1.0.0.1 9.9.9.9
        reload
    }
    .:53 {
        log
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           upstream
           fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        #proxy . /etc/resolv.conf
        forward . 192.168.1.2:53 {
            except mydomain.com
        }
        cache 30
        loop
        reload
    }

I have tried editing the configs to point to 1.1.1.1, with no luck. For some reason, .home is appended to the end of the domain name somewhere:

tail -f pihole.log |grep alpine
Aug 10 00:03:59 dnsmasq(11657): query(AAAA) dl-cdn.alpinelinux.org.home from 192.168.1.20
Aug 10 00:03:59 dnsmasq(11657): cached dl-cdn.alpinelinux.org.home is NXDOMAIN
Aug 10 00:03:59 dnsmasq(11657): query(A) dl-cdn.alpinelinux.org.home from 192.168.1.20
Aug 10 00:03:59 dnsmasq(11657): cached dl-cdn.alpinelinux.org.home is NODATA-IPv4
Aug 10 00:03:59 dnsmasq(11657): query(A) dl-cdn.alpinelinux.org.home from 192.168.1.21
Aug 10 00:03:59 dnsmasq(11657): cached dl-cdn.alpinelinux.org.home is NODATA-IPv4
Aug 10 00:03:59 dnsmasq(11657): query(AAAA) dl-cdn.alpinelinux.org.home from 192.168.1.21
Aug 10 00:03:59 dnsmasq(11657): cached dl-cdn.alpinelinux.org.home is NXDOMAIN

My DNS path is as follows:

Pod -> CoreDNS -> Pihole (for ads) -> Bind9 -> Cloudflared 1.1.1.1/1.0.0.1

Given that .home is appended at the Pi-hole (and cannot be resolved), I do not think the problem is bind9 or cloudflared, but rather the pod configuration, CoreDNS, or the Pi-hole. Where does it come from?

I've somewhat circumvented the problem (for now) by changing the GitLab runner deployment to use the following DNS properties (with ndots lowered to 2, names with at least two dots are tried as absolute names first, so the search suffixes are no longer appended):

dnsConfig:
  nameservers:
    - 1.1.1.1
    - 9.9.9.9
  options:
    - name: ndots
      value: "2"
    - name: edns0
  dnsPolicy: None

Thanks a lot!

unity – How do I access specific child indexes (directly, without tags or search queries)?

I have a game object built from a prefab with a very specific hierarchy. It is a conveyor belt. I intend to arrange a series of them consistently and to animate them all at once using a script. At the moment I am trying to find the normalized direction vector between two children, TargetStart and TargetEnd (see pictures below):

(image: TargetStart)
(image: TargetEnd)

You can see that both objects are children of the first child of the prefab instance. Since all these instances have the same structure, I would like to compute the heading of the conveyor by accessing the positions of these children via their indexes relative to the parent (which sounds like it should be simpler and more efficient than searching by name or tag). I expect a structure like this (pseudocode):

// heading = TargetEnd's position - TargetStart's position
Vector3 heading = gameObject.transform.GetChild(0).GetChild(1).position - gameObject.transform.GetChild(0).GetChild(0).position;
Vector3 direction = heading / heading.magnitude;

How does this work? I want to know so that I can access every child I need on any object through the child's index.

EDIT: Welp, I feel silly now. I kept stumbling past it, but the answer is right in the documentation example. This is how I do it:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class ConveyorBelt : MonoBehaviour
{
    public bool isMoving = true;
    public float speed = 0.1f;
    private float offset = 0f;
    private Vector3 targetStart;
    private Vector3 targetEnd;
    private Vector3 heading;
    private Vector3 direction;
    // Start is called before the first frame update
    void Start()
    {
        targetStart = gameObject.transform.GetChild(0).GetChild(0).position;
        targetEnd = gameObject.transform.GetChild(0).GetChild(1).position;
        heading = targetEnd - targetStart;
        direction = heading / heading.magnitude;
    }
    .
    .
    .
}

Database Theory – How to decide whether two conjunctive queries cannot have a common result

Consider the following queries:

Q1: Age > 18 & Age < 24 & gender = 'male'

Q2: Age > 25 & gender = 'male' & major = 'CS'

Q3: Age = 20 & major = 'CS'

Of course, for every database instance $D$ we can decide that $Q_1(D) \cap Q_2(D) = \emptyset$ and $Q_2(D) \cap Q_3(D) = \emptyset$. However, we cannot decide in advance that $Q_1(D) \cap Q_3(D)$ is empty.
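
Concretely, a sketch of the underlying check: the intersection is guaranteed to be empty for every instance exactly when the conjunction of the two selection conditions is unsatisfiable. Here

$$Q_1 \wedge Q_2 \;\Rightarrow\; (\mathit{Age} < 24) \wedge (\mathit{Age} > 25) \;\equiv\; \mathit{false},$$

whereas $Q_1 \wedge Q_3$ is satisfiable (for example by a tuple with $\mathit{Age} = 20$, gender 'male', major 'CS'), so emptiness cannot be guaranteed there.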

I am aware of query containment: the result of one query is always a subset of the result of another query. I am also aware of query equivalence: the results of two queries must be exactly the same for every database instance.

However, I am looking for an algorithm or the terminology for my case, in which, before evaluating the two queries, I can decide that the intersection of their results is always empty. Indeed, due to contradictory selection conditions, they cannot have a common result.

DNS Domain – Securing DNS by Blocking Queries AND Responses [Dnscrypt questions]

If you visit facebook.com, it queries s.update.fbsbx.com. s.update.fbsbx.com is a CNAME for s.agentanalytics.com. Currently, the only way to block s.agentanalytics.com is to block s.update.fbsbx.com via the hosts file. Windows DNS clients, as well as encrypting resolvers such as DNSCrypt, cannot block the parent domains of CNAME replies.

13:19:30 dnsmasq[1211]: query[A] s.update.fbsbx.com from 192.168.50.142
13:19:30 dnsmasq[1211]: forwarded s.update.fbsbx.com to 127.0.0.1
13:19:30 dnsmasq[1211]: reply s.update.fbsbx.com is 
13:19:30 dnsmasq[1211]: reply s.agentanalytics.com is
13:19:30 dnsmasq[1211]: reply agentanalytics.com is 52.20.233.11
13:19:30 dnsmasq[1211]: reply agentanalytics.com is 35.170.177.215
13:19:30 dnsmasq[1211]: reply agentanalytics.com is 34.235.44.232
13:19:30 dnsmasq[1211]: reply agentanalytics.com is 34.194.252.192
13:19:30 dnsmasq[1211]: reply agentanalytics.com is 18.206.130.128
13:19:30 dnsmasq[1211]: reply agentanalytics.com is 52.202.107.183
13:19:30 dnsmasq[1211]: reply agentanalytics.com is 18.209.97.44
13:19:30 dnsmasq[1211]: reply agentanalytics.com is 35.173.82.169
13:19:30 dnsmasq[1211]: reply agentanalytics.com is 23.22.178.204
13:19:30 dnsmasq[1211]: reply agentanalytics.com is 18.206.103.1
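
A small sketch (Node.js, built-in dns module only) of how such a chain can be walked to expose the parent domain behind an alias; the hostname is the one from the log above:

const { promises: dns } = require('node:dns');

// Follow the CNAME chain of a name, collecting every target on the way.
// The last entry is the name that actually owns the A records.
async function cnameChain(name) {
  const chain = [name];
  for (;;) {
    try {
      const [target] = await dns.resolveCname(chain[chain.length - 1]);
      chain.push(target);
    } catch {
      return chain; // no further CNAME record
    }
  }
}

cnameChain('s.update.fbsbx.com').then(console.log);
// e.g. [ 's.update.fbsbx.com', 's.agentanalytics.com' ]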

Sometimes several CNAMEs reveal their actual hidden associations in the answers. Example:

13:55:28 dnsmasq[26607]: query[A] su.itunes.apple.com from 192.168.50.96
13:55:28 dnsmasq[26607]: forwarded su.itunes.apple.com to 127.0.0.1

13:55:29 dnsmasq[26607]: reply su.itunes.apple.com is 
13:55:29 dnsmasq[26607]: reply su-cdn.itunes-apple.com.akadns.net is 
13:55:29 dnsmasq[26607]: reply su-applak.itunes-apple.com.akadns.net is 
13:55:29 dnsmasq[26607]: reply su.itunes.apple.com.edgekey.net is 
13:55:29 dnsmasq[26607]: reply e673.dsce9.akamaiedge.net is 184.50.162.217

13:55:29 dnsmasq[26607]: query[A] xp.apple.com from 192.168.50.96
13:55:29 dnsmasq[26607]: forwarded xp.apple.com to 127.0.0.1
13:55:29 dnsmasq[26607]: reply xp.apple.com is 
13:55:29 dnsmasq[26607]: reply xp.itunes-apple.com.akadns.net is 
13:55:29 dnsmasq[26607]: reply xp.apple.com.edgekey.net is 
13:55:29 dnsmasq[26607]: reply e17437.dscb.akamaiedge.net is 23.214.192.96

For example, DNSCrypt allows blocking outbound domain requests with wildcards [analytics], but it does not automatically block the incoming responses or the caching of s.agentanalytics.com IPs. If you block s.agentanalytics.com in the Windows hosts file or in DNSCrypt, you can still reach it through s.update.fbsbx.com.

I showed the DNSCrypt developer how this resolving domain bypassed its wildcard protection, and he told me: "These entries are not in the parent zone and will be ignored by any stub resolver." And here he goes into detail.

He also stated: "I think you were confused by what dnsmasq logs, which is admittedly very confusing. There is only one A question on update.fbsbx.com and one matching CNAME response to update.fbsbx.com; the rest is ignored by resolvers because it is not in the parent zone."

However, both robtex and dnsmasq show the chain ending in an analytics domain at agentanalytics.com: https://www.robtex.com/dns-lookup/s.update.fbsbx.com

If these IP addresses were ignored by the stub resolver [which includes the Windows DNS client], as previously suggested, they would not be cached in the first place. I am also curious whether some of these IPs could be used by a state party / MITM, as suggested here. When browsing Facebook I watched update.fbsbx.com in uMatrix; which IP would be assigned to this domain, if not the IP addresses of agentanalytics.com... well, of course, it is those of agentanalytics.com.

s.update.fbsbx.com is simply what DNSCrypt and dnsmasq etc. see instead of s.agentanalytics.com, but it points to IPs associated with s.agentanalytics.com.

If DNSCrypt's wildcard blocks denied caching of these CNAME IP responses and blocked their parent domains, networks could be better protected.

The question is simple: am I wrong in my claims, or am I missing something?

Here is another example of 21 queries that occur the instant an iPhone connects to WiFi: the answers contain 72 domains and IPs that are not in the parent domain, all of which he says are ignored.

Here https://pastebin.com/GYSEw1dY

javascript – Horizontal bar chart for time series to visualize queries

I need to create a horizontal bar chart for time series that looks something like this. Each bar represents a query that runs in the cluster; the length of the bar indicates its duration. Each query has a start and an end time. Multiple queries can share the same start time, the same end time, or both, and queries may execute in parallel.

(image: example chart)

I use the Highcharts/Highstock chart library and wonder what kind of chart I have to use to achieve what I need. Please advise.
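
One option that matches this layout is the Highcharts x-range series type (it needs the extra modules/xrange module loaded). A minimal sketch, with made-up container id, timestamps, and query names:

// Each query is one x-range point: x/x2 are its start/end timestamps,
// y is the row (category) the bar is drawn on.
Highcharts.chart('container', {
  chart: { type: 'xrange' },
  title: { text: 'Query durations' },
  xAxis: { type: 'datetime' },
  yAxis: { categories: ['query-1', 'query-2', 'query-3'],
           title: { text: null }, reversed: true },
  series: [{
    name: 'Queries',
    data: [
      { x: Date.UTC(2019, 7, 9, 10, 0, 0), x2: Date.UTC(2019, 7, 9, 10, 0, 5),  y: 0 },
      { x: Date.UTC(2019, 7, 9, 10, 0, 0), x2: Date.UTC(2019, 7, 9, 10, 0, 12), y: 1 },
      { x: Date.UTC(2019, 7, 9, 10, 0, 7), x2: Date.UTC(2019, 7, 9, 10, 0, 12), y: 2 }
    ]
  }]
});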

SEO – Will a Sitemap ensure that pages serviced by AJAX queries are crawled?

I have a website, started just two weeks ago, where I publish articles. I'd like to keep the pages as clean as possible and load additional content (links to other articles) through AJAX requests on user action (for now, clicks). I have read up on this a bit, but most of the articles and blog posts on the topic are outdated. I understand that Google used to support crawling AJAX content, but no longer does. Some posts also recommend serving such content through pagination instead. I have also read about sitemaps and know that they give search engine crawlers an indication of which pages to crawl.

However, will crawlers notice inconsistencies because these links are out of reach and can only be loaded by clicking the "Load more" button? Does a sitemap ensure that crawlers visit those URLs?