How to create a MongoDB index with ISODate?

I’ve created an index for a query and it runs in less than 1 second to retrieve millions of rows, but as soon as I add a greater-than ISODate condition, it doesn’t use the index anymore.

The query is this one:

db.getCollection("Coll").find(
    {
        "Field1.date": {
            $gte: ISODate("2019-11-27T00:00:00.000Z"),
            $lte: ISODate("2019-11-28T00:00:00.000Z")
        }
    },
    {
        _id: 1,
        "Field2": 1,
        "Field3": 1,
        "Field4": 1
    }
)

and I created an index like this:

db.Coll.createIndex(
    {
        _id: 1,
        "Field2": 1,
        "Field4": 1,
        "Field1.date": 1
    },
    { background: true, name: "idx_name_date" }
)

but it seems this “Field1.date” doesn’t work with ISODate.

ag.algebraic geometry – Does there exist a GRR-like generalization of the AS Index Theorem?

The Hirzebruch-Riemann-Roch Theorem (HRR) expresses an analytic/algebraic invariant, namely the Euler-Poincaré characteristic of a vector bundle $V$ over a compact complex/algebraic manifold $X$, as the evaluation of a cohomological expression. It has different manifestations in the analytic and algebraic categories; the general form in the analytic category is something like
$$
\chi(X,V) = T(X,V),
$$

where $X$ is a compact complex manifold, $V$ a holomorphic vector bundle, $\chi(X,V) = \sum_{i \ge 0} (-1)^i \dim_{\mathbb{C}} H^i(X,\mathcal{O}(V))$
the holomorphic Euler-Poincaré characteristic, and $T(X,V)$ a particular cohomological expression (see (1), § 21; (2)).
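For orientation, in the classical case the cohomological expression $T(X,V)$ is the familiar Chern-Todd pairing (standard statement, supplied here for convenience; cf. (1)):
$$
\chi(X,V) = \int_X \operatorname{ch}(V)\,\operatorname{td}(X),
$$
the integral denoting evaluation on the fundamental class of $X$.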

Subsequently, there were two important generalizations of this result:

  • The Grothendieck-Riemann-Roch Theorem (GRR). This shifts the focus from objects $(X,V)$ to morphisms $f:X \rightarrow Y$, and to coherent sheaves, which are amenable to being pushed forward, and so its algebraic manifestation appears as a commutative diagram
    $\require{AMScd}$
    \begin{CD}
    K_{\omega}(X) @>f_!>> K_{\omega}(Y)\\
    @V\tau_XVV @VV\tau_YV\\
    K_{\text{coh}}(X) @>>f_*> K_{\text{coh}}(Y)
    \end{CD}

    Here, $f:X \rightarrow Y$ is a morphism of algebraic varieties, $K_{\omega} := (K_{\omega}(-),(-)_!)$ an algebraic theory built from coherent sheaves, $K_{\text{coh}} := (K_{\text{coh}}(-), (-)_*)$ a cohomological theory, and $\tau: K_{\omega} \rightarrow K_{\text{coh}}$ a natural transformation of theories (see (3); (4) Theorem 18.3; (5)). This generalizes (HRR) insofar as an appropriate instance of (HRR) is obtained from an appropriate instance of (GRR) by applying it to the morphism $X \rightarrow \text{pt}$.

  • The Atiyah-Singer Index Theorem (ASI). This shifts the focus from objects $(X,V)$ to elliptic (pseudodifferential) operators $D:\mathcal{C}^{\infty}(X,E) \rightarrow \mathcal{C}^{\infty}(X,F)$ on real $\mathcal{C}^{\infty}$-vector bundles over a compact real $\mathcal{C}^{\infty}$-manifold $X$. It takes the form
    $$
    \text{index}(D) = T(X;E,F)
    $$

    where
    $$
    \text{index}(D) := \dim \ker(D) - \dim \operatorname{coker}(D)
    $$

    is an analytical invariant and $T(X;E,F)$ a cohomological (topological) invariant (see (6), Theorem (6.8)). Then (HRR) arises by specializing $D$ to the Dolbeault operator $\overline{\partial}$ (see (7), Theorem (4.3)).

Question:

Does there exist a generalization of (ASI) to a relative form, (GAS) say, in a similar fashion to how (HRR) generalizes to (GRR), so that we have a square of generalizations
$\require{AMScd}$
\begin{CD}
(HRR) @>>> (ASI)\\
@VVV @VVV\\
(GRR) @>>> (GAS)
\end{CD}

such that (GAS) specializes, on the one hand, to (ASI) and, on the other hand, to (GRR) (and, as the cherry on the cake, does everything equivariantly)?

_

(1) Hirzebruch, F. —
Topological Methods in Algebraic Geometry
(Reprint of the 1978 Edition). Springer 1978.

(2) O’Brian, N.R. et al. —
Hirzebruch-Riemann-Roch for coherent sheaves, Amer. J. Math. 103, (1981), 253-271.

(3) Borel, A. & Serre, J.-P. —
Le théorème de Riemann-Roch (d’après Grothendieck), Bull. Soc. Math. France 86 (1958), 97-136.

(4) Fulton, W. —
Intersection Theory (2nd Ed.), Springer 1998.

(5) Baum, P. et al. —
Riemann-Roch and topological K-theory for singular varieties, Acta Math. 143 (1979), 155-192.

(6) Atiyah, M.F. & Singer, I.M. —
The index of elliptic operators I, Ann. Math. 87 (1968), 484-530.

(7) Atiyah, M.F. & Singer, I.M. —
The index of elliptic operators III, Ann. Math. 87 (1968), 546-604.

Apache download index only when not entered manually

No change was made to any configuration file; the server started acting like this even after I restarted it.
When I manually enter localhost/index.html, everything is displayed fine,
but when I type just localhost,
I get a download prompt.
I downloaded the file out of curiosity, opened it with a text editor,
and surprisingly found the index.html content inside it!
Any explanation why this behavior exists, please? And how do I stop it?
A snippet from the httpd.conf file:

<Directory "${INSTALL_DIR}/www/">
    Options +Indexes +FollowSymLinks +Multiviews
    AllowOverride all
    Require local
</Directory> 

and from the httpd-vhosts.conf file:

<VirtualHost *:80>
  ServerName localhost
  ServerAlias localhost
  DocumentRoot "${INSTALL_DIR}/www"
  <Directory "${INSTALL_DIR}/www/">
    Options +Indexes +Includes +FollowSymLinks +MultiViews
    AllowOverride All
    Require local
  </Directory>
</VirtualHost>
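For comparison, the mapping from a bare / request to index.html is done by mod_dir, and the Content-Type that decides "display vs. download" comes from mod_mime. A minimal working combination of those directives (standard Apache directive names; the paths are placeholders for your installation) usually looks like this:

```apache
# mod_dir: which file to serve when a bare directory URL is requested
DirectoryIndex index.html

# mod_mime: map extensions to Content-Type. If .html has no type
# mapping, some setups fall back to application/octet-stream,
# which browsers treat as a download.
TypesConfig conf/mime.types
AddType text/html .html
```

A download prompt containing the correct index.html content is consistent with the file being served with a non-HTML Content-Type rather than not being found.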

optimization – Why does adding an index increase the execution time in SQLite?

I’ll just show you an example. Here candidates is a table of 1,000,000 candidates from 1,000 teams and their individual scores. We want a list of all teams, and whether the total score of all candidates within each team is within the top 50. (Yes, this is similar to the example from another question, which I encourage you to look at, but I assure you it is not a duplicate.)

Note that all CREATE TABLE results AS ... statements are identical; the only difference is the presence of indices. These tables are created (and dropped) to suppress the query results so that they won’t make a lot of noise in the output.

------------
-- Set up --
------------

-- A persistent database file is required
.open delete-me.db

.print ''
.print '(Set up)'

DROP TABLE IF EXISTS candidates;

CREATE TABLE candidates AS
WITH RECURSIVE candidates(team, score) AS (
    SELECT ABS(RANDOM()) % 1000, 1
    UNION
    SELECT ABS(RANDOM()) % 1000, score + 1
    FROM candidates
    LIMIT 1000000
)
SELECT team, score
FROM candidates;


-------------------
-- Without Index --
-------------------

.print ''
.print '(Without Index)'

DROP TABLE IF EXISTS results;

ANALYZE;

.timer ON
.eqp   ON
CREATE TABLE results AS
WITH top_teams_verbose(top_team, total_score) AS (
    SELECT team, SUM(score)
    FROM candidates
    GROUP BY team
    ORDER BY 2 DESC
    LIMIT 50
),
top_teams AS (
    SELECT top_team
    FROM top_teams_verbose
)
SELECT team, SUM(team IN top_teams)
FROM candidates
GROUP BY team;
.eqp   OFF
.timer OFF


------------------------------
-- With Single-column Index --
------------------------------

.print ''
.print '(With Single-column Index)'

CREATE INDEX candidates_idx_1 ON candidates(team);

DROP TABLE IF EXISTS results;

ANALYZE;

.timer ON
.eqp   ON
CREATE TABLE results AS
WITH top_teams_verbose(top_team, total_score) AS (
    SELECT team, SUM(score)
    FROM candidates
    GROUP BY team
    ORDER BY 2 DESC
    LIMIT 50
),
top_teams AS (
    SELECT top_team
    FROM top_teams_verbose
)
SELECT team, SUM(team IN top_teams)
FROM candidates
GROUP BY team;
.eqp   OFF
.timer OFF


-----------------------------
-- With Multi-column Index --
-----------------------------

.print ''
.print '(With Multi-column Index)'

CREATE INDEX candidates_idx_2 ON candidates(team, score);

DROP TABLE IF EXISTS results;

ANALYZE;

.timer ON
.eqp   ON
CREATE TABLE results AS
WITH top_teams_verbose(top_team, total_score) AS (
    SELECT team, SUM(score)
    FROM candidates
    GROUP BY team
    ORDER BY 2 DESC
    LIMIT 50
),
top_teams AS (
    SELECT top_team
    FROM top_teams_verbose
)
SELECT team, SUM(team IN top_teams)
FROM candidates
GROUP BY team;
.eqp   OFF
.timer OFF

Here is the output:

(Set up)

(Without Index)
QUERY PLAN
|--SCAN TABLE candidates
|--USE TEMP B-TREE FOR GROUP BY
`--LIST SUBQUERY 3
   |--CO-ROUTINE 1
   |  |--SCAN TABLE candidates
   |  |--USE TEMP B-TREE FOR GROUP BY
   |  `--USE TEMP B-TREE FOR ORDER BY
   `--SCAN SUBQUERY 1
Run Time: real 0.958 user 0.923953 sys 0.030911

(With Single-column Index)
QUERY PLAN
|--SCAN TABLE candidates USING COVERING INDEX candidates_idx_1
`--LIST SUBQUERY 3
   |--CO-ROUTINE 1
   |  |--SCAN TABLE candidates USING INDEX candidates_idx_1
   |  `--USE TEMP B-TREE FOR ORDER BY
   `--SCAN SUBQUERY 1
Run Time: real 2.487 user 1.108399 sys 1.375656

(With Multi-column Index)
QUERY PLAN
|--SCAN TABLE candidates USING COVERING INDEX candidates_idx_1
`--LIST SUBQUERY 3
   |--CO-ROUTINE 1
   |  |--SCAN TABLE candidates USING COVERING INDEX candidates_idx_2
   |  `--USE TEMP B-TREE FOR ORDER BY
   `--SCAN SUBQUERY 1
Run Time: real 0.270 user 0.248629 sys 0.014341

While the covering index candidates_idx_2 does help, it seems that the single-column index candidates_idx_1 makes the query significantly slower, even after I ran ANALYZE;. It’s only 2.5x slower in this case, but I think the factor can be made greater if you fine-tune the numbers of candidates and teams.

Why is that?
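For anyone who wants to poke at the plans outside the SQLite shell, here is a minimal Python sqlite3 sketch (a much smaller, hypothetical dataset with the same table shape as above) showing how the EXPLAIN QUERY PLAN output changes once a covering index exists:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Tiny stand-in for the candidates table: 1000 rows, 10 teams
cur.execute("CREATE TABLE candidates(team INTEGER, score INTEGER)")
cur.executemany(
    "INSERT INTO candidates VALUES (?, ?)",
    [(i % 10, i) for i in range(1000)],
)

def plan(sql):
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail)
    return [row[3] for row in cur.execute("EXPLAIN QUERY PLAN " + sql)]

query = "SELECT team, SUM(score) FROM candidates GROUP BY team"
print(plan(query))  # full table scan, temp b-tree for the GROUP BY

cur.execute("CREATE INDEX candidates_idx ON candidates(team, score)")
print(plan(query))  # covering-index scan; index order serves the GROUP BY
```

The exact plan wording varies between SQLite versions, but the before/after difference mirrors the shell output above.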

magento2 – Auto Increment column do not have index. Column – “value_id”, table – “catalog_product_entity_text”

I’m getting the following error after upgrading Magento to version 2.3.5 and running php bin/magento setup:upgrade:

Auto Increment column do not have index. Column – “value_id”, table –
“catalog_product_entity_text”

I tried to add an index as follows in the catalog_product_entity_text table, but I still get the same error message.

Keyname   Type   Unique  Packed  Column    Cardinality  Collation  Null
value_id  BTREE  Yes     No      value_id  1524         A          No

Ubuntu server with apache2 shows index of/

I am fairly new to the world of Ubuntu and Apache2, and I’ve become a little confused about the directories and some other things.

  1. I have read that the default web directory is /var/www/. I have my project in /home/user/project. Is that wrong?

  2. Also, I’ve actually gotten my project (a Django website) to work, but right after I got the SSL certificate, it stopped working. I have tried to change the document root to the path where my project is, in every file (apache2.conf, website.conf, website-le-ssl.conf). Can someone tell me how I can get my site to work? NOTE: I am greeted with the page: 403 Forbidden: You don’t have permission to access this resource.

  3. I have tried some different configurations in the files mentioned above, and I have also been greeted by the "Index of /" page.

I really have no idea what to do now. I have searched on Google and the various forums, but nothing that I have tried works for me.

postgresql – Postgres use index to create index

I have a partial index where a certain column is not null. This is a very small percentage of the table. Thanks to this index, SELECT * FROM table WHERE column IS NOT NULL is incredibly fast (5 milliseconds). But the table has hundreds of millions of rows.

If I want to create a second index on the same set of rows (where that same column is not null), how can I make Postgres use the first index that already exists to find those rows? Currently Postgres just scans the entire table to find them again, which takes many minutes. I can query all the rows in milliseconds, so why can’t CREATE INDEX get them the same way?

python – List index out of range issue

I have a list of elements like (1, 3, 5, 6, 8, 7).
I want a list of the sums of two consecutive elements of the list, in such a way that the last element is also added to the first element of the list.
I mean, in the above case, I want this list:
(4, 8, 11, 14, 15, 8)

But when it comes to the addition of the last and first elements during the for loop, an index-out-of-range error occurs.
Consider the following code:

List1 = [1, 3, 5, 6, 8, 7]
List2 = [List1[i] + List1[i+1] for i in range(len(List1))]

print(List2)
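For reference, the wrap-around can be handled with a modulo index, so the last element pairs with the first instead of running off the end. This minimal sketch reproduces the desired output stated above:

```python
List1 = [1, 3, 5, 6, 8, 7]
# (i + 1) % len(List1) wraps the final index back to 0,
# so the last element is added to the first
List2 = [List1[i] + List1[(i + 1) % len(List1)] for i in range(len(List1))]
print(List2)  # [4, 8, 11, 14, 15, 8]
```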

sql server – How can INDEX rebuilds be going parallel when MAXDOP is set to 1

It is running a stored procedure named proc_DefragmentIndices in many (possibly all) of these databases at a time.

The stored procedure rebuilds every index in the database unconditionally.

I assume you don’t have any control over this, but I’d be remiss if I didn’t suggest that you make this stop happening as soon as possible 😀 Use Ola Hallengren’s scripts, which will at least rebuild or reorganize only at specific fragmentation thresholds. Or consider avoiding index rebuilds in favor of statistics updates.

How can INDEX rebuilds be going parallel when MAXDOP is set to 1?

They shouldn’t be. And they really shouldn’t be, because you’re using Standard Edition, and parallel index operations are an Enterprise Edition feature. I wonder if there is just something weird about the way this is showing up in Activity Monitor.

Activity Monitor shows the ALTER INDEX commands but sp_WhoIsActive does not. Does anyone know why?

That’s another red flag that something is off here. Index rebuilds should show up just fine in sp_WhoIsActive. Here’s me rebuilding the Posts table (from the StackOverflow2010 sample database) on my laptop:

USE StackOverflow2010;

ALTER INDEX PK_Posts__Id ON dbo.Posts REBUILD;

(screenshot of sp_WhoIsActive results)

As a further point of investigation, you could query sys.dm_os_tasks directly when you see one of these ALTER INDEX sessions that appears to have gone parallel, to be extra sure whether it’s serial (as it should be) or parallel.

optimization – How to create index to improve performance of an aggregate function that creates a table in oracle

I am creating an Oracle ORDS API using APEX_JSON. I recently started using bind variables instead of string concatenation with ||. I am trying to use an IN clause in my WHERE condition.

The problems begin here. The field I need on the left side of IN is a number, while the parameter to my stored procedure needs to be VARCHAR2, as it is a comma-separated list of numbers.

Example (edited for brevity)

CREATE OR REPLACE PROCEDURE GET_CATEGORYPRODS (
    PCATEGORYID IN NUMBER,
    COMMASEPPRODUCTIDS IN VARCHAR2
) AS

l_cursor               SYS_REFCURSOR;
v_stmt_str             VARCHAR2(5000);
v_productid            NUMBER; --PRODUCT.PRODUCTID%TYPE;
v_displayorder         NUMBER; --PRODUCTCATEGORY.DISPLAYORDER%TYPE;
s_counter              NUMBER;
BEGIN
 v_stmt_str := 'SELECT 
    P.PRODUCTID, 
    PC.DISPLAYORDER
FROM 
    PRODUCT P
INNER JOIN
    PRODUCTCATEGORY PC
ON P.PRODUCTID = PC.PRODUCTID
WHERE 
   PC.CATEGORYID = :CATEGORYID
AND
   (P.PRODUCTID IN (SELECT * FROM TABLE(STRING_TO_TABLE_NUM(:COMMASEPPRODUCTIDS))) -- PREVIOUSLY WHERE || OCCURRED
        OR (:COMMASEPPRODUCTIDS IS NULL))';

s_counter := 0;

OPEN l_cursor FOR v_stmt_str
        USING pcategoryid, commasepproductids, commasepproductids;

APEX_JSON.OPEN_ARRAY;
LOOP
    FETCH l_cursor INTO
        v_productid,
        v_displayorder;
    EXIT WHEN l_cursor%notfound;
    apex_json.open_object;
    apex_json.write('ProductID', v_productid);
    apex_json.write('DisplayOrder', v_displayorder);
    apex_json.close_object;
END LOOP;
apex_json.close_all;
CLOSE l_cursor;

END GET_CATEGORYPRODS;

Sample parameter value:
'97187,142555,142568,48418,43957,44060,45160,45171,333889,333898'

To handle this problem, I created a pipelined table function that takes in a string, splits it on the commas, and pipes each value as a row of a custom collection type.

Custom Type

create or replace type tab_number is table of number;

Pipelined Table Function

create or replace FUNCTION string_to_table_num (
    p VARCHAR2
)
   RETURN tab_number
   PIPELINED IS
BEGIN
   FOR cc IN (SELECT rtrim(regexp_substr(str, '[^,]*,', 1, level), ',') res
                FROM (SELECT p || ',' str FROM dual)
              CONNECT BY level <= length(str) 
                                  - length(replace(str, ',', ''))) LOOP
      PIPE ROW(to_number(cc.res));
   END LOOP;
   RETURN;
END;

The query slowed down significantly. I figured some optimization was needed, but I had never done any sort of optimization before. After some research, I found EXPLAIN PLAN and ran it on the original query. I couldn’t get a good answer because of the bind variables, so I decided to run it on the table function instead.

EXPLAIN PLAN QUERIES

explain plan for select * from TABLE(string_to_table_num('97187,142555,142568,48418,43957,44060,45160,45171,333889,333898'));

SELECT * 
FROM   TABLE(DBMS_XPLAN.DISPLAY);

When I ran EXPLAIN PLAN for the table function, the results were:

Plan hash value: 127161297
 
---------------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name                | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |                     |  8168 | 16336 |    29   (0)| 00:00:01 |
|   1 |  COLLECTION ITERATOR PICKLER FETCH| STRING_TO_TABLE_NUM |  8168 | 16336 |    29   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------------

As I stated before, I am a noob at analyzing and optimizing queries, but 8168 rows and 16336 bytes seems like a lot for such a simple function. I looked into it and found that the problem may be the lack of indexing on the pipelined table. I tried to add an index to the type tab_number, but that turned it into a PL/SQL object that needed to be declared in a query, not a function.

I am pretty lost with this one. If you have any suggestions for any of the scenarios I mentioned, I am all ears. Thanks in advance.