sql server – Grouping/sorting performance choice between bigint and nvarchar

I want to store a hash-code for a variable-length text field (max 1000 chars) in a database table. The hash-code will be computed and assigned once on insert, and new rows will be inserted very often.

The hash-code will be used mainly for filtering (WHERE), grouping (GROUP BY), and sorting (ORDER BY) in a couple of queries. The database table will hold a few million rows over time, with the probability of identical hash-codes (for identical text) being around 30% (rest being unique).

I have the choice of making the hash-code data type NVARCHAR (the SHA-1 of the text) or BIGINT (the bytes of the SHA-1 converted to an integer). I think BIGINT will be better in terms of storage space (fewer pages).

Generally speaking, which of these two data types will be better in terms of performance, considering the operations mentioned above?
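For reference, this is roughly how I would compute and store the BIGINT variant (just a sketch; dbo.Documents and DocumentText are placeholder names, and I would take the first 8 bytes of the SHA-1 digest):

-- placeholder table/column names; hash is assigned once per row (shown here as a backfill)
ALTER TABLE dbo.Documents ADD TextHash BIGINT NULL;

UPDATE dbo.Documents
SET TextHash = CONVERT(BIGINT, SUBSTRING(HASHBYTES('SHA1', DocumentText), 1, 8));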

performance – MySQL event scheduler waiting on empty queue since server restarted 12 days ago

I noticed a process on the server which has been running for more than 12 days, which I think coincides with the last time MySQL was restarted.

mysql> SHOW FULL PROCESSLIST;

+---------+-----------------+-----------+------+---------+---------+------------------------+-----------------------+
| Id      | User            | Host      | db   | Command | Time    | State                  | Info                  |
+---------+-----------------+-----------+------+---------+---------+------------------------+-----------------------+
|       5 | event_scheduler | localhost | NULL | Daemon  | 1098372 | Waiting on empty queue | NULL                  |
| 1774483 | root            | localhost | NULL | Query   |       0 | starting               | SHOW FULL PROCESSLIST |
+---------+-----------------+-----------+------+---------+---------+------------------------+-----------------------+
2 rows in set (0.00 sec)

There are no events, and I haven’t attempted to create any.

mysql> SELECT * FROM information_schema.EVENTS;

Empty set (0.00 sec)

This is actively using up to 8% of my server’s CPU.

Is there a way of determining what this is, or why it was started? Will this try to run every time I restart MySQL? If so, what is it ‘waiting’ for and do I need to tweak my configuration at all to prevent this?

MySQL 8.0.21
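I assume I could switch the scheduler off like this (untested; I suppose setting event_scheduler=OFF in my.cnf would make it permanent), but I'm not sure whether that's the right fix:

SHOW VARIABLES LIKE 'event_scheduler';
SET GLOBAL event_scheduler = OFF;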

query performance – Optimize for a lot of subqueries on MySQL

I have an ugly query on MySQL. I cannot share the whole query because my customer does not allow it. There are a lot of subqueries in the query. Sometimes the queries get stuck in the statistics state. Some documentation says this depends on the server's optimizer_search_depth configuration parameter. I tried 0 and 1, but nothing changed; the queries still time out.

MySQL version 8.0.20 on AWS RDS.
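For reference, this is how I changed the parameter before re-running the query (session scope only):

SET SESSION optimizer_search_depth = 0;
-- and then, in a second attempt:
SET SESSION optimizer_search_depth = 1;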


Here is the EXPLAIN result.

+--+-----------+-----+----------+------+-------+----+--------+----------------------------------+
|id|select_type|table|partitions|type  |key_len|rows|filtered|Extra                             |
+--+-----------+-----+----------+------+-------+----+--------+----------------------------------+
|1 |PRIMARY    |NULL |NULL      |NULL  |NULL   |NULL|NULL    |No tables used                    |
|45|SUBQUERY   |td   |NULL      |ref   |96     |48  |100     |NULL                              |
|45|SUBQUERY   |ti   |NULL      |eq_ref|8      |1   |100     |Using where                       |
|45|SUBQUERY   |c    |NULL      |ref   |110    |1   |100     |Using index                       |
|43|SUBQUERY   |NULL |NULL      |NULL  |NULL   |NULL|NULL    |Impossible WHERE                  |
|44|SUBQUERY   |ti   |NULL      |ref   |78     |3   |1.67    |Using where                       |
|44|SUBQUERY   |td   |NULL      |ref   |8      |1   |4.85    |Using where                       |
|42|SUBQUERY   |td   |NULL      |ref   |78     |2   |100     |Using index                       |
|42|SUBQUERY   |ti   |NULL      |eq_ref|8      |1   |5       |Using where                       |
|41|SUBQUERY   |td   |NULL      |ref   |78     |10  |100     |Using index                       |
|41|SUBQUERY   |ti   |NULL      |eq_ref|8      |1   |5       |Using where                       |
|40|SUBQUERY   |td   |NULL      |ref   |96     |48  |100     |NULL                              |
|40|SUBQUERY   |ti   |NULL      |eq_ref|8      |1   |5       |Using where                       |
|39|SUBQUERY   |ti   |NULL      |ref   |387    |1   |5       |Using where                       |
|38|SUBQUERY   |ti   |NULL      |ref   |111    |1   |5       |Using where                       |
|37|SUBQUERY   |ti   |NULL      |ref   |111    |1   |100     |Using where                       |
|36|SUBQUERY   |ti   |NULL      |ref   |303    |49  |100     |Using where; Using index          |
|36|SUBQUERY   |c    |NULL      |ref   |110    |1   |100     |Using index                       |
|35|SUBQUERY   |ti   |NULL      |ref   |78     |3   |100     |Using where; Using index          |
|35|SUBQUERY   |c    |NULL      |ref   |110    |1   |100     |Using index                       |
|33|SUBQUERY   |t    |NULL      |ref   |752    |2   |2.5     |Using where                       |
|32|SUBQUERY   |t    |NULL      |ref   |752    |2   |5       |Using where                       |
|31|SUBQUERY   |ti   |NULL      |ref   |753    |10  |3.77    |Using where                       |
|30|SUBQUERY   |td   |NULL      |ref   |1203   |1   |100     |NULL                              |
|30|SUBQUERY   |ti   |NULL      |eq_ref|8      |1   |100     |Using where                       |
|30|SUBQUERY   |c    |NULL      |ref   |110    |1   |100     |Using index                       |
|29|SUBQUERY   |ti   |NULL      |range |159    |11  |0.45    |Using index condition; Using where|
|28|SUBQUERY   |ti   |NULL      |range |159    |11  |0.45    |Using index condition; Using where|
|28|SUBQUERY   |td   |NULL      |ref   |8      |1   |100     |Using where                       |
|27|SUBQUERY   |td   |NULL      |ref   |414    |1   |100     |Using index                       |
|27|SUBQUERY   |ti   |NULL      |eq_ref|8      |1   |5       |Using where                       |
|26|SUBQUERY   |td   |NULL      |ref   |414    |1   |100     |Using index                       |
|26|SUBQUERY   |ti   |NULL      |eq_ref|8      |1   |5       |Using where                       |
|25|SUBQUERY   |ti   |NULL      |ref   |303    |14  |0.36    |Using where                       |
|25|SUBQUERY   |td   |NULL      |ref   |8      |1   |4.85    |Using where                       |
|24|SUBQUERY   |ti   |NULL      |ref   |303    |14  |0.36    |Using where                       |
|24|SUBQUERY   |td   |NULL      |ref   |8      |1   |4.85    |Using where                       |
|23|SUBQUERY   |td   |NULL      |ref   |189    |1   |100     |Using index                       |
|23|SUBQUERY   |ti   |NULL      |eq_ref|8      |1   |5       |Using where                       |
|22|SUBQUERY   |td   |NULL      |ref   |189    |1   |100     |Using index                       |
|22|SUBQUERY   |ti   |NULL      |eq_ref|8      |1   |5       |Using where                       |
|21|SUBQUERY   |ti   |NULL      |range |84     |1   |100     |Using index condition; Using where|
|21|SUBQUERY   |td   |NULL      |ref   |8      |1   |4.85    |Using where                       |
|20|SUBQUERY   |ti   |NULL      |range |84     |1   |100     |Using index condition; Using where|
|20|SUBQUERY   |td   |NULL      |ref   |8      |1   |4.85    |Using where                       |
|19|SUBQUERY   |ti   |NULL      |ref   |753    |10  |0.5     |Using index condition; Using where|
|19|SUBQUERY   |td   |NULL      |ref   |8      |1   |4.85    |Using where                       |
|18|SUBQUERY   |ti   |NULL      |ref   |753    |10  |0.5     |Using index condition; Using where|
|18|SUBQUERY   |td   |NULL      |ref   |8      |1   |4.85    |Using where                       |
|17|SUBQUERY   |ti   |NULL      |range |462    |2   |2.5     |Using index condition; Using where|
|17|SUBQUERY   |td   |NULL      |ref   |8      |1   |4.85    |Using where                       |
|16|SUBQUERY   |ti   |NULL      |range |84     |1   |10      |Using index condition; Using where|
|15|SUBQUERY   |ti   |NULL      |range |912    |2   |2.5     |Using index condition; Using where|
|14|SUBQUERY   |ti   |NULL      |ref   |753    |10  |0.5     |Using index condition; Using where|
|13|SUBQUERY   |ti   |NULL      |range |159    |11  |0.45    |Using index condition; Using where|
|13|SUBQUERY   |td   |NULL      |ref   |8      |1   |4.85    |Using where                       |
|12|SUBQUERY   |ti   |NULL      |range |462    |2   |2.5     |Using index condition; Using where|
|11|SUBQUERY   |ti   |NULL      |range |84     |1   |100     |Using index condition             |
|10|SUBQUERY   |ti   |NULL      |ref   |303    |14  |0.36    |Using where                       |
|9 |SUBQUERY   |td   |NULL      |ref   |96     |48  |100     |NULL                              |
|9 |SUBQUERY   |ti   |NULL      |eq_ref|8      |1   |5       |Using where                       |
|8 |SUBQUERY   |ti   |NULL      |ref   |753    |10  |50      |Using where                       |
|8 |SUBQUERY   |td   |NULL      |ref   |8      |1   |4.85    |Using where                       |
|7 |SUBQUERY   |ti   |NULL      |ref   |111    |1   |100     |Using index                       |
|7 |SUBQUERY   |td   |NULL      |ref   |8      |1   |4.85    |Using where                       |
|5 |SUBQUERY   |ti   |NULL      |ref   |387    |1   |50      |Using where                       |
|5 |SUBQUERY   |c    |NULL      |ref   |110    |1   |100     |Using index                       |
|4 |SUBQUERY   |ti   |NULL      |ref   |753    |10  |1.85    |Using where                       |
|3 |SUBQUERY   |td   |NULL      |ref   |78     |10  |100     |Using index                       |
|3 |SUBQUERY   |ti   |NULL      |eq_ref|8      |1   |65.05   |Using where                       |
|3 |SUBQUERY   |c    |NULL      |ref   |110    |1   |100     |Using index                       |
|2 |SUBQUERY   |ti   |NULL      |ref   |78     |10  |100     |Using where; Using index          |
|2 |SUBQUERY   |td   |NULL      |ref   |8      |1   |100     |Using index                       |
|2 |SUBQUERY   |c    |NULL      |ref   |110    |1   |100     |Using index                       |
+--+-----------+-----+----------+------+-------+----+--------+----------------------------------+

performance – CPU for SQL Server

We want to buy a new server for SQL Server.

We have around 10 databases, and heavy jobs every 10 minutes that process millions of records.

We also have around 20 end users.

We want to know what the best CPU for this server would be:

What are the advantages/disadvantages of Option 2 vs Option 1 (same core count)?

And multi-CPU vs multi-core: Option 1 vs Option 3, which has a higher clock speed but two smaller CPUs?

Option 1

Intel Xeon Platinum 8280, 28C/56T, 2.70 GHz / 4.00 GHz Turbo

Price $20,564

https://www.dell.com/en-us/work/shop/servers-storage-and-networking/poweredge-r640-rack-server/spd/poweredge-r640/pe_r640_12232c_vi_vp?view=configurations&configurationid=ba0f44e1-7290-4300-aee3-936b7f24e1ae

Option 2

Intel Xeon Gold 6258R, 28C/56T, 2.70 GHz / 4.00 GHz Turbo

Price $13,704

https://www.dell.com/en-us/work/shop/servers-storage-and-networking/poweredge-r640-rack-server/spd/poweredge-r640/pe_r640_12232c_vi_vp?view=configurations&configurationid=7ad77f56-2569-4eda-b051-0aa1624d9d45

Option 3

2 × Intel Xeon Gold 6246R, 16C/32T each (32C/64T total), 3.40 GHz / 4.10 GHz Turbo

Price $16,865

https://www.dell.com/en-us/work/shop/servers-storage-and-networking/poweredge-r640-rack-server/spd/poweredge-r640/pe_r640_12232c_vi_vp?view=configurations&configurationid=415b8bb1-c446-4836-aec4-16b1569eb915

Here is a comparison of the CPUs:

https://ark.intel.com/content/www/us/en/ark/compare.html?productIds=199353,199350,192478

Thanks

Michael

performance – Reddit bot in Python

I have created a Reddit bot that goes through an “x” number of posts. For each post it pulls all the comments into a list and then a DataFrame; it then loops over a CSV file looking for words that match a ticker in the CSV, and finally it outputs a sorted DataFrame.

Is there anything I could improve, or ways to make the code more object-oriented?

autho_dict structure

autho = {
'app_id': '',
'secret': '',
'username': '',
'password': '',
'user_agent': ""
}

The rest of the code.

from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
import re
import os
import praw
import pandas as pd
import datetime as dt


class WallStreetBetsSentiment:
    def __init__(self, autho_dict, posts):
        self.__authentication = autho_dict
        self.__posts = posts
        self.__comment_list = []
        self.__title_list = []
        self.__ticker_list = pd.read_csv(
            os.path.dirname(os.path.dirname(os.path.abspath(__file__))) + "\\dependencies\\ticker_list.csv")
        self.__sia = SentimentIntensityAnalyzer()

    @property
    # creates instance of reddit using authentication from app.WSBAuthentication
    def __connect(self):
        return praw.Reddit(
            client_id=self.__authentication.get("app_id"),
            client_secret=self.__authentication.get("secret"),
            username=self.__authentication.get("username"),
            password=self.__authentication.get("password"),
            user_agent=self.__authentication.get("user_agent")
        )

    @property
    # fetches data from a specified subreddit using a filter method e.g. recent, hot
    def __fetch_data(self):
        sub = self.__connect.subreddit("wallstreetbets")  # select subreddit
        new_wsb = sub.hot(limit=self.__posts)  # pulls the top self.__posts "hot" submissions from the subreddit
        return new_wsb

    @property
    # saves the comments of posts to a dataframe
    def __break_up_data(self):
        for submission in self.__fetch_data:
            self.__title_list.append(submission.title)  # creates list of post subjects, elements strings
            submission.comments.replace_more(limit=1)

            for comment in submission.comments.list():
                dictionary_data = {comment.body}
                self.__comment_list.append(dictionary_data)
        return pd.DataFrame(self.__comment_list, columns=['Comments'])

    # saves all comments to an Excel file in 'logs'
    def debug(self):
        save_file = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) + "\\logs"
        return self.__break_up_data.to_excel(save_file + "\\log-{}.xlsx".format(
            dt.datetime.now().strftime("T%H.%M.%S_D%Y-%m-%d")), sheet_name='Debug-Log')

    # loops through comments to find tickers in self.__ticker_list
    def parser(self, enable_debug=bool):
        ticker_list = list(self.__ticker_list['Symbol'].unique())
        # titlelist = list(df2['Titles'].unique())
        comment_list = list(self.__break_up_data['Comments'].unique())
        ticker_count_list = []

        for ticker in ticker_list:
            count = []
            sentiment = 0
            for comment in comment_list:
                # count = count + re.findall(r'\s{}\s'.format(ticker), str(comment))
                count = count + re.findall((' ' + ticker + ' '), str(comment))

                if len(count) > 0:
                    score = self.__sia.polarity_scores(comment)
                    sentiment = score['compound']  # adding all the compound sentiments
            if len(count) > 0:
                ticker_count_list.append((ticker, len(count), (sentiment / len(count))))

        if enable_debug is True:
            self.debug()

        else:
            pass
        # ISSUE: the re.findall function would return match on AIN if someone says PAIN
        df4 = pd.DataFrame(ticker_count_list, columns=['Ticker', 'Count', 'Sentiment'])
        df4 = df4.sort_values(by='Count', ascending=False)
        return df4

performance – Is this function “in place” and O(1) space and O(n) time?

function reverseWords(message) {
  const max = message.length;
  let n = max;
  let wordLen = 0;
  for (let i = 0; i < max; i += 1) {
    if (message[i] === " ") {
      n -= wordLen;
      wordLen = 0;
      message.splice(max - wordLen, 0, " ");
    } else {
      message.splice(n, 0, message[i]);
      wordLen += 1;
      n += 1;
    }
  }
  message.splice(0, max);
}

//==========================================================================
/*Test cases using Jasmine (see https://github.com/jasmine/jasmine and https://jasmine.github.io/pages/docs_home.html):
describe("main tests, but showing just one for brevity", function () {

  it("rearranges correctly", function () {
    const message = [
      "m",
      "e",
      "s",
      "s",
      "a",
      "g",
      "e",
      " ",
      "t",
      "h",
      "e",
      " ",
      "i",
      "s",
      " ",
      "h",
      "e",
      "r",
      "e"
    ];

    reverseWords(message);
    expect(message.join("")).toEqual("here is the message");
  });
});*/

const message = [
      "m",
      "e",
      "s",
      "s",
      "a",
      "g",
      "e",
      " ",
      "t",
      "h",
      "e",
      " ",
      "i",
      "s",
      " ",
      "h",
      "e",
      "r",
      "e"
    ];

    reverseWords(message);
    console.log(message.join(""));

query performance – PostgreSQL extremely slow counts

I’m experiencing extremely slow count(*) speeds: in a table with 5,845,897 rows, counting the rows takes around 2 minutes (113832.950 ms).

EXPLAIN ANALYZE SELECT COUNT(*) FROM campaigns;
                                  QUERY PLAN
--------------------------------------------------------------------------------------
 Finalize Aggregate  (cost=1295067.02..1295067.03 rows=1 width=8) (actual time=113830.691..113830.691 rows=1 loops=1)
   ->  Gather  (cost=1295066.80..1295067.01 rows=2 width=8) (actual time=113830.603..113832.899 rows=3 loops=1)
         Workers Planned: 2
         Workers Launched: 2
         ->  Partial Aggregate  (cost=1294066.80..1294066.81 rows=1 width=8) (actual time=113828.327..113828.328 rows=1 loops=3)
               ->  Parallel Seq Scan on campaigns  (cost=0.00..1287889.84 rows=2470784 width=0) (actual time=2.560..113139.782 rows=1948632 loops=3)
 Planning Time: 0.130 ms
 Execution Time: 113832.950 ms
(8 rows)

After vacuuming the table I got the following results, but the query times remain unchanged.

SELECT relname AS TableName,n_live_tup AS LiveTuples,n_dead_tup AS DeadTuples, last_autovacuum AS Autovacuum, last_autoanalyze AS Autoanalyze FROM pg_stat_user_tables;
          tablename          | livetuples | deadtuples |          autovacuum           |          autoanalyze
-----------------------------+------------+------------+-------------------------------+-------------------------------
| campaigns                  |    5848489 |      84122 | 2020-11-21 15:27:54.309192+00 | 2020-11-21 15:29:38.547147+00

I would expect that this count would be quite quick even accounting for the size of the database.

The database is AWS RDS PostgreSQL 11.8 with 16 GB of RAM.

Update 1

Machine class: db.m4.xlarge – vCPU: 4, ECU: 13, RAM: 16GB, Storage: General Purpose (SSD) with 440GB

Repeating the query, this time with EXPLAIN (ANALYZE, BUFFERS) and I/O timings enabled:

EXPLAIN (ANALYZE, BUFFERS) SELECT COUNT(*) FROM campaigns;
                   QUERY PLAN
------------------------------------------------------------------------
 Finalize Aggregate  (cost=1311747.11..1311747.12 rows=1 width=8) (actual time=124550.322..124550.323 rows=1 loops=1)
   Buffers: shared hit=14 read=1279450 dirtied=449 written=167
   I/O Timings: read=364834.696 write=2.056
   ->  Gather  (cost=1311746.90..1311747.11 rows=2 width=8) (actual time=124550.218..124552.012 rows=3 loops=1)
         Workers Planned: 2
         Workers Launched: 2
         Buffers: shared hit=14 read=1279450 dirtied=449 written=167
         I/O Timings: read=364834.696 write=2.056
         ->  Partial Aggregate  (cost=1310746.90..1310746.91 rows=1 width=8) (actual time=124546.286..124546.286 rows=1 loops=3)
               Buffers: shared hit=14 read=1279450 dirtied=449 written=167
               I/O Timings: read=364834.696 write=2.056
               ->  Parallel Seq Scan on campaigns  (cost=0.00..1304490.32 rows=2502632 width=0) (actual time=0.298..123798.537 rows=1948646 loops=3)
                     Buffers: shared hit=14 read=1279450 dirtied=449 written=167
                     I/O Timings: read=364834.696 write=2.056
 Planning Time: 8.986 ms
 Execution Time: 124552.079 ms
(16 rows)
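In the meantime I am falling back to the planner's estimate as an approximate count (not exact, but it returns instantly):

SELECT reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname = 'campaigns';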

mysql – Dynamic pivot table filtering and performance

I have these two tables, and I’m trying to pivot the subscriber fields table so that each field becomes a column, and then filter the result based on multiple AND/OR conditions, like the following:

    WHERE  first_name LIKE 'm%' AND email LIKE '%com'

This is the fiddle
http://sqlfiddle.com/#!9/a7211d/1

These are my two tables.

Fields Table
+----+------------+
| id |label       |
+----+------------+
|  1 | email      |
|  2 | first_name |
|  3 | last_name  |
+----+------------+

Subscribers Fields Table
+----+--------------+----------+---------------+-------------------+
| id | mail_list_id | field_id | subscriber_id | value             |
+----+--------------+----------+---------------+-------------------+
|  1 |            1 |        1 |             1 | mark@examble.com  |
|  2 |            1 |        2 |             1 | Mark              |
|  3 |            1 |        3 |             1 | Wood              |
|  4 |            1 |        1 |             2 | luan@domain.com   |
|  3 |            1 |        2 |             2 | Luan              |
|  4 |            1 |        3 |             2 | Charles           |
|  5 |            1 |        1 |             3 | marry@domain.com  |
|  6 |            1 |        2 |             3 | Anna              |
|  7 |            1 |        3 |             3 | Marry             |
|  8 |            2 |        1 |             4 | kevin@domain.com  |
|  9 |            2 |        2 |             4 | Kevin             |
| 10 |            2 |        3 |             4 | Faustino          |
| 11 |            2 |        1 |             5 | frank@examble.com |
| 12 |            2 |        2 |             5 | Frank             |
| 13 |            2 |        3 |             5 | Denis             |
| 14 |            2 |        1 |             6 | max@example.com   |
| 15 |            2 |        2 |             6 | Max               |
| 16 |            2 |        3 |             6 | Ryan              |
+----+--------------+----------+---------------+-------------------+

This is what I tried, but it causes two issues: email and first_name return 0 instead of the value, and it also doesn’t work with the AND condition operator.

select 
  subscriber_id,
  MAX(case when field_id = '1' then value else 0 end) as email,
  MAX(case when field_id = '2' then value else 0 end) as first_name,
  MAX(case when field_id = '3' then value else 0 end) as last_name
from test_fields_table
WHERE (field_id = 3 AND value LIKE 'm%') OR (field_id = 1 AND value = '%com')
group by subscriber_id limit 100;

However, if I remove the WHERE condition, the query works fine with good performance.

I also tried wrapping my query in a subquery, giving it an alias, and then searching that generated virtual table using the alias field names instead of the field ids. But in that case I have to remove the LIMIT from the subquery so the search covers the full table rather than just the first 100 records, which gives very bad performance, since this table will be very large (100–500 million records) and I need the query result in under 4 seconds.
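This is roughly the subquery/alias version I mean (sketch only; without an inner LIMIT it has to scan the whole table):

SELECT *
FROM (
  SELECT
    subscriber_id,
    MAX(CASE WHEN field_id = 1 THEN value END) AS email,
    MAX(CASE WHEN field_id = 2 THEN value END) AS first_name,
    MAX(CASE WHEN field_id = 3 THEN value END) AS last_name
  FROM test_fields_table
  GROUP BY subscriber_id
) AS pivoted
WHERE first_name LIKE 'm%' AND email LIKE '%com'
LIMIT 100;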

C++ performance – Linear regression done another way

Here is code that can be used to fit a mathematical function of the form ax^2 + bx + c.

It is fast enough if you choose a small search range; otherwise, if the programmer doesn't know a narrow range, the code can be really slow. I wrote it in C++ specifically to make it faster.

#include <iostream>
#include <vector>

using namespace std;

template<class var>
var Module(var x){
    if (x >= 0)
        return x;
    else
        return x*-1;
}

class Linear {
public:
    float resA, resB, resC;
    float err;

    float Predict(float a, float b, float c, float x) {
        return ((a * (x*x)) + (b*x) + c);
    }

    float Predict(float x) {
        return ((resA * (x * x)) + (resB * x) + resC);
    }

    float ErrorAv(float a, float b, float c, vector<float> input, vector<float> output) {
        float error = Module(Predict(a, b, c, input[0]) - output[0]);
        for (int i = 1; i < input.size(); i++)
            error = (Module(Predict(a, b, c, input[i]) - output[i]) + error)/2;
        return error;
    }

    void LinearRegr(vector<float> input, vector<float> output, float maximum, float minimum = 0, float step = 1) {
        if (step == 0)
            step++;
        float a, b, c;

        float lastError = INFINITY;
        for (a = minimum; a <= maximum; a += step)
            for (b = minimum; b <= maximum; b += step)
                for (c = minimum; c <= maximum; c += step) {
                    float error = ErrorAv(a, b, c, input, output);
                    if (error < lastError) {
                        lastError = error;
                        resA = a;
                        resB = b;
                        resC = c;
                        err = lastError;

                        if (!lastError)
                            return;
                    }
                }
            
        
    }
};

#include <ctime>
int main(){
    vector<float> input; //Input example.
    vector<float> output; //Output example.

    float a = 10.5, b = -7, c = 5.5; //Variables as search values.

    //Fill dataset:
    for (int i = 0; i < 100; i++) {
        input.push_back(i);
        output.push_back((a * (i * i)) + (b * i) + c);
    }

    clock_t begin = clock(); //Start clock while searching a, b, c values
    Linear linear; 
    linear.LinearRegr(input, output, 15, -10, 0.5); //Start searching.
    cout << "Time: ~" << double(clock() - begin) / CLOCKS_PER_SEC << " seconds." << endl;

    cout << linear.resA << "*x^2 + " << linear.resB << "x + " << linear.resC << endl; //Enter results.
}

As you can see, the Linear::LinearRegr function can take from 3 to 5 parameters.
I don't need to make the code prettier (for me it's already pretty); I just want it to run faster.

How to optimize and make it faster?

entities – Do fields that are not displayed affect rendering performance?

This is a general question about how Drupal renders content. Say I have a content type with 20 fields, including some entity references, text fields, integers, and so on.

When I have a lot of fields on a content type, I know that each field slows down operations like saving the content type because each field has to be processed.

However, what about when Drupal renders the content type to display it? If I hide 19 of the 20 fields on “Manage Display” for the content type, will Drupal avoid processing all those fields and have the same performance as rendering a content type that only has 1 field?