Is there a better approach for pivoting this table from rows to columns?

I pass a function into the view via the variable ($monthly_receivable) to get the value for each month and render the result of that function, but the process is painfully slow. The sample covers only January data, but the report can span a range of months, for example January 2016 to December 2019.

When I remove that $monthly_receivable variable, rendering is smooth.

This is my code from the server side to the view (client side):

controller

public function notes_receivable_summary($start_date, $end_date) {
    $loans_list = $this->db->query("SELECT
        borr_name,
        co_borrower,
        date_released,
        due_from,
        due_to,
        pn,
        no_months,
        loan_ref,
        loan_id,
        pn
        FROM v_borrowers_nr dd
        WHERE (df BETWEEN '$start_date' AND '$end_date')
        ORDER BY loan_ref")->result();
    $months = $this->db->query("SELECT dd FROM v_months_nr WHERE dd BETWEEN '$start_date' AND '$end_date'")->result();

    $data['monthly_receivable'] = function ($date, $loan_ref, $loan_id) {
        $enc_url = explode('', $this->main_model->encdec($this->uri->segment(2), 'd'));
        $s_date = $enc_url[1];
        $e_date = $enc_url[2];
        $sd = date('Y-m-d', strtotime('-1 month', strtotime($s_date)));
        $ed = $e_date;

        // One database round trip per call; the view calls this once per cell
        $q = $this->db->query("SELECT * FROM f_monthly_rcvble('$loan_ref', $loan_id, '$start_date', '$end_date', '$date')")->row();
        return $q;
    };
    $this->load->view('pages/ajax/reports/sample_nr', $data);
}

view

"Jan", 2 => "Feb", 3 => "Mar", 4 => "Apr", 5 => "May", 6 => "Jun", 7 => "Jul", 8 => " Aug,, 9 => Sep Sep,, 10 => Okt Oct,, 11 => Nov Nov,, 12 => # & # 39 ;, 39; Dec);?>
  dd)). & # 39; - & # 39 ;. Date (& # 39; M Y & # 39 ;, Strtotime ($ months)[count($months) - 1]-> dd)): zero; ?>
  dd); ?>
<th class = "table header text center font strong Colspan = 10">
              
              
              
              <th class = "Table header font-strong ">
Current destination
<th class = "table header font-strong amt_pd ">
Actual collection
<th class = "Table header font-strong ">
UA / SP
<th class = "Table header font-strong ">
Overdue Target UA / SP
<th class = "Table header font-strong ">
Current Collection UA ​​/ SP
<th class = "Table header font-strong ">
Overdue Balance UA / SP
<th class = "Table header font-strong ">
Advance payment
<th class = "Table header font-strong ">
OB closed
<th class = "Table header font-strong ">
Early full payments
<th class = "Table header font-strong ">
adjustments
borr_name); ?> <? php if (count ($ name) < 3): ?> loan_ref; ?> loan_id; ?> dd, $ lref, $ lid); ?>
loan_ref; ?>
Mitausleiher); ?>
published date)); ?> due_from)); ?> due to)); ?> pn, 2); ?> no_months, 0); ?> amount_due, 2); ?> actual_collection, 2); ?> col_ua_sp, 2); ?> past_due_target_ua_sp, 2); // Overdue destination UA ​​/ SP?> past_due_collection_tot_ua_sp, 2); ?> past_due_balance, 2); // overdue target balance?> advanced_payment, 2); ?> ob_closed, 2); ?> prepaid_payments, 2); ?> Adjustments, 2); ?>
Ref. No. Surname Mitkreditnehmer Release date From To Terms of Use
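If the backend is PostgreSQL (the set-returning f_monthly_rcvble function suggests it), a commonly faster approach is to stop calling the function once per cell from PHP and instead fetch every loan/month value in a single set-based query, then index the rows in the view. A sketch, assuming LATERAL support (PostgreSQL 9.3+); the :start_date / :end_date placeholders stand in for your bound parameters:

```sql
-- One result set with a row per (loan, month) pair, instead of one
-- round trip per table cell.
SELECT b.loan_ref,
       b.loan_id,
       m.dd AS month,
       f.*                      -- all columns returned by f_monthly_rcvble
FROM v_borrowers_nr b
CROSS JOIN v_months_nr m
CROSS JOIN LATERAL
     f_monthly_rcvble(b.loan_ref, b.loan_id, :start_date, :end_date, m.dd) AS f
WHERE b.df BETWEEN :start_date AND :end_date
  AND m.dd BETWEEN :start_date AND :end_date
ORDER BY b.loan_ref, m.dd;
```

In PHP, loop over this result once and store each row in an array keyed by loan and month (e.g. $rows[$r->loan_id][$r->month] = $r); the view then reads each cell from that array instead of issuing a query per cell.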


PHP – How do I create a table with inputs in Laravel and Ajax?

I am creating a table that is built by clicking a button. The value of an input is used in a query and sent to the controller with Ajax.

controller

public function table(Request $request) {

    if ($request->ajax()) {
        $order = $request->get('order');

        $payments = DB::table('worksheet as ht')
            ->join('service_reparation as sr', 'ht.service_reparation', '=', 'sr.clave')
            ->join('catalogo_servicio as cs', 'sr.servicio', '=', 'cs.clave')
            ->select('ht.no_order', 'cs.service_key', 'cs.price', 'cs.type')
            ->where('no_order', '=', $order)
            ->get();
        dd($payments);
        foreach ($payments as $pa)
        {
            $output .= '<tr>
                <td>' . $pa->no_order . '</td>
            </tr>';
        }

        $data = array(
            'table_data' => $output
        );

        echo json_encode($data);
    }
}

jquery and ajax

         

In my blade I only have one table with the respective th elements; my route is a POST:
Route::post('note.create', 'NotePagoController@table')->name('notePagocontroller.add');

I do not know what's wrong; I don't think the query itself is wrong. Thanks.

How can I add student marks and shape the results table with PHP?

Each record is successfully inserted into the results table, but the layout of the results table in phpMyAdmin is very different from what I expected when I insert the student_code, subject_code, subject_name, and marks.
I have six subjects, and what I expected is that every student would have all of their marks in just one row. What I get now is this:

    student_code  subject_code  subject_name   marks
    01            01            Kiswahili      50
    01            02            English        50
    01            03            Hisabati       40
    01            04            Huji           44
    01            05            Stadi Za Kazi  36
    01            06            Sayansi        42

but this is what I want to see in phpMyAdmin:

You can see here that each student has all of their marks in one row, unlike above.

STUDENT_CODE KISWAHILI ENGLISH HISABATI HUJI STADI ZA KAZI SAYANSI
1 50 50 40 44 36 42
2 48 46 36 46 44 42
3 48 40 44 42 45 38
4 50 50 36 44 40 38
5 42 50 40 42 38 46

Here is my result table structure

    1  student_code  varchar(250)
    2  subject_code  varchar(250)
    3  subject_name  varchar(250)
    4  marks         int(10)

Here's my subject table with the subject name and subject code already inserted

subject_code subject_name
1 Kiswahili
2 English
3 Hisabati
4 Huji
5 Stadi Za Kazi
6 Sayansi

Is there any way I can add marks for all six subjects at a time, instead of entering and submitting marks for one subject, then moving to the next, until all six subjects are complete, and then doing the same for the next student? I ask because there are 400 students and each student takes all six subjects, so it should be possible to use a loop.
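Two sketches that may help, assuming the results table is named results and the subjects table subjects (adjust names to your schema). The first inserts all six marks for one student in a single statement, which you can generate in a loop per student; the second produces the one-row-per-student layout at query time with conditional aggregation, without changing how the marks are stored:

```sql
-- Insert all six marks for one student in a single statement
INSERT INTO results (student_code, subject_code, subject_name, marks) VALUES
('01', '01', 'Kiswahili', 50),
('01', '02', 'English', 50),
('01', '03', 'Hisabati', 40),
('01', '04', 'Huji', 44),
('01', '05', 'Stadi Za Kazi', 36),
('01', '06', 'Sayansi', 42);

-- Pivot: one row per student, one column per subject
SELECT r.student_code,
       MAX(CASE WHEN r.subject_name = 'Kiswahili'     THEN r.marks END) AS kiswahili,
       MAX(CASE WHEN r.subject_name = 'English'       THEN r.marks END) AS english,
       MAX(CASE WHEN r.subject_name = 'Hisabati'      THEN r.marks END) AS hisabati,
       MAX(CASE WHEN r.subject_name = 'Huji'          THEN r.marks END) AS huji,
       MAX(CASE WHEN r.subject_name = 'Stadi Za Kazi' THEN r.marks END) AS stadi_za_kazi,
       MAX(CASE WHEN r.subject_name = 'Sayansi'       THEN r.marks END) AS sayansi
FROM results r
GROUP BY r.student_code;
```

Keeping one row per (student, subject) in storage and pivoting only for display is usually preferable to a six-column table, since adding a subject then requires no schema change.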

Google Apps script – Copy and paste specific data from a document into a specific cell in a table

I use Google Apps Script. I have several Google Docs with information organized under specific category headings. I'm trying to create a Google Sheet that gathers all the information from the different categories, if that makes sense. I want to automate the process of copying the right information from under each heading and pasting it into the correct cell of the spreadsheet. I have very little background in Apps Script (I attended Ben Collins' Blastoff class) and I'm not sure where to start. I think I need to set it up so that all data after a specific heading in the document is copied until another defined heading is detected. I know how to get the right sheet and document, and I can get the script to log all the text from the Google Doc, but I don't know how to select particular blocks with the script.

PostgreSQL triggers to track table changes

I'm trying to create a trigger (Postgres 9.6) to track changes to a table. That's my approach:

CREATE OR REPLACE FUNCTION taxon_history() RETURNS trigger AS
$BODY$
BEGIN
    IF TG_OP = 'DELETE' THEN
        INSERT INTO history.taxon (operacao, "data", tecnico, original_oid, taxon)
        VALUES ('DELETE', current_timestamp, current_user, old.oid, old.taxon);
        RETURN old;

    ELSIF TG_OP = 'UPDATE' THEN
        INSERT INTO history.taxon (operacao, "data", tecnico, original_oid, taxon)
        VALUES ('DELETE', current_timestamp, current_user, old.oid, old.taxon);
        RETURN old;

    ELSIF TG_OP = 'INSERT' THEN
        INSERT INTO history.taxon (operacao, "data", tecnico, original_oid, taxon)
        VALUES ('INSERT', current_timestamp, current_user, new.oid, new.taxon);
        RETURN old;
    END IF;
END;
$BODY$
LANGUAGE plpgsql;

CREATE TRIGGER history_taxon
AFTER INSERT OR UPDATE OR DELETE ON taxon
FOR EACH ROW EXECUTE PROCEDURE taxon_history();

However, when something changes in the taxon table, no record is added to the history.taxon table. I also get no error message, so I'm in the dark about why nothing happens. What am I doing wrong?
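Two things worth checking in the function above, sketched here as a corrected version (schema and column names taken from the question): the UPDATE branch inserts the literal 'DELETE' instead of 'UPDATE', and the INSERT branch returns old, which is not assigned for an INSERT row trigger:

```sql
CREATE OR REPLACE FUNCTION taxon_history() RETURNS trigger AS
$BODY$
BEGIN
    IF TG_OP = 'DELETE' THEN
        INSERT INTO history.taxon (operacao, "data", tecnico, original_oid, taxon)
        VALUES ('DELETE', current_timestamp, current_user, OLD.oid, OLD.taxon);
        RETURN OLD;
    ELSIF TG_OP = 'UPDATE' THEN
        INSERT INTO history.taxon (operacao, "data", tecnico, original_oid, taxon)
        VALUES ('UPDATE', current_timestamp, current_user, OLD.oid, OLD.taxon);  -- was 'DELETE'
        RETURN NEW;
    ELSIF TG_OP = 'INSERT' THEN
        INSERT INTO history.taxon (operacao, "data", tecnico, original_oid, taxon)
        VALUES ('INSERT', current_timestamp, current_user, NEW.oid, NEW.taxon);
        RETURN NEW;  -- OLD is not assigned in an INSERT trigger
    END IF;
    RETURN NULL;
END;
$BODY$
LANGUAGE plpgsql;
```

Note that for an AFTER trigger the return value is ignored, so the RETURNs mainly matter for correctness of the references; it is also worth confirming the trigger was actually created on the table (check \d taxon in psql).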

Can I use the NRPT table on Windows Server 2012 R2 to override the default DNS server settings for specific domains?

I have a Windows Server 2012 R2 machine joined to a domain, and I need to point at a non-default DNS server to resolve names under a specific internal domain that AD cannot resolve. Is it possible to use the NRPT (Name Resolution Policy Table) in Local Computer Policy for this? I tried adding a generic DNS server rule, but it did not seem to work.

PostgreSQL – Postgres – Vacuum never finishes on a large/busy table

We have a fairly active PG database hosted on AWS. Recently we received notifications like the following:

    The age of the transaction ID reached 750 million. Auto vacuum parameter values for [autovacuum_vacuum_cost_limit, autovacuum_vacuum_cost_delay, autovacuum_naptime] are updated to make the automatic vacuum more aggressive.

I also noticed that disk usage for this particular table was rapidly increasing. Here is the space used:

[
  {
    "oid": "16413",
    "table_schema": "public",
    "table_name": "connections",
    "row_estimate": 1.01476e+07,
    "total_bytes": 518641270784,
    "index_bytes": 478458511360,
    "toast_bytes": 30646272,
    "table_bytes": 40152113152,
    "total": "483 GB",
    "index": "446 GB",
    "toast": "29 MB",
    "table": "37 GB"
  }
]

Then, while analyzing something else, we found a long-running vacuum process (from 5 days ago):

[
  {
    "pid": 14747,
    "duration": "14:11:41.259451",
    "query": "autovacuum: VACUUM ANALYZE public.connections (to prevent wraparound)",
    "state": "active"
  }
]

(This was actually a new one, but it looked just like the previous one, which never finished.)

To confirm, I can see that the connections table has not been auto-vacuumed since the 15th, and there is a lot to clean up:

[
  {
    "relid": "16413",
    "schemaname": "public",
    "relname": "connections",
    "seq_scan": 19951154,
    "seq_tup_read": 226032655046,
    "idx_scan": 41705151351,
    "idx_tup_fetch": 375484186787,
    "n_tup_ins": 8029742,
    "n_tup_upd": 13217694302,
    "n_tup_del": 542670,
    "n_tup_hot_upd": 96750657,
    "n_live_tup": 10237553,
    "n_dead_tup": 887751401,
    "n_mod_since_analyze": 350036721,
    "last_vacuum": null,
    "last_autovacuum": "2019-06-15 17:05:51.526792+00",
    "last_analyze": null,
    "last_autoanalyze": "2019-06-15 17:06:27.310486+00",
    "vacuum_count": 0,
    "autovacuum_count": 4190,
    "analyze_count": 0,
    "autoanalyze_count": 4165
  }
]

I've read a lot about configuring autovacuum_vacuum_scale_factor and autovacuum_analyze_scale_factor differently for very active tables. That's all good, but it doesn't look like the vacuum ever gets through when it runs.

I also read about tuning autovacuum_vacuum_cost_limit and autovacuum_vacuum_cost_delay to make it more aggressive in the work it needs to do.

I've tried to change some of these for the table, but it just hangs when I try to write the values for that particular table.

What is the best way to vacuum the table?

Also, would restarting the database affect all of this?
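For reference, per-table autovacuum overrides and a manual vacuum can be expressed as below; this is a sketch, and the parameter values are illustrative, not recommendations. Note that ALTER TABLE ... SET needs a brief exclusive lock, so it will block behind a running vacuum on the same table:

```sql
-- Per-table storage parameters override the global autovacuum settings
ALTER TABLE public.connections SET (
    autovacuum_vacuum_scale_factor = 0.01,   -- trigger vacuum at ~1% dead tuples
    autovacuum_vacuum_cost_limit   = 2000,   -- do more work per cost cycle
    autovacuum_vacuum_cost_delay   = 2       -- sleep less between cycles
);

-- Manual vacuum with per-step output
VACUUM (VERBOSE, ANALYZE) public.connections;

-- Watch a running vacuum from another session (available since 9.6)
SELECT * FROM pg_stat_progress_vacuum;
```

Given that the indexes (446 GB) dwarf the table (37 GB), most of the vacuum time is likely spent in index cleanup, which pg_stat_progress_vacuum will show as the vacuuming-indexes phase.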

SQL Server index seek reads the entire table, depending on the parameter value

I have a query:

SELECT Id,
ColumnA,
ColumnB
FROM MyTable
WHERE ColumnA = @varA OR
ColumnB = @varB  

The table is defined as

CREATE TABLE myTable
(
    ID INT IDENTITY (-2147483648, 1) PRIMARY KEY,
    ColumnA VARCHAR(22),
    ColumnB VARCHAR(22)
)

and the table contains a non-clustered index

CREATE INDEX IX_MyIndex ON MyTable
(
ColumnA
)

When I run the query with the following parameters:

DECLARE @varA nvarchar(4000) = ''
DECLARE @varB nvarchar(8) = '10140730'

The execution plan shows an index seek on IX_MyIndex; however, the number of rows read is displayed as 17 million, while the actual number of rows is 0. (In MyTable.ColumnA there are 0 rows with the value ''.)
When I turn on SET STATISTICS IO, I can see that the whole table is read.

This is understandable, as described in the "Index Seek" section of the article I was reading.

However, if I execute the same query with the parameters:

DECLARE @varA nvarchar(8) = 'a'
DECLARE @varB nvarchar(8) = '10140730'

The seek operator has no "Number of Rows Read" property (there are 0 rows in MyTable.ColumnA with the value 'a'), and SET STATISTICS IO reports logical reads in the single digits.

Incidentally, the plan has an implicit conversion warning, and the problem disappears if I change the query as follows:

SELECT Id,
    ColumnA,
    ColumnB
FROM MyTable
WHERE ColumnA = CONVERT(VARCHAR(22), @varA) OR
    ColumnB = CONVERT(VARCHAR(22), @varB)

Or if I change the underlying columns to NVARCHAR.

I'm curious, however, why the index behaves differently for the two different @varA values, even though both return the same number of rows from the table (0).
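Besides the CONVERT workaround already shown, the simplest way to avoid the dynamic range seek caused by the implicit NVARCHAR-to-VARCHAR mismatch is to declare the parameters with the columns' own type. A sketch of the same query with matching types:

```sql
-- Parameters declared as varchar(22), matching the column definitions,
-- so no implicit conversion is needed and the seek stays a point lookup.
DECLARE @varA varchar(22) = '';
DECLARE @varB varchar(22) = '10140730';

SELECT Id, ColumnA, ColumnB
FROM MyTable
WHERE ColumnA = @varA
   OR ColumnB = @varB;
```

With nvarchar parameters, SQL Server must convert the varchar column values to nvarchar (collation-dependent), so it builds a computed seek range over the index; how much of the index that range covers can differ from one parameter value to another, which is consistent with the two plans you observed.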