index – SQL Server indexes and deleting records

I have inherited a large production database in which one of the tables contains 72 million rows and is inserted into / updated thousands of times a day. This data has built up over 3 years and the table is currently growing by about 2 million rows every 4 weeks.

There are 7 indexes (3 heavily used and only slightly fragmented / 4 mostly unused but highly fragmented at around 75-85%, on columns that are heavily updated). There was a stored procedure that was supposed to archive record stubs into another table, but we had to disable it because it interfered with the backups and caused failovers and failures on another system.

Ideally we only want to keep 1 year's worth of data in this table (about 25-30 million rows), and I want to disable the 4 unused indexes. I have read that they should be rebuilt before being disabled, but I am not sure whether to do this before or after archiving the oldest 2 years of data.
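
For reference, the kind of thing I was planning to run is roughly the following (a rough, untested sketch; the table, column and index names are just placeholders):

-- Disable one of the unused nonclustered indexes.
-- The definition is kept, so it can be rebuilt/re-enabled later.
ALTER INDEX IX_UnusedIndex ON dbo.BigTable DISABLE;

-- Delete (or archive) rows older than 1 year in small batches
-- to keep blocking and transaction log growth manageable.
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (5000) FROM dbo.BigTable
    WHERE CreatedDate < DATEADD(YEAR, -1, GETDATE());
    SET @rows = @@ROWCOUNT;
END;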

A redesign has been planned for a few years, but I am now a team of 1 with limited knowledge of indexing / SQL tuning, etc. and trying to do my best with very limited resources.

The table is heavily used between 7am and 1am, so most of the work needs to be done overnight or in the early morning to limit downtime for users.

Suggestions would be welcome. Thank you very much.

App Windows – QILING Disk Master Professional / Technician / Server 4.7.6 Build 20191114 Multilingual | NulledTeam UnderGround

File size: 25.85 / 25.85 / 25.85 MB

The biggest nightmares for a computer user are data loss and system crashes. To guard against them, reliable and up-to-date backups are essential.

QILING Disk Master covers everything needed to recover lost data and restore a crashed system in minutes. It is advanced and reliable data protection and system disaster recovery software for home and business desktops and laptops. It lets users perform self-service full / differential / incremental backups, and it also includes virtual disk, backup and disk health verification tools. With it you can do the following:
– The Virtual Hard Disk Utility simulates a real hard disk to prevent data loss and enable a more convenient software test environment.
– With the Ramdisk function users can increase the speed of the PC.
– With full system protection, you can easily back up and restore your entire operating system in an emergency.
– Migrate the system quickly, easily and securely to an SSD or larger hard drive to replace or update the hard drive.
– System backup and protection (imaging)
– File backup and restore
– zip file
– File synchronization (server supported)
– Full, incremental and differential backup
– AES 256-bit encryption, compression and password
– One-click system backup
– Daily, weekly or monthly backup scheduler
– Perfect defragmentation
– Restoration of the bare-metal system
– Backup Strategy (Quota Management)
– Supports HDDs and SSDs of all sizes (80 GB to 4 TB)
– compression
– Deduplication
– E-Mail notification
– Hot clone
– Greater than 512-byte sector
– GPT & UEFI boot supported
– Hard disk / partition management
– Migrate the operating system to SSD / HD
– Delete data
– Paper shredder
The official site does not provide information about changes in this release

DOWNLOAD
Nitroflare

SQL Server – Why do we have Top N sorting in this example?

The following example shows that the output of the Index Spool is already sorted, so why do we get a Top N Sort here instead of a simple Top?

use tempdb;
go

create table dbo.t1 (id int identity primary key, v int);
create table dbo.t2 (id int identity primary key, v int);

insert into dbo.t1
(v)
select top (1000)
row_number() over (order by 1/0)
from
master.dbo.spt_values a cross join
master.dbo.spt_values b;

insert into dbo.t2
(v)
select top (10000)
row_number() over (order by 1/0) + 10000
from
master.dbo.spt_values a cross join
master.dbo.spt_values b;

set statistics xml, io on;

select
sum(a.v + b.v)
from
dbo.t1 a outer apply
(select top (1) v from dbo.t2 where v >= a.v order by v) b;

set statistics xml,io off;
go

drop table dbo.t1, dbo.t2;
go

(Execution plan screenshot)

dmz – Separate server between web server and database server

Possibly a basic question; I'm a developer, not a security expert.

AFAIK, a very common website configuration is to have one or more web servers behind a firewall and load balancer, with the web servers sending requests to a database server over an internal network (not accessible from outside).

A security adviser has now told me that this is not secure: if a hacker gains access to a web server, he could then somehow send requests to the database server.

They recommend an additional server between the web servers and the database server: the web servers send requests to the additional server, which forwards them to the database server. The database server accepts requests only from the additional server, not from the web servers.

I'm a bit confused as to why this extra server makes the system more secure. Since the web servers need to access the database to process HTTP requests anyway, does it really make a difference whether the additional server is used?

SQL Server – How do I avoid tons of lower (column) in the WHERE clause for multiple LIKE comparisons?

I need to filter data on a string column, comparing the column with NOT LIKE against a series of strings. I am using SQL Server. At the moment my code looks like this:

SELECT history.date,
       history.new_value,
       res.resource_name
FROM   item_history history, 
       owned_resource   res
WHERE  (history.attr_name = 'resource_contact' OR history.attr_name = 'Kontakt-UUID')
       AND res.inactive = 0 
       AND history.com_par_id = res.own_resource_uuid
       AND lower(history.new_value) NOT LIKE '%tp für%'
       AND lower(history.new_value) NOT LIKE '%rücklieferung%'
       AND lower(history.new_value) NOT LIKE '%rückläufer%'
       AND lower(history.new_value) NOT LIKE '%stoerreserve%'
       AND lower(history.new_value) NOT LIKE '%zentrallager%'
       AND lower(history.new_value) NOT LIKE '%bhs-pool%'
       AND lower(history.new_value) NOT LIKE '%lager halle%'        
       AND lower(history.new_value) NOT LIKE '%lager logistik%'
       AND lower(history.new_value) NOT LIKE 'reserve %'
       AND lower(history.new_value) NOT LIKE '%igeko%bhs%'
       AND lower(history.new_value) NOT LIKE '%service%ecg%'
       AND lower(history.new_value) NOT LIKE '%multifunktionsdrucker%'
       AND lower(history.new_value) NOT LIKE 'nn%gisa%raum%'
       AND lower(history.new_value) NOT LIKE '%citrix%admins%'
       AND lower(history.new_value) NOT LIKE '%personalwesen%'
       AND lower(history.new_value) NOT LIKE '%etagendrucker%'
       AND lower(history.new_value) NOT LIKE '%schulungsraum%'
       AND lower(history.new_value) NOT LIKE '%team%raum%'
       AND lower(history.new_value) NOT LIKE  '%beratungsraum%'
       AND lower(history.new_value) != 'reserve'

I suspect the performance is not the best because lower() keeps being called. And as a programmer, seeing so much redundant code makes me cringe.
Unfortunately, I have not found a good way to use a variable or similar.
(I'd like to add that I can NOT just add a computed column, which would otherwise be a good option, since I am only authorized to read data.)
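
For illustration, this is the sort of thing I was hoping for: put the search strings into one list and test them with a single NOT EXISTS (an untested sketch, with only a few of the patterns shown):

SELECT history.date,
       history.new_value,
       res.resource_name
FROM   item_history history
       JOIN owned_resource res
         ON history.com_par_id = res.own_resource_uuid
       -- compute the lowered value once per row
       CROSS APPLY (SELECT LOWER(history.new_value) AS val) AS lv
WHERE  (history.attr_name = 'resource_contact' OR history.attr_name = 'Kontakt-UUID')
       AND res.inactive = 0
       AND lv.val <> 'reserve'
       AND NOT EXISTS (SELECT 1
                       FROM (VALUES ('%tp für%'),
                                    ('%rücklieferung%'),
                                    ('%zentrallager%')) AS p(pattern) -- ...remaining patterns here
                       WHERE lv.val LIKE p.pattern);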

Does anyone have a good idea to make the code smarter? Thank you in advance!

Jana

VirtualBox LAMP server to Proxmox leads to the disappearance of Webroot

Hi guys! First time here.
Hopefully I'm in the right place; please move this thread if not.

When I try to transfer my fully-functional website hosted on my Virtualbox VM LAMP server to a Proxmox installation, some problems occur.
Let me explain.
I have had my first website up for about two months. I run it on an Ubuntu Server 18.04 LAMP stack in a VirtualBox VM on Windows 10. I have my own GoDaddy-signed certificates, so my site has HTTPS. I wanted to transfer this to Proxmox installed on another PC.
I've copied my .VDI file to my Proxmox machine via WinSCP; everything works fine and boots properly, but the problem is that my webroot directory VANISHES. Everything in /var/www/website/ has disappeared, and even stranger, when I check permissions on the "website" directory, instead of being owned by www-data it is now owned by root, but it is empty. My WordPress installation is gone.

I also tried uploading my webroot directory directly to the Proxmox VM, but even if I change permissions, the directory listed in default-ssl.conf is not used and eventually the default index.html is served instead.
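
For reference, these are roughly the commands I used to check and try to fix things on the Proxmox VM (paths assume the standard Ubuntu/Apache layout):

ls -ld /var/www/website
grep -i DocumentRoot /etc/apache2/sites-available/default-ssl.conf
sudo chown -R www-data:www-data /var/www/website
sudo systemctl reload apache2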

What I just thought of, which is what led me to the apache2 forum, is this: when I created my key file to generate my certificates, were they bound to THAT MACHINE? Would cloning the VM and using it on another PC break the default-ssl.conf setup?

I have already posted in the Proxmox forum at the following address:
https://forum.proxmox.com/threads/m…ss-installation-to-proxmox.60138/#post-277364

but I think this could be a certificate problem with Apache2. I was really hoping someone here knows.

SQL Server – alternative to JOIN with BETWEEN

I have to join two tables based on the BETWEEN condition.

Table1 is small, about 1,500 records, and Table2 has 40 million records. Table1 has only one column, of data type bigint, and Table2 has 8 columns. I have to join these two tables on a BETWEEN condition.

I have tried the following, but it is slow even for just 1 record in Table1 against the 40 million in Table2.

Query:

SELECT t1.cola AS InputValue, t2.cola, t2.colb, t2.colc, t2.cold, t2.cole
FROM table2 t2 
INNER JOIN table1 t1 ON t1.cola BETWEEN t2.cola AND t2.colb;

Indexing:

  1. CREATE NONCLUSTERED INDEX NCIX_Table1_Cola ON table1(cola)
  2. CREATE NONCLUSTERED INDEX NCIX_Table2_Col_a_b ON table2(cola,colb)

The above query took 30 seconds for only 1 record in Table1 against 40 million in Table2. As I said, there will be more than 1,500 rows in Table1, so it slows down even further. Do I need an alternative to BETWEEN, or better indexing?

Edit: added sample data.

Table 1:

cola
---------------
12
145
34
90
88990
987611
55
...
..
......1500 rows

Table 2:

cola    colb    colc    cold    cole
-------------------------------------
0       10      c1      d1      e1
10      20      c2      d2      e2
21      40      c3      d3      e3
41      60      c4      d4      e4
61      100     c5      d5      e5
101     1000    c6      d6      e6
1001    10000   c7      d7      e7
10001   200000  c8      d8      e8
...... 
......40 million records

Expected result:

InputValue  cola    colb    colc    cold    cole
--------------------------------------------------
12          10      20      c2      d2      e2
145         101     1000    c6      d6      e6
34          21      40      c3      d3      e3
.....
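
For illustration, this is the kind of rewrite I have been wondering about (a rough, untested sketch; it assumes the ranges in Table2 do not overlap):

SELECT t1.cola AS InputValue,
       ca.cola, ca.colb, ca.colc, ca.cold, ca.cole
FROM   table1 t1
CROSS APPLY (SELECT TOP (1) t2.cola, t2.colb, t2.colc, t2.cold, t2.cole
             FROM   table2 t2
             WHERE  t2.cola <= t1.cola
             ORDER  BY t2.cola DESC) ca
WHERE  t1.cola <= ca.colb;

-- supporting index so each seek + TOP (1) stays cheap
CREATE NONCLUSTERED INDEX NCIX_Table2_Cola_Covering
    ON table2 (cola) INCLUDE (colb, colc, cold, cole);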

ffmpeg – Saving large videos on the server

In my application, the client uploads relatively large videos; the average is 500 MB. I need to encode the video in the backend to reduce its size, and I am trying to use ffmpeg for that. However, ffmpeg takes about 10 minutes per video, and I need to be able to deliver the video to the client immediately, or at most a few seconds after the upload completes.
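
For context, the encoding step I run is roughly the following (treat the exact codec and flags as illustrative rather than my final settings):

ffmpeg -i input.mp4 -c:v libx264 -preset veryfast -crf 23 -c:a aac -b:a 128k output.mp4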

What would be the best course of action in this context?

Windows Server 2016 – How can I create a primary zone that is NOT integrated with Active Directory?

I'm managing the DNS server on a clean install of Windows Server Core using PowerShell.

After adding a primary zone backed by a zone file,

Add-DnsServerPrimaryZone -ZoneName "example.com" -ZoneFile "example.com.dns" -verbose -passthru;

the output mentions "AD integrated", as shown in the following messages:

VERBOSE: Adding DNS primary (AD integrated/file-backed forward/reverse lookup) zone example.com on DNS1 server.
VERBOSE: AllowUpdate successfully set on server DNS1.

ZoneName                            ZoneType        IsAutoCreated   IsDsIntegrated  IsReverseLookupZone  IsSigned
--------                            --------        -------------   --------------  -------------------  --------
example.com                         Primary         False           False           False                False

The backing zone file is indeed created at C:\Windows\System32\dns\example.com.dns

So, does this message mean that the zone is already integrated into AD (or just that it could be)? And if it is already integrated, how can I create the zone without AD integration?

When I google for an answer, I keep finding articles about Active Directory-integrated DNS zones. However, I cannot find information on how to manage the zone outside of AD or disconnect it from AD.

My impression is that the -ZoneFile parameter should keep the zone out of AD and create the DNS file on disk. However, I am not sure how to confirm that the zone is actually not in AD.
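
So far the only check I have found is to query the zone again and look at IsDsIntegrated, e.g.:

Get-DnsServerZone -Name "example.com" | Select-Object ZoneName, ZoneType, IsDsIntegrated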

tls – The customer wants us to use his SSL certificate on our server

Not a sysadmin here, so please forgive my ignorance.
I read the suggested Q / A here:

My question is:
How do I get an SSL certificate from my client (even if it's created to work with multiple names) and apply it to my server?

Here is my situation.
We have an AWS instance / database that serves a site that we created for a client.
The site is located at subdomain.ofourdomain.com, and we have our own SSL certificate.
The customer has created a CNAME that points to our site/domain name:

subdomain.ofcustomerdomain.com -> subdomain.ofourdomain.com

The browser shows a certificate warning when the site is accessed via the CNAME URL.

The customer asked if he could send me his certificate.

I do not know how that would be done. I have never had a certificate sent to me.

Should I go back to the accepted solution in the linked question and ask them to create a certificate covering both of our domains? If so, how would it be used, since it is my server?
Thank you very much
James