Remote connection – Windows Server 2012 R2 server logs me out at random

I have tried many solutions, but nothing worked, so I am posting this question.
A Windows Server 2012 R2 server randomly logs me off with Event ID 7002,
"User Experience Improvement Program User Logout Notification", after I disconnect a session using a Remote Desktop Connection. I have already tried disconnecting an RDP session (with Windows Server 2012 R2) without the user being logged off, as suggested in this link. I also tried changing the lock screen and screen saver settings to keep the user logged in; unfortunately, that did not work either. Is there any way to keep the user logged in despite this Event ID 7002?

Reseller provider does not provide logs

Recently, our IP was null-routed by a well-known reseller hosting company. The company did not provide any data about it, and I was charged in full. Is that fair? I need your opinions.

Please check the help desk conversation:

Me: The server is not available. Problem loading the page, please help.
Reseller company: Please explain. All server services are running normally …

Me: No, please check the websites on the server
Reseller company: Your dedicated IP address was null-routed due to a massive DDoS attack on various domain names hosted on that IP.
We have removed the null route, and the websites should be back online in the next 10-15 minutes.

Me: I did not receive any information about this. Please help. Let me know what to do.
Reseller company: Using Cloudflare for all your domain names is a good place to start, as it hides the actual server IP.

Me: Immediately moving all domains to Cloudflare is not possible. Also, the domains are not registered with me. My customers are screaming, and I can find no complaints about these server domains in the last 4 months. Please help.
Reseller company: How exactly are we supposed to help? You do realize that your domains are being attacked from the outside, right? This is our company policy: we will not route the IP address again until the attack is over. Otherwise the entire server, including your domains, would go down.
Of course, once the attack stops, we will remove the block.

Me: Is it a DDoS attack on your network? Can you please tell me the cause of the problem? When will it be solved? We have been down almost 5 hours. I cannot handle the calls!
Reseller company: I did not say that … Do not play coy with me … I told you ONLY YOUR IP IS NULL-ROUTED!!! The block has been removed. If the attack restarts, your IP address will be null-routed again.

Me: What kind of answer is that? How do I know that my IP is being attacked, and what can I do without any clues? The websites are down. I am trying to help my customers.
Reseller company: How do we know it's your IP? Are you joking? We have server logs showing that all the DDoS attacks target ONLY your IP address. This is one of the reasons we give you a dedicated IP address: so that in cases like this we know who is attacking whom.
To help your customers, you need a CDN. That's why I suggested Cloudflare!

This is a response to someone trying to blame our network for the fact that your clients' sites are being attacked! And that is the ONLY answer you will receive!

Me: Please provide the logs.
Reseller company: The WHM functions have been re-enabled. If you have further questions, please do not hesitate to contact us.

Me: All my clients' websites are still down. I am dissatisfied with this kind of help. Please help.
Reseller company: I'm not happy that some of your customers are compromising the stability and security of the entire server, which hosts about 1,500 accounts! I can guarantee that next time your IP address will be null-routed and not restored. The attack seems to have stopped. The IP has been restored and the websites are operational again.

Me: You still have not provided any logs. How am I supposed to act? Can you understand that? At least tell me what I have to do!
Reseller company: I have told you at least twice that you need to use Cloudflare for your domain names so this does not happen! I cannot put it any other way if you do not understand it. I'm sorry. We will not provide you with logs, because we deleted them: the attack had inflated the Apache log to gigabytes …

Email provider that allows SMTP without a prior POP3 check, authenticating with a login when sending via SMTP


I am looking for an e-mail provider for my brother, who has bought a franchise that comes with a website and an e-mail address (neither of which he can control).
So he got the email address and password, hosted at Daily until TSOHost took over.
But now TSOHost has problems delivering to Hotmail / Outlook email addresses.

So, from another domain, he replies to mail received at the franchise address, which is set up in Thunderbird.

But I'm not sure which providers allow SMTP without a POP3 check first, requiring a login when sending via SMTP instead.
Could paid Gmail be used for something like that?
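For what it's worth, most modern providers rely on SMTP AUTH (a login at send time) rather than POP-before-SMTP. A minimal sketch of authenticated submission with Python's smtplib; the host, port, addresses, and credentials below are placeholders, not details from the question:

```python
from email.message import EmailMessage
import smtplib

def build_message(sender, recipient, subject, body):
    """Build a simple RFC 5322 message."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def send_authenticated(msg, host, port, user, password):
    """Send via SMTP AUTH over STARTTLS -- no POP-before-SMTP step needed."""
    with smtplib.SMTP(host, port) as smtp:
        smtp.starttls()  # upgrade the connection to TLS before logging in
        smtp.login(user, password)
        smtp.send_message(msg)

if __name__ == "__main__":
    msg = build_message("franchise@example.com", "customer@example.com",
                        "Test", "Hello from the franchise address.")
    # Placeholder server details; uncomment with real credentials:
    # send_authenticated(msg, "smtp.example.com", 587, "franchise@example.com", "secret")
```

Any provider offering authenticated submission on port 587 works this way, including paid Gmail, so POP3-before-SMTP should not be a requirement.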

Merge-SPLogFile returns no records, but entries are in SharePoint logs

When I run the following Merge-SPLogFile command, a warning is issued:

WARNING: The cmdlet did not return any records in the log file. Check your
time range or filter.

Merge-SPLogFile -Correlation 2816f89e-8451-7054-1584-ad125aa03b3 -Path D:\Log.txt

But when I check the SharePoint logs, there are messages for the same correlation ID. I exported the logs with the ULS Viewer, but in my case there are several SharePoint web and application servers.

A few months ago this returned records for me, but now it no longer works.

logging – Nginx: logging the full proxy_pass destination URI

Given the following location with proxy_pass, I want to log the fully qualified URI.

location ~* ^/?(foo/?.*$) {

For example, I would like to forward https://…?page=5 to …, which I believed was working; I think something else later in the chain drops the query string.

So I want to dump the URI that was actually forwarded into the log file. I tried
$uri, $proxy_host and $upstream_addr, but I could not find anything that returns the fully qualified URI.

Instead, I get:

uri_path: /foo
uri_query: page=5

I'm really new to rewrite rules, coming from the Microsoft world. I've read the extensive nginx documentation and browsed the web, but I cannot find an answer to this. Any help would be appreciated. Many thanks.

Addendum: This is the only rule of 7 where I run into problems.
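Not an authoritative fix, but one way to make the forwarded URL visible is a custom log_format built from variables nginx does define ($scheme, $proxy_host, $request_uri, $upstream_addr are standard). The format name, log path, and the "backend" upstream below are placeholders, not taken from the question:

```nginx
# Sketch only: "proxied" and the backend upstream name are made-up placeholders.
# log_format must sit at http{} level; $request_uri keeps the query string,
# while $uri alone is the rewritten path without it.
log_format proxied '$remote_addr "$scheme://$proxy_host$request_uri" '
                   'upstream=$upstream_addr status=$status';

server {
    listen 80;
    access_log /var/log/nginx/proxied.log proxied;

    location ~* ^/?(foo/?.*$) {
        # Re-append the query string explicitly so it survives the rewrite.
        proxy_pass http://backend/$1$is_args$args;
    }
}
```

Note that $request_uri is the original client request (path plus query), so this only reconstructs the forwarded URL when the URI passes through essentially unchanged; it does, however, make a dropped query string immediately visible in the log.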

All transfers available logs / track1 & 2 dumps / ccv / ccv full info! Base.

  1. shreyansh5

    New member


    Welcome to Legit Skimmed / track1 & 2 dumps / ccv / ccv! Base…..
    95% of the dumps are approved; only fresh and recently skimmed point-of-sale dumps are sold.
    Dumps: random quantity price
    Update finished: 101, 201, NY, PA, MI, CA, TX, MS, AZ, FL.
    classic / standard / discover /
    gold / platinum
    bus / corp / sign
    amex Small_Corporate / Corporate / Centurion
    … :::: UK :::: ….
    Code 101 and 201
    Electron / Classic / Standard
    gold / plat / bus / corp / sign
    … :::: EU :::: …
    Code 101
    Electron / Classic / Standard
    gold / plat / bus / corp / sign
    … :::: CA :::: …
    Code 101 and 201:
    classic / standard / discover /
    gold / plat / bus / corp / sign
    Proof available; no time wasters
    Bank logs
    My mail:
    TELEGRAM: @Pro_maiden
    ICQ: 421043
    DISCORD: – • ™ _prince_ ™ • – # 5482
    Shipping items = free delivery and in-store pickup

  2. Sophielily

    New member

    July 22, 2019

seo – Huge differences between Webmaster Tools crawl statistics and Apache logs

We noticed a big difference in the metrics. For example, last Saturday:

  • According to Webmaster Tools:

    • 6,082 pages were crawled
    • 76,119 kilobytes downloaded
    • 1,550 ms on average to download a page
  • According to our Apache logs, for Googlebot:

    • 1,444 crawled pages (2,997 including all of our subdomains: blog, images, tracking, …)
    • On average, 201 ms (we use "%D" in the Apache log format to monitor this, which is the time to the last byte.)

We are not in the same time zone, so the results may vary slightly, but the trend is the same every day: 2 to 3 times more pages crawled according to Webmaster Tools, and 5 to 10 times more time to download the pages. It's similar on other domains as well.

I cannot find an explanation for this. Are there other user agents we need to monitor? What else?
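To cross-check the Webmaster Tools numbers, one can tally Googlebot hits directly from the access log. A minimal sketch, assuming a combined log format with the %D field appended at the end of each line (the field layout is an assumption, not taken from the question):

```python
import re

# Matches a combined-log line with the %D (microseconds) field appended last.
LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) (?P<bytes>\S+) "[^"]*" "(?P<agent>[^"]*)" (?P<micros>\d+)'
)

def googlebot_stats(lines):
    """Count Googlebot requests and average their %D time, in milliseconds."""
    count, total_micros = 0, 0
    for line in lines:
        m = LOG_RE.match(line)
        if m and "Googlebot" in m.group("agent"):
            count += 1
            total_micros += int(m.group("micros"))
    avg_ms = (total_micros / count / 1000) if count else 0.0
    return count, avg_ms
```

Keep in mind the user agent string can be spoofed, so genuine Googlebot traffic should also be verified by reverse DNS before comparing counts.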

My records are not updated in SQLite via my Python interface

Good morning, friends!
I turn to you again, this time about the interface I am building. In my database, the price_list table holds clinical studies with their price and the maquila cost (the relevant column is called estudios_clinicos). I created a delete button whose function works well: the table is refreshed and the study no longer appears. Now I have added another button to edit a study when needed, to update its prices or its full name. The program claims my record has been updated (a pop-up dialog box says so), but when I check in DB Browser nothing has changed. I also tried printing the values entered by the user to the terminal, but they come out empty. What am I doing wrong?

def run_query_1(self, query, parameters=()):  # Run a query against the price_list table
    with sqlite3.connect(self.db_lab) as conn:
        cursor = conn.cursor()
        result = cursor.execute(query, parameters)
        conn.commit()
    return result

# Data query
query = 'SELECT * FROM price_list ORDER BY id_folio DESC'
db_rows = self.run_query_1(query)

def get_studios(self):  # Fetch the price list and display it in the table
    # Clear the table
    records = self.tree.get_children()
    for item in records:
        self.tree.delete(item)
    # Fetch the data
    query = 'SELECT * FROM price_list ORDER BY id_folio DESC'
    db_rows = self.run_query_1(query).fetchall()
    # Fill in the data
    for row in db_rows:
        self.tree.insert('', 0, text=row[1], values=(row[1], row[2], row[3]))

def delete_studios(self):
    try:
        self.tree.item(self.tree.selection())['text'][0]
    except IndexError:
        messagebox.showinfo('Delete study', 'Please select a study')
        return

    estudios_clinicos = self.tree.item(self.tree.selection())['text']
    query = 'DELETE FROM price_list WHERE estudios_clinicos = ?'
    self.run_query_1(query, parameters=(estudios_clinicos,))
    messagebox.showinfo('Delete study', 'Record deleted')
    self.get_studios()

def edit_estudio(self):
    try:
        self.tree.item(self.tree.selection())['values'][0]
    except IndexError:
        messagebox.showinfo('Edit study', 'Please select a study')
        return
    estudios_clinicos = self.tree.item(self.tree.selection())['text']
    price = self.tree.item(self.tree.selection())['values'][1]
    maquila = self.tree.item(self.tree.selection())['values'][2]

    # Secondary window
    self.edit_wind = Toplevel()
    self.edit_wind.title('Edit study')

    # Previous study
    Label(self.edit_wind, text='Previous study:').grid(row=0, column=1)
    Entry(self.edit_wind, textvariable=StringVar(self.edit_wind, value=estudios_clinicos),
          state='readonly').grid(row=0, column=2)

    # New study
    Label(self.edit_wind, text='New study:').grid(row=1, column=1)
    self.new_name = Entry(self.edit_wind)
    self.new_name.focus()
    self.new_name.grid(row=1, column=2)

    # Previous price
    Label(self.edit_wind, text='Previous price:').grid(row=2, column=1)
    Entry(self.edit_wind, textvariable=StringVar(self.edit_wind, value=price),
          state='readonly').grid(row=2, column=2)
    # New price
    Label(self.edit_wind, text='New price:').grid(row=3, column=1)
    self.new_price = Entry(self.edit_wind)
    self.new_price.grid(row=3, column=2)

    # Previous maquila price
    Label(self.edit_wind, text='Previous maquila price:').grid(row=4, column=1)
    Entry(self.edit_wind, textvariable=StringVar(self.edit_wind, value=maquila),
          state='readonly').grid(row=4, column=2)
    # New maquila price
    Label(self.edit_wind, text='New maquila price:').grid(row=5, column=1)
    self.new_price_maq = Entry(self.edit_wind)
    self.new_price_maq.grid(row=5, column=2)

    Button(self.edit_wind, text='Update',
           command=lambda: self.edit_records(self.new_name.get(), estudios_clinicos,
                                             self.new_price.get(), price,
                                             self.new_price_maq.get(), maquila)
           ).grid(row=7, column=2, sticky=W)
    print(self.new_name.get())       # here the values entered by the user come out empty
    print(self.new_price.get())      # same here
    print(self.new_price_maq.get())  # here too
    print(estudios_clinicos)         # these do print
    print(price)
    print(maquila)

    self.edit_wind.mainloop()

def edit_records(self, new_name, estudios_clinicos, new_price, price, new_price_maq, maquila):
    query = ('UPDATE price_list SET estudios_clinicos = ?, price = ?, maquila = ? '
             'WHERE estudios_clinicos = ? AND price = ? AND maquila = ?')
    parameters = (new_name, estudios_clinicos, new_price, price, new_price_maq, maquila)
    self.run_query_1(query, parameters)
    self.edit_wind.destroy()
    messagebox.showinfo('Update study', 'The study has been updated')  # here it reports a successful edit

    self.get_studios()

I hope you can help me find the problem!
Thank you very much.
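For reference, sqlite3 binds ? placeholders strictly by position: the parameters tuple must list the SET values first and then the WHERE values, in exactly the order they appear in the statement. A minimal standalone sketch with a hypothetical table (names assumed, not taken from the question):

```python
import sqlite3

# In-memory database with a hypothetical table mirroring the question's shape.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE price_list (estudios_clinicos TEXT, price REAL, maquila REAL)")
conn.execute("INSERT INTO price_list VALUES ('Glucose', 100.0, 40.0)")

def update_study(conn, new_name, new_price, new_maquila, old_name, old_price, old_maquila):
    """Placeholders bind in order: first the SET values, then the WHERE values."""
    query = ("UPDATE price_list SET estudios_clinicos = ?, price = ?, maquila = ? "
             "WHERE estudios_clinicos = ? AND price = ? AND maquila = ?")
    cur = conn.execute(query, (new_name, new_price, new_maquila,
                               old_name, old_price, old_maquila))
    conn.commit()
    return cur.rowcount  # 0 means the WHERE clause matched no row

changed = update_study(conn, "Glucose (fasting)", 120.0, 45.0, "Glucose", 100.0, 40.0)
```

If old and new values are interleaved in the wrong order, the WHERE clause simply matches no rows: the UPDATE succeeds without error but changes nothing, which is consistent with the symptom described above; checking rowcount after the execute makes that case visible.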

Log files – AWS S3 bucket – logging access to an open bucket from a browser or a scanning program

If an S3 bucket is open and its directory listing can be displayed in a browser or by a Python-based scanning program, where should I check the logs for such a bucket?

And if an S3 bucket is not public, but its objects are? That means the bucket's directory listing is disabled, but if an object's URL is known, the object can still be downloaded. Where can one see the access log of such a bucket?

With Apache, we can see every request for a URL, whether it came from a browser or a program, in the access.log file. I am looking for similar files for an S3 bucket. I want to know whether an IP has accessed the bucket or an object URL.
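Unlike Apache, S3 does not write such a log by default: you have to enable server access logging on the bucket (records are delivered, with some delay, to a second target bucket), or use CloudTrail data events. Once access log files are delivered, each record carries the remote IP, the operation, and the object key. A sketch of pulling those fields out of one line; the field layout follows the documented space-delimited format with a bracketed timestamp and a quoted request, but treat the regex as an approximation:

```python
import re

# S3 server access log: space-delimited fields, bracketed time, quoted request.
S3_LOG_RE = re.compile(
    r'(?P<owner>\S+) (?P<bucket>\S+) \[(?P<time>[^\]]+)\] (?P<ip>\S+) '
    r'(?P<requester>\S+) (?P<request_id>\S+) (?P<operation>\S+) (?P<key>\S+) '
    r'"(?P<request_uri>[^"]*)" (?P<status>\S+)'
)

def parse_s3_access_line(line):
    """Extract the remote IP, operation, and object key from one log line."""
    m = S3_LOG_RE.match(line)
    if not m:
        return None
    return m.group("ip"), m.group("operation"), m.group("key")
```

Filtering these tuples then answers "which IP touched which object URL", including anonymous downloads of public objects in a non-public bucket.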