azure – Generic Service role on a Windows Failover Cluster – Destination Host Unreachable and Request Timed Out

I'm trying to set up a Generic Service role on a Windows Failover Cluster in Azure. The setup is as follows:

Cluster: 2 Nodes, 1 Cloud Witness

  • VNET: 10.0.0.0/24, dcsubnet: 10.0.0.0/26, nodesubnet: 10.0.0.64/26, clientsubnet: 10.0.0.128/64
  • Domain Controller: Windows Server 2012, Nodes: Windows Server 2016 Datacenter
  • Node 1 IP (static): 10.0.0.68, Node 2 IP (Static): 10.0.0.73, Cluster
    IP (Static): 10.0.0.70, Cluster role (Generic Service) (Static):
    10.0.0.71

The issue is that when I ping both the Cluster and the Role from the node that is not the Owner Node, or from the VMs in the other subnets, I always get "Destination Host Unreachable" (from VMs in the same subnet) or "Request Timed Out" (from VMs in other subnets). The CNO and VCO are created successfully in AD, and the A records are created correctly in DNS (I deleted and re-created them to check).

Nslookup of both the cluster and the role returns the correct results from all machines.

In Failover Cluster Manager, both the Cluster and the Role have an "Online" status. What could be the reason the ping is not working?
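One way to narrow this down might be a plain TCP connect test from the non-owner node (and from a VM in another subnet) against the cluster and role addresses, to see whether only ICMP is failing or the addresses are not reachable at all. A minimal Python sketch, assuming the addresses above, with 3389 used only as a well-known listener and 8080 as a placeholder for whatever port the generic service actually uses:

    import socket

    # Cluster/role IPs from the setup above; the ports are assumptions:
    # 3389 (RDP) should answer on whichever node currently owns the address,
    # 8080 is only a placeholder for the generic service's real port.
    targets = [("10.0.0.70", 3389), ("10.0.0.71", 8080)]

    for host, port in targets:
        try:
            with socket.create_connection((host, port), timeout=5):
                print(host, port, "TCP connect OK")
        except OSError as err:
            print(host, port, "failed:", err)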

ssh – (error code 28) Resolving timed out after 5000 milliseconds Ubuntu server – DNS?

Intent:

I am trying to load images programmatically into Prestashop, using cURL, on a XAMPP server that I access via SSH.

What I tried:

I have looked around, and the issue may be in the server's setup.

I tried changing the values of max_execution_time, memory_limit, max_input_vars and max_input_time in the php.ini file, but that did not work.

I tried to ping the website I am collecting the images from:

ping brandsdistribution.com
PING brandsdistribution.com (109.233.123.248) 56(84) bytes of data.

and it keeps running until I interrupt it, at which point it reports:

--- brandsdistribution.com ping statistics ---
135 packets transmitted, 0 received, 100% packet loss, time 137194ms

whereas if I ping google.com:

ping google.com
PING google.com (172.217.168.206) 56(84) bytes of data.
64 bytes from ams16s32-in-f14.1e100.net (172.217.168.206): icmp_seq=1 ttl=53 time=56.4 ms
64 bytes from ams16s32-in-f14.1e100.net (172.217.168.206): icmp_seq=2 ttl=53 time=47.6 ms
64 bytes from ams16s32-in-f14.1e100.net (172.217.168.206): icmp_seq=3 ttl=53 time=43.7 ms
64 bytes from ams16s32-in-f14.1e100.net (172.217.168.206): icmp_seq=4 ttl=53 time=80.0 ms
--- google.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3005ms
rtt min/avg/max/mdev = 43.747/56.940/80.002/14.079 ms

Question

Is this a DNS issue? How can I fix it?
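In case it helps to narrow this down, a quick way to separate name resolution from plain connectivity, run directly on the server, could be a sketch like the one below (Python standard library only; the hostname is taken from the failing URL and port 443 is assumed because the image URLs are HTTPS):

    import socket, ssl

    host = "www.brandsdistribution.com"

    # 1) Does the name resolve at all from this machine?
    try:
        addrs = sorted({ai[4][0] for ai in socket.getaddrinfo(host, 443)})
        print("resolved to:", addrs)
    except socket.gaierror as err:
        raise SystemExit("DNS resolution failed: %s" % err)

    # 2) Can an HTTPS connection actually be opened (TCP + TLS handshake)?
    try:
        with socket.create_connection((host, 443), timeout=5) as sock:
            with ssl.create_default_context().wrap_socket(sock, server_hostname=host):
                print("TCP and TLS handshake OK")
    except OSError as err:
        print("connection failed:", err)

If step 1 is slow or fails even though the ping command above resolved the name instantly, the problem is more likely in the resolver configuration used by PHP/cURL than in general network access.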

Error message

(1/1) Exception
file_get_contents_curl failed to download https://www.brandsdistribution.com/prod/stock_product_image_107510_1178872247.jpg : (error code 28) Resolving timed out after 5000 milliseconds

in Tools.php line 2162
at ToolsCore::file_get_contents_curl('https://www.brandsdistribution.com/prod/stock_product_image_107510_1178872247.jpg', 5, null)

in Tools.php line 2235
at ToolsCore::file_get_contents('https://www.brandsdistribution.com/prod/stock_product_image_107510_1178872247.jpg', false, resource)

in Tools.php line 2294
at ToolsCore::copy('https://www.brandsdistribution.com/prod/stock_product_image_107510_1178872247.jpg', '/var/www/html/img/tmp/ps_importTmA1vB')

in productCreate.php line 107
at copyImg('68', '157', 'https://www.brandsdistribution.com/prod/stock_product_image_107510_1178872247.jpg', 'products', false)

in productCreate.php line 66
at addImage(object(Product), 'https://www.brandsdistribution.com/prod/stock_product_image_107510_1178872247.jpg', true)

in productCreate.php line 44
at createProduct(array('PRODUCT', '107510', 'Nike', 'W-ZoomGravity', 'BQ3203-006_W-ZoomGravity', '18', '101.00', '81.00', '57.00', 'Genere:Donna - Tipologia:Sneakers - Tomaia:materiale sintetico, materiale tessile - Interno:materiale sintetico, materiale tessile - Suola:gomma', '2.00', 'https://www.brandsdistribution.com/prod/stock_product_image_107510_1178872247.jpg', 'https://www.brandsdistribution.com/prod/stock_product_image_107510_2136019726.jpg', 'https://www.brandsdistribution.com/prod/stock_product_image_107510_1040197763.jpg', 'Vietnam', 'Nike', '', '', '', 'Scarpe', '', 'Sneakers', '', '', 'Continuativi', 'Rosa', '', 'pink,dimgray', 'Donna', '', '', '', '', ''))
in productCreate.php line 22

Gmail – Custom Google Scripts timed out

This script downloads PDF attachments from my tagged emails in Gmail, names them, and sorts them into folders in my Google Drive. It always times out for some reason. Any ideas on how to customize it to make it run faster?

var GMAIL_LABEL = 'Sales';
var GDRIVE_FILE = 'sales/$y/$sublabel/$sublabel_$m-$d-$y_$mc_$ac.$ext';

/* ------------- no changes required ------------------------- */

/**
 * Get all the starred threads in our label and process their attachments
 */
function main() {
  var labels = getSubLabels(GMAIL_LABEL);
  for (var i = 0; i < labels.length; i++) {
    var threads = getUnprocessedThreads(labels[i]);
    for (var j = 0; j < threads.length; j++) {
      processThread(threads[j], labels[i]);
    }
  }
}

/**
 * Returns the Google Drive folder object that corresponds to the specified path.
 * Creates the path if it doesn't already exist.
 *
 * @param {string} path
 * @return {Folder}
 */
function getOrMakeFolder(path) {
  var folder = DriveApp.getRootFolder();
  var names = path.split('/');
  while (names.length) {
    var name = names.shift();
    if (name === '') continue;

    var folders = folder.getFoldersByName(name);
    if (folders.hasNext()) {
      folder = folders.next();
    } else {
      folder = folder.createFolder(name);
    }
  }

  return folder;
}

/**
 * Get the specified label and all of its sub-labels
 *
 * @param {string} name
 * @return {GmailLabel[]}
 */
function getSubLabels(name) {
  var labels = GmailApp.getUserLabels();
  var matches = [];
  for (var i = 0; i < labels.length; i++) {
    var labelName = labels[i].getName();
    // keep the label itself plus anything nested under "name/"
    if (labelName === name || labelName.indexOf(name + '/') === 0) {
      matches.push(labels[i]);
    }
  }

  return matches;
}

/**
 * Get all the starred (unprocessed) threads in the specified label
 *
 * @param {GmailLabel} label
 * @return {GmailThread[]}
 */
function getUnprocessedThreads(label) {
  var from = 0;
  var perrun = 500; // maximum is 500
  var threads;
  var result = [];

  do {
    threads = label.getThreads(from, perrun);
    from += perrun;

    for (var i = 0; i < threads.length; i++) {
      if (threads[i].hasStarredMessages()) {
        result.push(threads[i]);
      }
    }

  } while (threads.length === perrun);

  Logger.log(result.length + ' threads to be processed in ' + label.getName());
  return result;
}

/**
 * Get the extension of a file
 *
 * @param {string} name
 * @return {string}
 */
function getExtension(name) {
  var re = /(?:\.([^.]+))?$/;
  var result = re.exec(name);
  if (result && result[1]) {
    return result[1].toLowerCase();
  } else {
    return 'unknown';
  }
}

/**
 * Apply template variables
 *
 * @param {string} filename Filename with placeholders
 * @param {Object} info Values to fill in
 * @return {string}
 */
function createFilename(filename, info) {
  var keys = Object.keys(info);
  keys.sort(function(a, b) {
    return b.length - a.length; // replace the longest placeholders first
  });

  for (var i = 0; i < keys.length; i++) {
    // split/join replaces every occurrence of the placeholder
    filename = filename.split('$' + keys[i]).join(info[keys[i]]);
  }

  return filename;
}

/**
 * Save an attachment under the given Drive path, skipping files that already exist
 *
 * @param {GmailAttachment} attachment
 * @param {string} path
 */
function saveAttachment(attachment, path) {
  var parts = path.split('/');
  var file = parts.pop();
  path = parts.join('/');

  var folder = getOrMakeFolder(path);
  var check = folder.getFilesByName(file);
  if (check.hasNext()) {
    Logger.log(path + '/' + file + ' already exists. File not overwritten.');
    return;
  }
  folder.createFile(attachment).setName(file);
  Logger.log(path + '/' + file + ' saved.');
}

/**
 * @param {GmailThread} thread
 * @param {GmailLabel} label Label in which this thread was found
 */
function processThread(thread, label) {
  var messages = thread.getMessages();
  for (var j = 0; j < messages.length; j++) {
    var message = messages[j];
    if (!message.isStarred()) continue; // only process starred (marked) messages

    var attachments = message.getAttachments();
    for (var i = 0; i < attachments.length; i++) {
      var attachment = attachments[i];

      var info = {
        'name': attachment.getName(),
        'ext': getExtension(attachment.getName()),
        'domain': message.getFrom().split('@')[1].replace(/[^a-zA-Z]+$/, ''), // domain part of email
        'sublabel': label.getName().substr(GMAIL_LABEL.length + 1),
        'y': ('0000' + (message.getDate().getFullYear())).slice(-4),
        'm': ('00' + (message.getDate().getMonth() + 1)).slice(-2),
        'd': ('00' + (message.getDate().getDate())).slice(-2),
        'h': ('00' + (message.getDate().getHours())).slice(-2),
        'i': ('00' + (message.getDate().getMinutes())).slice(-2),
        's': ('00' + (message.getDate().getSeconds())).slice(-2),
        'mc': j,
        'ac': i,
      };
      var file = createFilename(GDRIVE_FILE, info);
      saveAttachment(attachment, file);
    }

    message.unstar();
  }
}

Troubleshooting 500 Internal server error: connection timed out

When I load www.example.com/wp-admin I get a browser error:

This website cannot be reached. www.example.com took too long to respond.

  • There was no error_log. I switched on debug.log and got some PHP Deprecated and PHP Notice entries for plugins.
  • I renamed the plugins directory to plugins.temp.
  • I increased the PHP memory limit from 128M to 2G.
  • The current PHP time limit is 300 seconds. The site takes approximately 20 seconds to return the error.

I reloaded /wp-admin and got the same error. debug.log no longer shows any additional debug information.

I'd like to get some tips on diagnosing and fixing this problem.

Help appreciated.
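If it helps the diagnosis, a simple timing probe such as the sketch below (Python standard library only; the URL is the same placeholder used above) could confirm whether the request always dies at roughly the same point, for example around the 20-second mark or at a web-server/PHP timeout:

    import time, urllib.error, urllib.request

    url = "https://www.example.com/wp-admin/"  # placeholder URL from the post
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=600) as resp:
            print("HTTP", resp.status, "after", round(time.monotonic() - start, 1), "s")
    except urllib.error.HTTPError as err:
        print("HTTP error", err.code, "after", round(time.monotonic() - start, 1), "s")
    except (urllib.error.URLError, TimeoutError) as err:
        print("failed after", round(time.monotonic() - start, 1), "s:", err)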

www.dreamteammoney.com | 522: Connection timed out

What happened?

The initial connection between the Cloudflare network and the origin web server timed out. As a result, the web page cannot be displayed.

What can I do?

If you are a visitor to this website:

Please try again in a few minutes.

If you are the owner of this website:

Contact your hosting provider to let them know that your web server is not completing requests. Error 522 means that the request was able to connect to your web server, but the request was not completed. The most likely cause is that something on your server is consuming resources. Additional troubleshooting information can be found here.

SSH connection timed out on Azure VM

I'm having some trouble debugging a "connection timed out" error when trying to access an Azure VM. A user who previously had SSH access to the VM is now getting the error; all other users can still connect. I can also connect to that user's account when I log in from my own computer.

No firewall is running on the VM itself; access is instead controlled by the Network Security Group (NSG) the VM sits in. The SSH port has a security rule that only allows whitelisted IP addresses. This user is whitelisted, and I have confirmed that their IP address has not changed.

I also checked the NSG logs through Network Watcher and found that SSH traffic from the user's IP address is accepted. However, I see no trace of the user's connection attempts in the VM's authentication log. I also checked fail2ban and can see that the user's IP is not in any jail.

Does anyone have any suggestions on what to check next? I suspected a problem on the user's side, but if I can see their traffic in the NSG logs, is it safe to assume that their connection is OK? What other situations can lead to a "Connection timed out" error?
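One thing the affected user could run from their side is a plain TCP check against port 22, to confirm whether anything answers at all before SSH authentication even starts. A minimal Python sketch (vm.example.com is a placeholder for the VM's public IP or DNS name):

    import socket

    host, port = "vm.example.com", 22   # placeholder address for the Azure VM
    try:
        with socket.create_connection((host, port), timeout=10) as sock:
            banner = sock.recv(64)      # sshd normally sends an "SSH-2.0-..." banner first
            print("connected, banner:", banner.decode(errors="replace").strip())
    except OSError as err:
        print("failed:", err)

If this times out from the user's network but succeeds from elsewhere, the problem is somewhere on the path before the VM, which would also be consistent with nothing appearing in the authentication log.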

development – Provider Hosted App timed out after x minutes

I have a provider-hosted app that I use to enter offer data into SharePoint Office 365.
The host is an MVC C# website.

The actions in the controllers all have a format similar to the following:

    [SharePointContextFilter]
    [HttpPost]
    public ActionResult SaveSOW(int id,
        string SPHostUrl,
        bool withCostings = false)
    {
        try
        {
            var spContext = SharePointContextProvider.Current.GetSharePointContext(HttpContext);

            using (var clientContext = spContext.CreateUserClientContextForSPHost())
            {
                var spUser = clientContext.Web.CurrentUser;
                clientContext.Load(spUser, user => user.Title);
                clientContext.ExecuteQuery();
                var sowRepo = new SOWRepository(clientContext);
                sowRepo.SaveSow(data); // pseudo code
            }
        }
        catch
        {
            // error handling omitted in this snippet
        }
    }

All actions (except the entry point) are called via jQuery AJAX calls.

The problem is that after X minutes, when AJAX invokes an action decorated with [SharePointContextFilter], an HTTP 302 (redirect to login) is returned.
I cannot ask the user to log in again in the middle of entering an offer (which may take an hour or more).
I tried setting this on the clientContext:

        clientContext.RequestTimeout = Timeout.Infinite;

without success.

I also tried adding a JavaScript loop that keeps calling an action in the controller decorated with [SharePointContextFilter]:

    GetRefreshToken: function () {
        $.ajax({
            url: "/Refresh/RefreshToken?" + window.location.href.slice(window.location.href.indexOf('?') + 1),
            type: "GET",
            success: function (data) {
                console.log(data)
            },
            error: function (data) {
                console.log("Error refreshing token", data);
            }
        })
    }

After a while, the AJAX refresh request still gets redirected to appredirect.aspx.

My refresh action is just:

[SharePointContextFilter]
[HttpGet]
public ActionResult RefreshToken()
{
    var spContext = SharePointContextProvider.Current.GetSharePointContext(HttpContext);


    using (var clientContext = spContext.CreateUserClientContextForSPHost())
    {
        if (clientContext != null)
        {

            return Json("Token Refreshed", JsonRequestBehavior.AllowGet);
        }
    }

    return Json("Token NOT Refreshed", JsonRequestBehavior.AllowGet);
}

}

locktime – At what block height can transactions with timed lock be included?

Transactions with inputs whose sequence number is less than UINT_MAX are interpreted as locked until the timestamp or block height specified in nLockTime is reached.

Specifically with regard to the block height, I have read inaccurate or inconsistent information about whether such a transaction can be included in a block of height > nLockTime or height ≥ nLockTime. Which of the two is the case?
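To make the two readings concrete, here they are spelled out side by side (the nLockTime value is a made-up example, and this only restates the question rather than asserting which rule consensus actually uses):

    n_lock_time = 500_000   # made-up example value

    def includable_reading_1(block_height):   # reading 1: height >  nLockTime
        return block_height > n_lock_time     # earliest possible block: 500001

    def includable_reading_2(block_height):   # reading 2: height >= nLockTime
        return block_height >= n_lock_time    # earliest possible block: 500000

    print(includable_reading_1(500_000), includable_reading_2(500_000))  # False True
    print(includable_reading_1(500_001), includable_reading_2(500_001))  # True  True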

ubuntu – Thunderbird connection to the server timed out

My mailbox is 13 GB full. There are a lot of photos attached to the emails in my inbox, and I suspect they take up most of the space.

I configured Thunderbird on my Ubuntu virtual machine and tried to download all the emails, but since 13 GB is too much to download, it obviously caused problems. Someone then suggested I use Google Takeout and download the data in Mbox format.
I tried that, but I have a slow internet connection that drops from time to time, so the archive could never be downloaded completely. That led to more problems, because Google only allows the archive it creates to be downloaded 2-3 times, and with my slow connection I have exhausted that limit for downloading my email data.
I now get a message that I have exceeded the download limit.

Now I have switched to another method: I installed Thunderbird in Ubuntu 19.10 on a VMware Workstation VM.
I tried to configure it for use with Gmail.
But I'm getting a timeout problem:

Connection to imap.google.com timed out

How can I fix this problem and download all 13 GB of email with all attachments under these conditions?
Or is there a better solution to this problem that I am not aware of?

I want to make a backup copy of the emails first and then go through them all and delete them one by one.
I realise that I deleted about 675,555 emails 3-4 days ago, from the Thunderbird user interface. I don't know whether they were actually deleted from Gmail or not, and there are thousands more I still have to check.
I have probably deleted more than I needed to, and now I find it difficult to sort and search through the rest.
I read a question here.
In the last reply to the question linked above, the original poster mentions that photos from their smartphone had been synced and that the quota shown in Gmail had been exceeded. As far as I understand, that is not my case, since I am not using this account to back up photos.
It also shows me the following for my mailbox at this link:

13 GB full

I think this is just a problem with my Gmail inbox.
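For reference, a quick connectivity check that could be run from the Ubuntu VM is something like the sketch below (Python standard library). It points at imap.gmail.com on port 993, which is the IMAP endpoint Google documents for Gmail, whereas the error above mentions imap.google.com:

    import imaplib

    # Only tests the TCP/TLS connection and the server greeting; no login is attempted.
    try:
        conn = imaplib.IMAP4_SSL("imap.gmail.com", 993)
        print("connected, server greeting:", conn.welcome)
        conn.logout()
    except (imaplib.IMAP4.error, OSError) as err:
        print("failed:", err)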