concurrency – Why is my Python thread pool faster than my Go worker pool?

I have recently been digging into Go concurrency, in particular the use of channels and worker pools. I wanted to compare performance between Go and Python (as many have done), because most of what I have read says Go outperforms Python at concurrency. So I wrote two programs that scan an AWS account's S3 buckets and report back the total size. I ran this against an account with more than 75 buckets totaling several TB of data.

I was surprised to find that my Python implementation was nearly 2x faster than my Go implementation. This confuses me given all the benchmarks and literature I have read, and leads me to believe I did not implement my Go code correctly. While watching both programs run, I noticed that the Go implementation used at most 15% of my CPU while Python used more than 85%. Am I missing an important step with Go, or is there a mistake in my implementation? Thanks in advance!

Python Code:

# Get the size of all objects in all buckets in S3
import os
import sys
import boto3

import concurrent.futures

def get_s3_bucket_sizes(aws_access_key_id, aws_secret_access_key, aws_session_token=None):

    s3client = boto3.client('s3')

    # Create the dictionary which will be indexed by the bucket's
    # name and has an S3Bucket object as its contents
    buckets = {}

    total_size = 0.0

    # Start gathering data...

    # Get all of the buckets in the account
    _buckets = s3client.list_buckets()

    with concurrent.futures.ThreadPoolExecutor(max_workers=50) as executor:
        future_bucket_to_scan = {executor.submit(get_bucket_objects, s3client, bucket): bucket for bucket in _buckets["Buckets"]}

        for future in concurrent.futures.as_completed(future_bucket_to_scan):
            bucket_object = future_bucket_to_scan[future]

            try:
                ret = future.result()
            except Exception as exc:
                print('ERROR: %s' % (str(exc)))
            else:
                total_size += ret

    return total_size


def get_bucket_objects(s3client, bucket):

    name = bucket["Name"]

    # Get all of the objects in the bucket
    lsbuckets = s3client.list_objects(Bucket=name)

    size = 0
    while True:
        if "Contents" not in lsbuckets.keys():
            break

        for content in lsbuckets["Contents"]:
            size += content["Size"]

        # list_objects returns at most 1000 keys per call, so page
        # through the remainder using the last key as the marker
        if not lsbuckets.get("IsTruncated"):
            break

        lsbuckets = s3client.list_objects(Bucket=name, Marker=lsbuckets["Contents"][-1]["Key"])

    return size

# Main

if __name__=='__main__':
    get_s3_bucket_sizes(os.environ.get("AWS_ACCESS_KEY_ID"), os.environ.get("AWS_SECRET_ACCESS_KEY"))

Go Code:

package main

import (
    "fmt"
    "sync"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/awserr"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
)

type S3_Bucket_Response struct {
    bucket string
    count  int64
    size   int64
    err    error
}

type S3_Bucket_Request struct {
    bucket string
    region string
}

func get_bucket_objects_async(wg *sync.WaitGroup, requests chan S3_Bucket_Request, responses chan S3_Bucket_Response) {
    defer wg.Done()

    var size  int64
    var count int64

    for request := range requests {
        bucket := request.bucket
        region := request.region

        // Create a new response
        response := new(S3_Bucket_Response)
        response.bucket = bucket

        sess, err := session.NewSession(&aws.Config{
            Region: aws.String(region),
        })

        s3conn := s3.New(sess)

        resp, err := s3conn.ListObjectsV2(&s3.ListObjectsV2Input{
            Bucket: aws.String(bucket),
        })

        if err != nil {
            if awsErr, ok := err.(awserr.Error); ok {
                switch awsErr.Code() {
                case "NoSuchBucket":
                    response.err = fmt.Errorf("Bucket: (%s) is NoSuchBucket.  Must be in process of deleting.", bucket)
                case "AccessDenied":
                    response.err = fmt.Errorf("Bucket: (%s) is AccessDenied.  You should really be running this with full Admin Privileges", bucket)
                }
            } else {
                response.err = fmt.Errorf("Listing Objects Unhandled Error: %s ", err)
            }

            responses <- *response
            continue
        }

        contents := resp.Contents
        size  = 0
        count = 0

        for i := 0; i < len(contents); i++ {
            size  += *contents[i].Size
            count += 1
        }

        response.size  = size
        response.count = count

        responses <- *response
    }
}

func main() {

    var err  error
    var size int64
    var resp *s3.ListBucketsOutput
    var wg sync.WaitGroup

    sess, _ := session.NewSession()
    s3conn  := s3.New(sess)

    // Get account bucket listing
    if resp, err = s3conn.ListBuckets(&s3.ListBucketsInput{}); err != nil {
        fmt.Printf("Error listing buckets: %s\n", err)
        return
    }

    buckets := resp.Buckets
    size = 0

    // Create the buffered channels
    requests  := make(chan S3_Bucket_Request , len(buckets))
    responses := make(chan S3_Bucket_Response, len(buckets))

    submitted := 0
    for i := range buckets {

        bucket := *buckets[i].Name

        resp2, err := s3conn.GetBucketLocation(&s3.GetBucketLocationInput{
            Bucket: aws.String(bucket),
        })

        if err != nil {
            fmt.Printf("Could not get bucket location for bucket (%s): %s\n", bucket, err)
            continue
        }

        wg.Add(1)
        go get_bucket_objects_async(&wg, requests, responses)

        region := "us-east-1"
        if resp2.LocationConstraint != nil {
            region = *resp2.LocationConstraint
        }

        request := new(S3_Bucket_Request)
        request.bucket = bucket
        request.region = region

        requests <- *request
        submitted += 1
    }

    // Close requests channel and wait for responses
    close(requests)

    // Process the results as they come in
    for cnt := 1; cnt <= submitted; cnt++ {
        response := <-responses

        fmt.Printf("Bucket: (%s) complete!  Buckets remaining: %d\n", response.bucket, submitted-cnt)

        // Did the bucket request have errors?
        if response.err != nil {
            fmt.Println(response.err)
        }

        size += response.size
    }

    wg.Wait()
    fmt.Printf("Total size: %d\n", size)
}

kvm virtualization – How do I remove the default storage pool from a libvirt hypervisor, so that even after libvirtd restarts there is NO storage pool?

I want to remove the default storage pool from my virt-manager AND NOT HAVE IT COME BACK BY ITSELF, EVER. I can destroy it and undefine it all I want, but when I restart libvirtd (for me that’s “sudo systemctl restart libvirtd” in an Arch Linux terminal window) and restart virt-manager, the default storage pool is back, just like Frankenstein.

I don’t want a storage pool of any kind. I simply want to move from the dual boot I have now (Arch Linux and Windows) to running the two OSes simultaneously. I intend to provision two physical disk partitions on the host as disks on the guest, and I can do this via the XML that defines the domain.

Or am I required to have a storage pool no matter what?

ubuntu – How to best handle apt upgrade and php-fpm default pool www.conf

I currently have an Ubuntu 18.04 server with php-fpm installed. In the spirit of maintainability, instead of editing /etc/php/{version}/fpm/pool.d/www.conf I copied it and renamed it {domain}.conf, for two reasons:
1) I might have more sites and thereby more pools, and
2) when I run apt upgrade I don’t want to have to merge in changes from the package.

To avoid the default www.conf being loaded when starting the php-fpm service due to collisions with the site pool, I renamed it to www.conf.dpkg so that it is not loaded. But each time I upgrade the php-fpm package, apt asks me to choose what to do:

Configuration file '/etc/php/{version}/fpm/pool.d/www.conf'
 ==> Removed (by you or by a script) since installation.
 ==> Package distributor has shipped an updated version.

How can I make sure this doesn’t happen, or at least have the changes automatically funneled into the renamed file?

Make a client request using the app pool identity (PHP runs on IIS).

I posted this on Stack Overflow when I still thought I was dealing with a software problem. However, this clearly appears to be a system problem.

The goal is to make an outbound request from a PHP 7.4 application running on IIS. I want this client request to authenticate to the remote server based on the identity under which the application pool is running.

The client uses cURL configured for NTLM. I know the software works because the client tries to authenticate against the computer identity when disabling fastcgi.impersonate. However, when I set fastcgi.impersonate to 1, the client doesn't send any credentials at all. What I want to send is the identity of the app pool, which is an AD user account.

To put it bluntly, I don't want to pretend to be that user on the remote computer – I'm just using the identity for authentication.

Is this expected behavior? Is there another Fastcgi setting that I am missing?

It may be worth noting that when I do this with the embedded PHP server on a development machine, the client sends the logged-in user's identity, which is more like what I want to see.

iis – SharePoint 2016 events 6398 and 8306, application pool repeatedly stopped

SharePoint 2016 farm with 2 nodes (front-end and application), running Windows Server 2019 Datacenter.

I have a problem with my SharePoint services. I get a lot of events (error 8306), and no one can log in to the websites.

I checked the .NET and it's set to FULL according to this article I found.

The SecurityTokenServiceApplicationPool continues to stop. I restart it and it stops immediately. Here are the events

Protocol name: application
Source: Microsoft SharePoint Products SharePoint Foundation
Date: 05.05.2020 11:20:48 a.m.
Event ID: 6398
Task category: timer
Level: critical
User: network\spfarm
The Execute method of the Microsoft.SharePoint.Administration.SPUsageImportJobDefinition job definition (ID 5855313d-025b-4407-96e2-7e9b2edba8d2) raised an exception. See below for more information.

Access to the path "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\16\LOGS" is denied. (Correlation = 9b794f9f-8839-e0f1-3ea7-bb55bf6ffaf9)

Protocol name: application
Source: Microsoft SharePoint Products SharePoint Foundation
Date: 05.05.2020 11:20:13 a.m.
Event ID: 8306
Task category: claims authentication
Level: error
User: network\spfarm
An exception occurred when trying to issue a security token: The HTTP service at http://localhost:32843/SecurityTokenServiceApplication/securitytoken.svc/actas is not available. This may be because the service is too busy or because no endpoint was found listening at the specified address. Please make sure the address is correct and try accessing the service again later.

Any ideas or suggestions?

Is there a formula for the size of the connection pool?

I develop web services with an event-driven architecture and wondered whether there is a formula for calculating the optimal size of a connection pool. Even if there isn't, knowing what it depends on would be a big help, as would real-life measurements that at least show the order of magnitude. I tried unsuccessfully to find scientific articles on the subject, but maybe I used the wrong keywords.
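There is no single agreed formula, but one widely cited rule of thumb comes from the HikariCP "About Pool Sizing" wiki (originally derived from a PostgreSQL benchmark): connections = (core_count * 2) + effective_spindle_count. A minimal sketch of that estimate in Python follows; the function name and the spindle-count choice are illustrative, not part of any standard:

```python
import os

def estimated_pool_size(core_count: int, effective_spindle_count: int) -> int:
    """Rule of thumb from the HikariCP "About Pool Sizing" wiki:
    connections = (core_count * 2) + effective_spindle_count.
    Treat the result as a starting point for load testing, not a law."""
    return core_count * 2 + effective_spindle_count

# Example: an 8-core box with one SSD (an SSD behaves roughly like a
# single spindle for this estimate) suggests a pool of 17 connections.
print(estimated_pool_size(8, 1))  # -> 17

# Using the current machine's core count:
print(estimated_pool_size(os.cpu_count() or 1, 1))
```

The main inputs it depends on are CPU cores, disk characteristics, and how much of each request is spent blocked on I/O; beyond that, measuring under realistic load is the only reliable guide.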

AnyDice: Sum of dice pool + highest value from the same roll

Generally, if you want to do things with a dice pool, you should use a function that takes a sequence. The function is evaluated for every possible roll, so inside the function your sequence can be treated as one possible roll of the pool.

We then only need to take the highest die (which will be the first element of a generated sequence) and add it to the sum. Here a sequence (thanks to Carcer for the reminder) is converted to its sum when used as a number, so we can just add the two together.

function: doublehighest POOL:s {
  result: 1@POOL + POOL
}

output [doublehighest 2d6]
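As a sanity check outside AnyDice, the same quantity (sum of the pool plus its highest die) can be approximated with a quick Monte Carlo simulation. This Python sketch mirrors the 2d6 example above; the function name is my own:

```python
import random

def double_highest(num_dice: int = 2, sides: int = 6) -> int:
    # Roll the pool, then add the highest die to the sum of all dice.
    roll = [random.randint(1, sides) for _ in range(num_dice)]
    return max(roll) + sum(roll)

random.seed(0)  # fixed seed so the run is repeatable
samples = [double_highest() for _ in range(100_000)]

# For 2d6 the result ranges from 1+1+1 = 3 to 6+6+6 = 18, with an exact
# mean of 7 (expected sum of 2d6) + 161/36 (expected highest die) ~= 11.47.
print(min(samples), max(samples), sum(samples) / len(samples))
```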