computer networks – Can you explain this token bucket example?

Suppose the token-bucket specification is TB(1/3 packet/ms, 4 packets), and packets arrive at the following times, with the bucket initially full:

0, 0, 0, 2, 3, 6, 9, 12
After all the T=0 packets are processed, the bucket holds 1 token. By the time the fourth packet arrives at T=2, the bucket volume has risen to 1 2/3; it immediately drops to 2/3 when packet 4 is sent. By T=3, the bucket volume has reached 1 and the fifth packet can be sent. The bucket is now empty, but fortunately the remaining packets arrive at 3-ms intervals and can all be sent.

Source:
http://intronetworks.cs.luc.edu/current/html/tokenbucket.html

What I know:

Token bucket algorithm:
One token is added to the bucket every $\Delta t$ units of time.

For a packet to be transmitted, it must consume (destroy) one token.
That is all I know about the token bucket algorithm.

As far as the question is concerned:

TB($r$, $B_{\max}$)

Where

$r$ = token fill rate (tokens per unit time)

$B_{\max}$ = bucket capacity
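
To make the arithmetic in the quoted walk-through concrete, here is a small simulation sketch of my own (not from the linked page). It refills the bucket at $r = 1/3$ token per ms up to $B_{\max} = 4$ tokens and removes one token per transmitted packet; token counts are kept in thirds so the values stay exact:

public class TokenBucketExample {
    public static void main(String[] args) {
        int[] arrivalsMs = {0, 0, 0, 2, 3, 6, 9, 12}; // packet arrival times in ms
        final int capacity = 4 * 3;                   // B_max = 4 tokens, counted in thirds
        final int fillPerMs = 1;                      // r = 1/3 token/ms = 1 third per ms
        final int costPerPacket = 3;                  // each packet consumes one token (3 thirds)

        int bucket = capacity;                        // the bucket starts full
        int lastTime = 0;

        for (int t : arrivalsMs) {
            // refill for the elapsed time, capped at the bucket capacity
            bucket = Math.min(capacity, bucket + (t - lastTime) * fillPerMs);
            lastTime = t;

            if (bucket >= costPerPacket) {
                bucket -= costPerPacket;              // conformant: send now, consume one token
                System.out.printf("T=%2d ms: sent, bucket now %d/3 tokens%n", t, bucket);
            } else {
                System.out.printf("T=%2d ms: non-conformant, only %d/3 tokens%n", t, bucket);
            }
        }
    }
}

Running it shows the bucket at one token (3/3) after the three T=0 packets, 2/3 after the packet at T=2, and empty after the packet at T=3, matching the quoted example.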

Sheets histogram not displaying bucket values

I was given a series of values (range: 1 -> 7.989000E-11) to create a histogram. The horizontal max is set to .0000001 (with the "Allow bounds to hide data" option selected). When I try to view the ranges of the various buckets, I only see 0.000.00. The vertical axis has an option to modify the Number format, but no such option exists for the horizontal axis. Any ideas how to view the bucket ranges?

histogram with difficult to view range values

horizontal axis options, lacking ability to modify bucket range format

aws – Spring Boot and S3 Bucket

Good evening everyone, how's it going? I have the following problem: I have an application that handles all the permission control for the files I store in the bucket, but I have a question. How can I guarantee that a user will only access what they have permission for, without having to download the file from the S3 bucket and return it to them myself? To be more specific, I want the user to fetch the file directly from S3, but only the files they have permission for. I've seen something about signed URLs, but I'm not sure whether that will meet my need.
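
For reference, the signed-URL approach mentioned above usually works like this: the application checks its own permission rules and, only if the user is allowed, generates a short-lived presigned URL that lets the client download the object directly from S3, so the file never has to pass through the application server. A minimal sketch of my own using the AWS SDK for Java v1 (bucket and key names are placeholders):

import java.net.URL;
import java.util.Date;

import com.amazonaws.HttpMethod;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;

public class PresignedUrlExample {

    private final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

    // Returns a temporary download URL, valid for 10 minutes.
    // The caller is expected to have already verified that the
    // user is allowed to access this object key.
    public URL generateDownloadUrl(String bucketName, String objectKey) {
        Date expiration = new Date(System.currentTimeMillis() + 10 * 60 * 1000);

        GeneratePresignedUrlRequest request =
                new GeneratePresignedUrlRequest(bucketName, objectKey)
                        .withMethod(HttpMethod.GET)
                        .withExpiration(expiration);

        return s3.generatePresignedUrl(request);
    }
}

The URL expires after the configured time, so the client only ever gets direct S3 access to objects the application has explicitly approved.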

c# – Simple Leaky Bucket Async and Low Footprint

I’m working on a simple Leaky Bucket algorithm.
I have found a lot of samples on the internet, but something always bothers me.
Most of them use collections and DateTime values to track the current actions and calculate how long to delay.

So I tried a controversial (according to my co-workers) implementation that uses two semaphores and a separate thread to do the leaking.

That’s in production now, handling thousands of requests per second without a hitch.

I have 3 questions:

  1. Is it a problem to wait on one semaphore while holding another?
  2. Do you guys see any problem with this code or this approach?
  3. Can we optimize something to gain better performance?

SimpleLeakyBucket.cs

namespace Limiter
{
    using System;
    using System.Threading;
    using System.Threading.Tasks;

    public class SimpleLeakyBucket : IDisposable
    {
        readonly SemaphoreSlim semaphore = new SemaphoreSlim(1, 1);        // serializes callers inside Wait
        readonly SemaphoreSlim semaphoreMaxFill = new SemaphoreSlim(1, 1); // blocks callers while the bucket is full
        readonly CancellationTokenSource leakToken = new CancellationTokenSource();

        readonly Config configuration;
        readonly Task leakTask;
        int currentItems = 0; // number of items currently in the bucket

        public SimpleLeakyBucket(Config configuration)
        {
            this.configuration = configuration;
            leakTask = Task.Run(Leak, leakToken.Token); //start leak task here
        }

        public async Task Wait(CancellationToken cancellationToken)
        {
            await semaphore.WaitAsync(cancellationToken);

            try
            {
                if (currentItems >= configuration.MaxFill)
                {
                    await semaphoreMaxFill.WaitAsync(cancellationToken);
                }

                Interlocked.Increment(ref currentItems);

                return;
            }
            finally
            {
                semaphore.Release();
            }
        }


        void Leak()
        {
            //Wait for our first queue item. 
            while (currentItems == 0 && !leakToken.IsCancellationRequested)
            {
                Thread.Sleep(100);
            }

            while (!leakToken.IsCancellationRequested)
            {
                Thread.Sleep(configuration.LeakRateTimeSpan);

                if (currentItems > 0)
                {
                    var leak = Math.Min(currentItems, configuration.LeakRate);
                    Interlocked.Add(ref currentItems, -leak);

                    if (semaphoreMaxFill.CurrentCount == 0)
                    {
                        semaphoreMaxFill.Release();
                    }
                }
            }
        }


        public void Dispose()
        {
            if (!leakToken.IsCancellationRequested)
            {
                leakToken.Cancel();
                leakTask.Wait();
            }

            GC.SuppressFinalize(this);
        }


        public class Config
        {
            public int MaxFill { get; set; }
            public TimeSpan LeakRateTimeSpan { get; set; }
            public byte LeakRate { get; set; }
        }
    }
}    

Usage sample

namespace LimiterPlayground
{
    using System;
    using System.Linq;
    using System.Threading;
    using System.Threading.Tasks;

    using Limiter;

    class Program
    {
        static void Main(string[] args)
        {
            // will limit execution to 100 requests per 10 seconds
            using var leaky = new SimpleLeakyBucket(new SimpleLeakyBucket.Config
            {
                LeakRate = 100,
                LeakRateTimeSpan = TimeSpan.FromSeconds(10),
                MaxFill = 100
            });

            Task.WaitAll(Enumerable.Range(1, 100000).Select(async idx =>
            {
                await leaky.Wait(CancellationToken.None);
                Console.WriteLine($"({DateTime.Now:HH:mm:ss.fff}) - {idx.ToString().PadLeft(5, '0')}");
            }).ToArray());

        }
    }
}

amazon web services – AWS: Read object file from S3 bucket to perform health checks with canary in Synthetics

Instead of defining the health-check targets in the canary script itself, I want to read all the websites from a file located in an S3 bucket. Is there any way to do this?

Script in canary:


const aws = require('aws-sdk');
const http = require('http');

const s3 = new aws.S3({ apiVersion: '2006-03-01' });

exports.handler = async (event, context) => {

    // Get the object from the event and show its content type
    const bucket = 'bucketname';
    const key = decodeURIComponent('objectname.txt'.replace(/\+/g, ' '));

    
    const params = {
        Bucket: bucket,
        Key: key,
    }; 
    
    try {
        const file = await s3.getObject(params).promise();
        const fileList = file.Body.toString();
        const customersArr = fileList.split("\n");
        
        customersArr.forEach(function(host) {
            const options = {
                method: "GET",
                host: host,
                port: 80,
                path: "/to/healthcheck",
                timeout: 10000
            };

            var req = http.request(options, function(r) {
                console.log(host + ": " + r.statusCode);
            });
            req.on('error', function(err) {
                console.log("Host error: " + host);
                req.abort();
            });
            req.on('timeout', () => {
                console.log("Host error (code:2): " + host);
                req.abort();
            });
            req.end();

        });
        
        return 200;
      } catch (err) {
        console.log(err);
        const message = `Error getting object ${key} from bucket ${bucket}.`;
        console.log(message);
        throw new Error(message);
    }
};

Log output:

INFO: Request: https://bucketname.s3.eu-west-1.amazonaws.com/objectname.txt
INFO: Response: 200 OK Request: https://bucketname.s3.eu-west-1.amazonaws.com/objectname.txt
ERROR: Canary error: 
Error: Error getting object objectname.txt from bucket bucketname.

amazon web services – AWS S3 Policy: One non-public bucket, separate sub-folders for each user, restricted access

At the moment I'm struggling with how to create a secure policy for my Amazon S3 bucket.
My plan is to have one bucket with several sub-folders for separate (IAM) users.
Access should only be programmatic, with an access key ID and secret key, not via the console.

Conditions:

Each user should only have access to his own folder and should not see the other folders in the bucket.

Each user should only have the right to PutObject (store), GetObject (download), DeleteObject (delete) inside his folder.

Users should not be allowed to do anything else, such as creating their own buckets; the stricter, the better.

FYI:

The folders are meant for storing each user's system backups and personal data, so it's crucial that no user can see what's inside another user's folder.

  • I found the following policy in Amazon's documentation, but I'm not sure whether it is strict enough to secure and restrict access as described above.

  • And is “ListAllMyBuckets” really necessary, or does it create the possibility that every user could also see the other buckets in my account, as this example suggests?

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListAllMyBuckets",
                "s3:GetBucketLocation"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::bucket-name",
            "Condition": {
                "StringLike": {
                    "s3:prefix": [
                        "",
                        "home/",
                        "home/${aws:username}/*"
                    ]
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:GetObjectVersion",
                "s3:DeleteObject",
                "s3:DeleteObjectVersion"
            ],
            "Resource": [
                "arn:aws:s3:::bucket-name/home/${aws:username}",
                "arn:aws:s3:::bucket-name/home/${aws:username}/*"
            ]
        }
    ]
}

I’m quite new to AWS S3, so any help regarding my problem would be greatly appreciated.

Thanks!