amazon web services – Troubleshooting block device mappings with Packer to increase size of EBS volume

I’d like to understand more about what to do when you need to increase a block device’s size for a build. I have some examples that work successfully with Ubuntu 18 and CentOS 7, but it seems like each time I’m in a slightly new scenario the old techniques don’t apply, probably due to a gap in my knowledge of what is required.

Building an AMI with NICE DCV (remote display):

name: DCV-AmazonLinux2-2020-2-9662-NVIDIA-450-89-x86_64

I tried this in my Packer template:

  launch_block_device_mappings {
    device_name           = "/dev/sda1"
    volume_size           = 16
    volume_type           = "gp2"
    delete_on_termination = true
  }

But then SSH never became available. I reverted to not doing any customisation, and took a look at the mounts:

[ec2-user@ip-172-31-39-29 ~]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        482M     0  482M   0% /dev
tmpfs           492M     0  492M   0% /dev/shm
tmpfs           492M  740K  491M   1% /run
tmpfs           492M     0  492M   0% /sys/fs/cgroup
/dev/xvda1      8.0G  4.4G  3.7G  55% /
tmpfs            99M  8.0K   99M   1% /run/user/42
tmpfs            99M     0   99M   0% /run/user/1000

So I thought I should try this instead (which didn’t work):

  launch_block_device_mappings {
    device_name           = "/dev/xvda1"
    volume_size           = 20
    volume_type           = "gp2"
    delete_on_termination = true
  }

which resulted in these errors:

==> amazon-ebs.amazonlinux2-nicedcv-nvidia-ami: Authorizing access to port 22 from [0.0.0.0/0] in the temporary security groups...
==> amazon-ebs.amazonlinux2-nicedcv-nvidia-ami: Launching a source AWS instance...
==> amazon-ebs.amazonlinux2-nicedcv-nvidia-ami: Adding tags to source instance
    amazon-ebs.amazonlinux2-nicedcv-nvidia-ami: Adding tag: "Name": "Packer Builder"
==> amazon-ebs.amazonlinux2-nicedcv-nvidia-ami: Error launching source instance: InvalidBlockDeviceMapping: Invalid device name /dev/xvda1
==> amazon-ebs.amazonlinux2-nicedcv-nvidia-ami:         status code: 400, request id: 2c419989-dd53-48ef-bcb6-6e54e892d152
==> amazon-ebs.amazonlinux2-nicedcv-nvidia-ami: No volumes to clean up, skipping
==> amazon-ebs.amazonlinux2-nicedcv-nvidia-ami: Deleting temporary security group...
==> amazon-ebs.amazonlinux2-nicedcv-nvidia-ami: Deleting temporary keypair...
Build 'amazon-ebs.amazonlinux2-nicedcv-nvidia-ami' errored after 1 second 820 milliseconds: Error launching source instance: InvalidBlockDeviceMapping: Invalid device name /dev/xvda1
        status code: 400, request id: 2c419989-dd53-48ef-bcb6-6e54e892d152

Would anyone be able to share some pointers on what I’d need to do to increase the volume size in this instance and why?
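For context, `launch_block_device_mappings` expects the AMI's root *device* name, not a partition: the `InvalidBlockDeviceMapping` error above is what EC2 returns for a partition name like `/dev/xvda1`. On Amazon Linux 2 AMIs the root device name is typically `/dev/xvda`, so a plausible sketch (untested here, other required builder settings omitted) would be:

```hcl
# Sketch: map the root volume by the AMI's root device name, which for
# Amazon Linux 2 is typically /dev/xvda (a device, not a partition
# such as /dev/xvda1).
source "amazon-ebs" "example" {
  # ... region, source_ami_filter, instance_type, ssh_username, etc.

  launch_block_device_mappings {
    device_name           = "/dev/xvda"
    volume_size           = 20
    volume_type           = "gp2"
    delete_on_termination = true
  }
}
```

The exact root device name of a given AMI can be confirmed with `aws ec2 describe-images --image-ids <ami-id> --query 'Images[].RootDeviceName'`.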


amazon web services – Path of connection between two EC2 instances

I have an EC2 instance running in my own VPC. One of my partners also has an EC2 instance running in their own VPC in AWS. The two instances connect to each other via TCP to exchange data; the connection is made through their DNS addresses.

I am wondering about two scenarios:

  • The instances are in separate regions
  • The instances are in the same region

What is the path taken by the TCP connection between the two instances? Does it matter that they both live within AWS? When the instances are in the same region, does the connection ever leave AWS to an external network switch / router?
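If you want a guaranteed private path rather than relying on how the public DNS name happens to resolve, VPC peering keeps the traffic on AWS's network in both the same-region and cross-region cases. A minimal Terraform sketch, where the resource names, variables, and CIDRs are all assumptions for illustration:

```hcl
# Sketch (assumed names/CIDRs): peer two VPCs so instance-to-instance
# traffic is routed privately over AWS's network instead of public IPs.
resource "aws_vpc_peering_connection" "partner" {
  vpc_id        = aws_vpc.mine.id       # requester VPC (assumed resource)
  peer_vpc_id   = var.partner_vpc_id    # accepter VPC in the partner account
  peer_owner_id = var.partner_account_id
  # For cross-region peering, also set peer_region.
}

# Route from my route table to the partner's CIDR via the peering connection.
resource "aws_route" "to_partner" {
  route_table_id            = aws_route_table.private.id
  destination_cidr_block    = var.partner_vpc_cidr
  vpc_peering_connection_id = aws_vpc_peering_connection.partner.id
}
```

The partner would accept the peering request and add the mirror-image route on their side; the instances then talk over private IPs.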


amazon s3 – assets hosted on S3 behind Cloudfront used by several domains, the access-control-allow-origin does not vary

I have the following Terraform:

# Bucket to put backendphp assets (images/css/js)
#
resource "aws_s3_bucket" "assets" {
  bucket = local.workspace["assets_domain_name"]
  acl    = "public-read"

  cors_rule {
    allowed_headers = ["*"]
    allowed_methods = ["GET"]
    allowed_origins = [
      "https://app.example.com",
      "https://admin.example.com",
      "https://backoffice.example.com"
    ]
    expose_headers  = ["ETag"]
    max_age_seconds = 3000
  }
}


# pre-existing policy defined by AWS
data "aws_cloudfront_origin_request_policy" "this" {
  name = "Managed-CORS-S3Origin"
  # not compatible with tags
}

# pre-existing policy defined by AWS
data "aws_cloudfront_cache_policy" "this" {
  name = "Managed-CachingOptimized"
  # not compatible with tags
}

# Cloudfront distribution for "http" to "https" redirections for 'assets' subdomains
resource "aws_cloudfront_distribution" "assets_distribution" {
  origin {
    custom_origin_config {
      http_port              = "80"
      https_port             = "443"
      origin_protocol_policy = "http-only"
      origin_ssl_protocols   = ["TLSv1", "TLSv1.1", "TLSv1.2"]
    }

    domain_name = aws_s3_bucket.assets.bucket_regional_domain_name
    origin_id   = aws_s3_bucket.assets.bucket
  }

  enabled             = true
  default_root_object = "index.html"

  default_cache_behavior {
    viewer_protocol_policy = "redirect-to-https"
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    target_origin_id       = aws_s3_bucket.assets.bucket
    min_ttl                = 0

    origin_request_policy_id = data.aws_cloudfront_origin_request_policy.this.id
    cache_policy_id          = data.aws_cloudfront_cache_policy.this.id
  }

  aliases = [
    aws_s3_bucket.assets.bucket,
  ]

  viewer_certificate {
    acm_certificate_arn = aws_acm_certificate.assets_certificate.arn
    ssl_support_method  = "sni-only"
  }

  depends_on = [
    aws_acm_certificate.assets_certificate,
  ]
}

The goal is to have the assets stored once, and usable by the different “frontends” of my web apps.

As I use subresource integrity, i.e. <link rel="stylesheet" href="https://assets.example.info/build/tailwind.784de744.css" crossorigin="anonymous" integrity="sha384-K5W1t5mSLgPoYODxKuVqxYbaCfZko17QXhZn2cJKeIBgTpmpKoNLRZn+msahlR81">, it triggers CORS verification.

  • if I use the S3 bucket directly, it works: the CSS is loaded on all domains
  • if I use the S3 bucket and put * as the allowed origin, it works through CloudFront

HOWEVER

  • if I put the list of specific, whitelisted allowed origins,
  • and I visit app.example.com first, the CSS is loaded correctly,
  • but when I then visit admin.example.com, the CSS is not loaded, because CloudFront has cached “access-control-allow-origin: https://app.example.com”

It seems to be the error described in this other question,

but that was solved by whitelisting the Origin header so that CloudFront uses it to generate its cache key.

HOWEVER, I already do that by using the Managed-CORS-S3Origin policy.

Is there something I’m missing here, or has something changed since then?
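The symptom described above points at the cache key rather than the forwarding: Managed-CORS-S3Origin is an *origin request* policy, so it only controls what CloudFront forwards to S3, while the cache key comes from the *cache* policy, and Managed-CachingOptimized includes no headers in the key. One way out is a custom cache policy that adds Origin to the cache key, so CloudFront stores one variant per origin. A hedged Terraform sketch, where the resource name and TTL values are assumptions:

```hcl
# Sketch: a custom cache policy that adds the Origin header to the
# cache key, so CloudFront caches one object variant per origin.
resource "aws_cloudfront_cache_policy" "cors_aware" {
  name        = "assets-cors-aware"   # assumed name
  min_ttl     = 0
  default_ttl = 86400
  max_ttl     = 31536000

  parameters_in_cache_key_and_forwarded_to_origin {
    headers_config {
      header_behavior = "whitelist"
      headers {
        items = ["Origin"]
      }
    }
    cookies_config {
      cookie_behavior = "none"
    }
    query_strings_config {
      query_string_behavior = "none"
    }
    enable_accept_encoding_gzip   = true
    enable_accept_encoding_brotli = true
  }
}
```

It would then be referenced from the default_cache_behavior via `cache_policy_id = aws_cloudfront_cache_policy.cors_aware.id` in place of the managed Managed-CachingOptimized policy.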

How to migrate a SQL Server Erwin Mart database to Aurora (Amazon RDS)

I want to migrate a SQL Server Erwin data mart database to Aurora, and I am trying to figure out the easiest/quickest way to do that.

The options seem to me to be:

  1. Saving models to the file system, repointing the application to the new mart database, then loading from the file system into the new database.
    https://support.erwin.com/hc/en-us/articles/360003443452-Java-scripts-that-automatically-save-a-mart-s-models-offline-to-a-drive
    https://support.erwin.com/hc/en-us/articles/115002674131-ERWIN-DATA-MODELER-MART-API-RESOURCE-PAGE

Has anyone got any experience using these APIs?

  2. Export/Import.
    MySQL Workbench migration tool: https://www.mysql.com/products/workbench/migrate/
    Amazon migration tool
    Does anyone know if the schema is the same? Can I simply export/import the data?
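On the “Amazon migration tool” option: AWS Database Migration Service (DMS) can copy the table data from SQL Server into Aurora, though it will not verify that the mart application itself works against the new engine. A hedged Terraform sketch of the moving parts, where every identifier, host, and variable is an assumption:

```hcl
# Sketch (assumed identifiers): an AWS DMS replication instance plus
# source/target endpoints for a SQL Server -> Aurora data copy.
resource "aws_dms_replication_instance" "mart" {
  replication_instance_id    = "erwin-mart-migration"
  replication_instance_class = "dms.t3.medium"
  allocated_storage          = 50
}

resource "aws_dms_endpoint" "source" {
  endpoint_id   = "erwin-mart-sqlserver"
  endpoint_type = "source"
  engine_name   = "sqlserver"
  server_name   = "sqlserver.example.internal"  # assumed host
  port          = 1433
  username      = var.src_user
  password      = var.src_password
  database_name = "MartDB"                      # assumed name
}

resource "aws_dms_endpoint" "target" {
  endpoint_id   = "erwin-mart-aurora"
  endpoint_type = "target"
  engine_name   = "aurora"
  server_name   = aws_rds_cluster.mart.endpoint  # assumed cluster resource
  port          = 3306
  username      = var.dst_user
  password      = var.dst_password
}
```

A DMS replication task would then tie the two endpoints together; whether the resulting schema is compatible with the mart application is a separate question.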

amazon web services – “This operation requires an interactive window station” error when launching GUI via Jenkins on AWS Windows instance

I have a Windows EC2 instance which runs a build server for a Unity game, controlled by Jenkins.

When running Unity with the -batchMode command-line flag, I can make the game build successfully.

I’d like to run some automated tests inside Unity which require the physics system to be running, which can’t happen in batch mode. If I remove that command-line flag, I get this error:

<I> Failed to get cursor position:
This operation requires an interactive window station.

I know the GPU is powerful enough to run the game: if I remote desktop in, I can run it at 30 fps.

How do I get my EC2 instance to run a “window station” so this launches successfully?


How to get Amazon campaign working correctly?

I’ve tried to get an Amazon ad campaign up and running on Seller Central, but I’m confused: whilst the campaign is flagged as “Delivering” under the Campaign Manager section of Seller Central, I am not seeing any data on the charts to suggest that any clicks have been made, and I have not had any charges. It looks to me like the campaign is not actually active and that I’ve done something wrong in its setup.

Any ideas what I can do to get this campaign running properly? What I imagine I should see is my product ad showing up in search results as “Sponsored”, with charges for clicks on it.