amazon rds – Storing Analytics Data for Multi-Tenant SaaS with AWS Aurora

I have an app where a user can upload an Excel sheet of analytics data to S3. I want to trigger a Lambda function on upload to do some data processing and then write the analytics to the client organization's database (I am using Aurora). Eventually we will be capturing live clickstream data, but for now we are just using generated reports.

My question is: is it best practice to just keep all the analytics in one database table, with thousands and thousands of events as so many rows? I have seen that a table supports a maximum of a little over 4.29 billion rows, but does that mean I can just pile events into a giant table until then? If I am potentially getting 50k rows per month, should I just not worry about it until I see a performance hit (if I ever see one)? Or am I, as a newbie, worrying over nothing?

Ideally I don't just want to make this thing work; I want to learn how to build something that lasts and scales. Reading the Aurora docs, it sounds like this shouldn't be an issue, but I don't know whether I'm simply not seeing something that will become one.
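For what it's worth, the trigger-and-route part of the pipeline described above can be sketched as a Lambda handler. This is a minimal sketch, and the key layout (tenant org id as the first path segment) and all helper names are assumptions for illustration, not the app's actual design:

```python
import urllib.parse


def parse_s3_event(event):
    """Extract (bucket, key) pairs from the S3 put event Lambda receives."""
    pairs = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # S3 URL-encodes object keys in event notifications
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        pairs.append((bucket, key))
    return pairs


def tenant_from_key(key):
    """Derive the tenant organization from a key like 'org-123/reports/jan.xlsx'.

    The 'org id as first path segment' layout is a hypothetical convention.
    """
    return key.split("/", 1)[0]


def handler(event, context):
    for bucket, key in parse_s3_event(event):
        tenant = tenant_from_key(key)
        # Download the sheet, parse the rows, and write them to the tenant's
        # Aurora database here (boto3 / openpyxl / DB driver details omitted).
        print(f"processing s3://{bucket}/{key} for tenant {tenant}")
```

Routing on a key prefix like this keeps the Lambda stateless: each upload carries enough information to find the right tenant database without any extra lookup.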

Thanks for any advice and feedback!

weapons – Order of operations for Aurora property Lasers

Laser weapons cannot damage invisible creatures. The Aurora property negates invisibility for 1 minute on a hit. At the moment I am assuming that (a) you can still make attacks against an invisible target with a laser, and (b) a hit still applies non-damage effects (correct me if either of these is wrong). This would mean that a laser with Aurora (such as via a Mechanic's prototype weapon) would still negate invisibility for future shots.

If I fire a laser weapon with the Aurora property, does the (lack of) damage occur before their invisibility is removed, or after?

Relevant rules text:

Laser weapons emit highly focused beams of light that deal fire damage. These beams can pass through glass and other transparent physical barriers, dealing damage to such barriers as they pass through. Barriers of energy or magical force block lasers. Invisible creatures don’t take damage from lasers, as the beams pass through them harmlessly. Fog, smoke, and other clouds provide both cover and concealment from laser attacks. Lasers can penetrate darkness, but they don’t provide any illumination.

When an aurora weapon strikes a target, the creature glows with a soft luminescence for 1 minute. This negates invisibility effects and makes it impossible for the target to gain concealment from or hide in areas of shadow or darkness.

amazon rds – How do I connect to a serverless Aurora MySQL database from outside the VPC using MySQL Workbench?

I'm having a lot of problems connecting to Aurora Serverless. I ran the wizard and put the database on public subnets with a security group that allows traffic on port 3306. There are no ACLs blocking traffic, and I still can't connect!

I also tried starting a regular RDS MySQL t2 instance with the same security groups and the same subnets, and I cannot connect to that either! Is there anything funky you have to do with an Aurora Serverless database? I will now try to start an EC2 instance, SSH in, and see if I can connect from there.

Additional information: I scale the database down to 0 ACU when it is inactive, but even if I set a capacity for a period of time and then connect, it does not work. I literally have no idea why this doesn't work; the username/password are correct, and I have tried resetting them. Is there anything outside the normal troubleshooting steps that you need to do?
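Before digging further into credentials, it can help to separate network reachability from authentication. A minimal probe, plain Python with no MySQL driver needed:

```python
import socket


def can_reach(host, port=3306, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout.

    This tests only the network path (security groups, subnets, public
    accessibility); if it fails, no username/password will help.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns False from your machine but True from an EC2 instance inside the same VPC, the problem is network-level reachability of the endpoint rather than anything about the credentials or the database itself.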

Optimization – Aurora PostgreSQL picks a slower query plan than plain PostgreSQL for an identical query?

After migrating an application and its database from a classic PostgreSQL instance to an Amazon Aurora RDS PostgreSQL database (both on version 9.6), we found that a particular query runs about ten times slower on Aurora than on plain PostgreSQL.

Both databases have the same configuration, both in hardware and in postgresql.conf.

The query itself is fairly simple. It is generated by our backend, which is written in Java and uses jOOQ to build queries:

with "all_acp_ids"("acp_id") as (
    select acp_id from temp_table_de3398bacb6c4e8ca8b37be227eac089
)
select distinct "public"."f1_folio_milestones"."acp_id"
from "public"."f1_folio_milestones" 
left outer join 
    "public"."sa_milestone_overrides" on (
        "public"."f1_folio_milestones"."milestone" = "public"."sa_milestone_overrides"."milestone" 
        and "public"."f1_folio_milestones"."view" = "public"."sa_milestone_overrides"."view" 
        and "public"."f1_folio_milestones"."acp_id" = "public"."sa_milestone_overrides"."acp_id"
    )
where "public"."f1_folio_milestones"."acp_id" in (
    select "all_acp_ids"."acp_id" from "all_acp_ids"
)

Here, temp_table_de3398bacb6c4e8ca8b37be227eac089 is a single-column temporary table. f1_folio_milestones (about 17 million rows) and sa_milestone_overrides (around 1 million rows) are similarly designed tables with indexes on all the columns used in the LEFT OUTER JOIN.

When we run it in the normal PostgreSQL database, the following query plan is generated:

Unique  (cost=4802622.20..4868822.51 rows=8826708 width=43) (actual time=483.928..483.930 rows=1 loops=1)
  CTE all_acp_ids
    ->  Seq Scan on temp_table_de3398bacb6c4e8ca8b37be227eac089  (cost=0.00..23.60 rows=1360 width=32) (actual time=0.004..0.005 rows=1 loops=1)
  ->  Sort  (cost=4802598.60..4824665.37 rows=8826708 width=43) (actual time=483.927..483.927 rows=4 loops=1)
        Sort Key: f1_folio_milestones.acp_id, (COALESCE(, f1_folio_milestones.team_responsible))
        Sort Method: quicksort  Memory: 25kB
        ->  Hash Left Join  (cost=46051.06..3590338.34 rows=8826708 width=43) (actual time=483.905..483.917 rows=4 loops=1)
              Hash Cond: ((f1_folio_milestones.milestone = sa_milestone_overrides.milestone) AND (f1_folio_milestones.view = (sa_milestone_overrides.view)::text) AND (f1_folio_milestones.acp_id = (sa_milestone_overrides.acp_id)::text))
              ->  Nested Loop  (cost=31.16..2572.60 rows=8826708 width=37) (actual time=0.029..0.038 rows=4 loops=1)
                    ->  HashAggregate  (cost=30.60..32.60 rows=200 width=32) (actual time=0.009..0.010 rows=1 loops=1)
                          Group Key: all_acp_ids.acp_id
                          ->  CTE Scan on all_acp_ids  (cost=0.00..27.20 rows=1360 width=32) (actual time=0.006..0.007 rows=1 loops=1)
                    ->  Index Scan using f1_folio_milestones_acp_id_idx on f1_folio_milestones  (cost=0.56..12.65 rows=5 width=37) (actual time=0.018..0.025 rows=4 loops=1)
                          Index Cond: (acp_id = all_acp_ids.acp_id)
              ->  Hash  (cost=28726.78..28726.78 rows=988178 width=34) (actual time=480.423..480.423 rows=987355 loops=1)
                    Buckets: 1048576  Batches: 1  Memory Usage: 72580kB
                    ->  Seq Scan on sa_milestone_overrides  (cost=0.00..28726.78 rows=988178 width=34) (actual time=0.004..189.641 rows=987355 loops=1)
Planning time: 3.561 ms
Execution time: 489.223 ms

And it runs pretty smoothly, as you can see – less than a second for the query.
In the Aurora instance, however, this happens:

Unique  (cost=2632927.29..2699194.83 rows=8835672 width=43) (actual time=4577.348..4577.350 rows=1 loops=1)
  CTE all_acp_ids
    ->  Seq Scan on temp_table_de3398bacb6c4e8ca8b37be227eac089  (cost=0.00..23.60 rows=1360 width=32) (actual time=0.001..0.001 rows=1 loops=1)
  ->  Sort  (cost=2632903.69..2654992.87 rows=8835672 width=43) (actual time=4577.348..4577.348 rows=4 loops=1)
        Sort Key: f1_folio_milestones.acp_id, (COALESCE(, f1_folio_milestones.team_responsible))
        Sort Method: quicksort  Memory: 25kB
        ->  Merge Left Join  (cost=1321097.58..1419347.08 rows=8835672 width=43) (actual time=4488.369..4577.330 rows=4 loops=1)
              Merge Cond: ((f1_folio_milestones.view = (sa_milestone_overrides.view)::text) AND (f1_folio_milestones.milestone = sa_milestone_overrides.milestone) AND (f1_folio_milestones.acp_id = (sa_milestone_overrides.acp_id)::text))
              ->  Sort  (cost=1194151.06..1216240.24 rows=8835672 width=37) (actual time=0.039..0.040 rows=4 loops=1)
                    Sort Key: f1_folio_milestones.view, f1_folio_milestones.milestone, f1_folio_milestones.acp_id
                    Sort Method: quicksort  Memory: 25kB
                    ->  Nested Loop  (cost=31.16..2166.95 rows=8835672 width=37) (actual time=0.022..0.028 rows=4 loops=1)
                          ->  HashAggregate  (cost=30.60..32.60 rows=200 width=32) (actual time=0.006..0.006 rows=1 loops=1)
                                Group Key: all_acp_ids.acp_id
                                ->  CTE Scan on all_acp_ids  (cost=0.00..27.20 rows=1360 width=32) (actual time=0.003..0.004 rows=1 loops=1)
                          ->  Index Scan using f1_folio_milestones_acp_id_idx on f1_folio_milestones  (cost=0.56..10.63 rows=4 width=37) (actual time=0.011..0.015 rows=4 loops=1)
                                Index Cond: (acp_id = all_acp_ids.acp_id)
              ->  Sort  (cost=126946.52..129413.75 rows=986892 width=34) (actual time=4462.727..4526.822 rows=448136 loops=1)
                    Sort Key: sa_milestone_overrides.view, sa_milestone_overrides.milestone, sa_milestone_overrides.acp_id
                    Sort Method: quicksort  Memory: 106092kB
                    ->  Seq Scan on sa_milestone_overrides  (cost=0.00..28688.92 rows=986892 width=34) (actual time=0.003..164.348 rows=986867 loops=1)
Planning time: 1.394 ms
Execution time: 4583.295 ms

It actually has a lower overall estimated cost, but takes almost ten times as long as before!

Disabling merge joins makes Aurora fall back to a hash join, which gives the expected execution time; permanently disabling them is not an option, though. Oddly enough, disabling nested loops as well gives an even better result while still using a merge join…

Unique  (cost=3610230.74..3676431.05 rows=8826708 width=43) (actual time=2.465..2.466 rows=1 loops=1)
  CTE all_acp_ids
    ->  Seq Scan on temp_table_de3398bacb6c4e8ca8b37be227eac089  (cost=0.00..23.60 rows=1360 width=32) (actual time=0.004..0.004 rows=1 loops=1)
  ->  Sort  (cost=3610207.14..3632273.91 rows=8826708 width=43) (actual time=2.464..2.464 rows=4 loops=1)
        Sort Key: f1_folio_milestones.acp_id, (COALESCE(, f1_folio_milestones.team_responsible))
        Sort Method: quicksort  Memory: 25kB
        ->  Merge Left Join  (cost=59.48..2397946.87 rows=8826708 width=43) (actual time=2.450..2.455 rows=4 loops=1)
              Merge Cond: (f1_folio_milestones.acp_id = (sa_milestone_overrides.acp_id)::text)
              Join Filter: ((f1_folio_milestones.milestone = sa_milestone_overrides.milestone) AND (f1_folio_milestones.view = (sa_milestone_overrides.view)::text))
              ->  Merge Join  (cost=40.81..2267461.88 rows=8826708 width=37) (actual time=2.312..2.317 rows=4 loops=1)
                    Merge Cond: (f1_folio_milestones.acp_id = all_acp_ids.acp_id)
                    ->  Index Scan using f1_folio_milestones_acp_id_idx on f1_folio_milestones  (cost=0.56..2223273.29 rows=17653416 width=37) (actual time=0.020..2.020 rows=1952 loops=1)
                    ->  Sort  (cost=40.24..40.74 rows=200 width=32) (actual time=0.011..0.012 rows=1 loops=1)
                          Sort Key: all_acp_ids.acp_id
                          Sort Method: quicksort  Memory: 25kB
                          ->  HashAggregate  (cost=30.60..32.60 rows=200 width=32) (actual time=0.008..0.008 rows=1 loops=1)
                                Group Key: all_acp_ids.acp_id
                                ->  CTE Scan on all_acp_ids  (cost=0.00..27.20 rows=1360 width=32) (actual time=0.005..0.005 rows=1 loops=1)
              ->  Materialize  (cost=0.42..62167.38 rows=987968 width=34) (actual time=0.021..0.101 rows=199 loops=1)
                    ->  Index Scan using sa_milestone_overrides_acp_id_index on sa_milestone_overrides  (cost=0.42..59697.46 rows=987968 width=34) (actual time=0.019..0.078 rows=199 loops=1)
Planning time: 5.500 ms
Execution time: 2.516 ms

We have asked the AWS support team about it as well, but in the meantime we are wondering what could be causing this. What could explain such a difference in behavior?

Looking at some documentation for the database, I read that the planner optimizes for cost, and therefore uses the query plan with the lowest estimated cost.

But as we can see, that plan is far from optimal given the response time. Is there a threshold or setting that could make the database use a more expensive, but faster, query plan?
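One way to investigate the experiments described above is to A/B the planner settings per session with EXPLAIN ANALYZE, since SET only affects the current connection and leaves production traffic untouched. A minimal sketch; psycopg2 and the DSN are assumptions, while the enable_* settings are standard PostgreSQL planner toggles:

```python
# Map each experiment to the session-level SET statements it needs.
EXPERIMENTS = {
    "default": [],
    "no_mergejoin": ["SET enable_mergejoin = off"],
    "no_mergejoin_no_nestloop": ["SET enable_mergejoin = off",
                                 "SET enable_nestloop = off"],
}


def session_setup(name):
    """Return the SET statements to run before EXPLAIN ANALYZE for one experiment."""
    return EXPERIMENTS[name]


if __name__ == "__main__":
    import psycopg2  # assumed driver; any DB-API driver works the same way

    # hypothetical DSN
    conn = psycopg2.connect("postgresql://user:pass@aurora-host:5432/db")
    with conn, conn.cursor() as cur:
        for name in EXPERIMENTS:
            for stmt in session_setup(name):
                cur.execute(stmt)
            cur.execute("EXPLAIN ANALYZE SELECT 1")  # replace with the real query
            print(name, [row[0] for row in cur.fetchall()])
            cur.execute("RESET ALL")  # restore defaults between experiments
```

If one of the toggled plans is consistently faster, the longer-term fixes are usually statistics (ANALYZE) or cost parameters in the cluster parameter group rather than leaving a planner method disabled globally.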

amazon rds – Second AWS Aurora RDS reader receives no requests

Running a Postgres Aurora RDS cluster with one writer and two readers.

One reader is used heavily; the other is hardly used at all, according to CloudWatch.


The Laravel application this is linked to points its reader host at the reader cluster endpoint, so that Aurora can do the balancing internally. That does not seem to be happening, though; traffic goes almost exclusively to the first reader.

Is there a misconfiguration in the RDS or something else?
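One thing worth checking: the reader endpoint balances via DNS round-robin, so an application that resolves the hostname once and then holds persistent connections can pin every connection to whichever reader the first lookup returned. A small resolver loop shows what the endpoint is actually handing out (the hostname below is a placeholder):

```python
import socket
from collections import Counter


def observed_ips(host, port=5432, attempts=20):
    """Resolve `host` repeatedly and count which IPs come back.

    Against an Aurora reader endpoint: seeing only one IP across many
    attempts suggests client-side DNS caching; seeing both readers means
    the imbalance comes from connection reuse, not from DNS.
    """
    counts = Counter()
    for _ in range(attempts):
        infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
        counts[infos[0][4][0]] += 1
    return counts


if __name__ == "__main__":
    # hypothetical reader endpoint
    print(observed_ips("mycluster.cluster-ro-xxxx.eu-west-1.rds.amazonaws.com"))
```

If only one IP ever shows up, the usual culprits are a caching resolver or long-lived pooled connections; reconnecting periodically or lowering the DNS cache TTL lets the round-robin take effect.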

amazon web services – Error enabling or disabling IAM database authentication for the DB cluster (Mule 4 RDS Connector, Aurora)

I created the DB cluster with the AWS CLI, and I wanted to create a DB instance within that cluster using the Mule 4 RDS connector. If I give the values

  • DBclustername: cluster_name
  • DbInstance class: db.t2.small
  • DB Instance Identifier: testdbinstance
  • Engine: Aurora

it gives the following error:

The requested DB Instance will be a member of a DB Cluster. Enable or disable IAM database authentication for the DB Cluster. (Service: AmazonRDS; Status Code: 400; Error Code: InvalidParameterCombination; Request ID: e2ccfd13-f684-400f-b83f-52943bea854b)

but when creating the cluster, I set IAM database authentication to false.
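My reading of that error is that for Aurora cluster members, IAM database authentication is controlled at the cluster level, and sending the instance-level flag at all (even as false) can be rejected as an invalid parameter combination. A sketch of the equivalent call without the flag, using boto3 instead of the Mule connector (the helper name and this interpretation of the error are mine):

```python
def cluster_member_params(cluster_id, instance_id,
                          instance_class="db.t2.small", engine="aurora"):
    """Parameters for rds.create_db_instance when the instance joins a cluster.

    The instance-level EnableIAMDatabaseAuthentication flag is deliberately
    absent: for Aurora, IAM auth is a cluster-level setting, and including
    the flag for a cluster member may trigger InvalidParameterCombination.
    """
    return {
        "DBClusterIdentifier": cluster_id,
        "DBInstanceIdentifier": instance_id,
        "DBInstanceClass": instance_class,
        "Engine": engine,
    }


if __name__ == "__main__":
    import boto3  # assumed AWS SDK

    rds = boto3.client("rds")
    rds.create_db_instance(**cluster_member_params("cluster_name",
                                                   "testdbinstance"))
```

If the Mule connector always sends the flag, creating the instance via the CLI or SDK as above may be the workaround.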

Apply a DDL statement with no downtime on Aurora MySQL

I have an Aurora MySQL 5.6.10a DB cluster with one master and one reader, to which I need to apply a zero-downtime migration. I am trying to make a comments column accept emojis by updating its encoding to utf8mb4. Since the change touches millions of rows, it is an expensive one.

alter table tablename modify `comments` text CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci DEFAULT NULL;

The AWS documentation describes running the update on a replica of the production database and then using failover to swap the instances. However, I am running into problems with the query failing. The steps I have taken are:

1) Updated the cluster parameter group for the replica with the new encodings/collations. Waited for it to sync.

2) Updated the read_only flag to false for the replica, in both the cluster parameter group and the instance parameter group. Waited for the update to apply. I also rebooted.

3) SELECT @@global.read_only; shows 0, so that value has been updated.

Running the DDL query produces the following results:

ERROR 1290 (HY000): The MySQL server is running with the --read-only option so it cannot execute this statement

I also received:

ERROR 2013 (HY000): Lost connection to MySQL server during query

In the back of my mind, I realize that my whole approach may be a bad one. Instead, I could simply add a new column to the table and update my app to point at the new column, which reduces the risk.

But now that I have spent two hours on this DDL, I am invested, and I would hate not to understand why I am being prevented from performing the steps above. I suspect this is a limitation of Aurora replication based on the type of query, but I would like a definitive answer on what is actually going on.


OK, so adding a new column to my table still seems to result in a table lock. I did not think that would happen. What is the best course of action here?
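If the new-column route still locks the table, the usual fallback is to create the utf8mb4 column once, backfill it in small batches so each transaction stays short, and then switch the application over. A sketch of the batching side; the id column and the new column name are assumptions:

```python
def batch_ranges(min_id, max_id, batch_size):
    """Yield inclusive (start, end) primary-key ranges for a batched backfill."""
    start = min_id
    while start <= max_id:
        end = min(start + batch_size - 1, max_id)
        yield start, end
        start = end + 1


# One short UPDATE per batch instead of one huge table-locking statement.
# 'comments_mb4' is a hypothetical utf8mb4 column added alongside 'comments'.
UPDATE_TEMPLATE = (
    "UPDATE tablename SET comments_mb4 = comments "
    "WHERE id BETWEEN %s AND %s"
)

if __name__ == "__main__":
    for start, end in batch_ranges(1, 2_500_000, 10_000):
        pass  # execute UPDATE_TEMPLATE with (start, end) via your MySQL driver
```

Short batches keep replication lag and lock waits bounded, and a sleep between batches can throttle the load further on a busy cluster.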

Does Amazon Aurora MySQL Serverless support creating functions?

I know that with RDS MySQL and RDS Aurora MySQL there is a way to allow the creation of MySQL functions, by changing log_bin_trust_function_creators to 1.

I do not see this option in Aurora Serverless for MySQL. Is there a way to do this?

Here is an example function:

        return SEC_TO_TIME(duration); 

Aurora Profit

I am not an administrator or owner of the project!!!


Online Date: 2019-10-23

Investment plans: 0.91% hourly for 5 days, capital included

4.8% daily for 40 days, capital included

241% after 5 days, capital included

Minimum deposit: $10

Referral commission: 5%

Withdrawal method: instant

Licensed script

DDoS protection by DDOS-Guard

Accepted payment methods: Perfect Money, Payeer.

About the project:


For those already familiar with the business and for those new to it, we offer a unique and profitable investment strategy. We investigate profitable bitcoin mining and cryptocurrency trading in order to participate in modern financial markets and in the development of mining algorithms. In recent years, our experts have become acquainted with the technical and financial aspects of cryptocurrency. Recently, a fateful decision was made to enter the international investment market and expand the business geographically. The company's financial activities in the UK and beyond are well known, and we are strongly supported by many of our partners and clients around the world.

As you know, most investment companies focus on one direction, which gives them a stable income. This cannot be said about our team, as financially lucrative areas have accumulated on our side. First of all, ours is a group of like-minded professionals in the finance, business, trade and marketing sectors. We are involved in finding and developing opportunities to make money: trading in currencies and cryptocurrencies, and speculating profitably on securities. Our principle is that time is money, so we never waste it and spend it only on generating revenue. Every new day brings us new income. Our experts use several financial instruments at the same time. First and foremost, we focus on trades with fast returns. We do not miss any opportunity to earn money, and we fulfill all obligations to our numerous partners.

The absence of government regulation in the field of cryptocurrency allows us to avoid paying taxes and to reduce the additional costs of premises, offices and a significant workforce. To be more flexible and competitive, management has developed a versatile and interesting marketing plan for investors. The thorough theoretical foundation of our proposal allows us to maintain a surefire business, make timely profits, and pay out in full.

Today, you can participate in the high-yield investment program and start growing. We declare that the project is open to all participants with minimum requirements. If you have any questions, please contact our customer support. We look forward to a long and sustainable partnership with our investors!



Should I add my index before or after filling the table (MySQL Aurora)?

I know this has been asked in the past, but sometimes things change over time and I wanted to check it again.

I have a table with about 9 billion rows. Should I add the indexes before or after inserting the data? I am using Aurora. Does it matter if I am adding more than one index?

Everything I know says you should do it after insertion, but one of my colleagues insists that it is faster to have the index in place during insertion.
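The conventional answer is still to bulk-load first and build indexes afterwards, because maintaining an index row by row during the load costs more than one sort at the end. At 9 billion rows only a benchmark on Aurora itself will settle it, but the effect is easy to demonstrate in miniature. SQLite is used here purely because it is self-contained; the timings illustrate the general mechanism, not Aurora behavior:

```python
import sqlite3
import time


def load(rows, index_first):
    """Create a table, insert `rows` rows, and index it either before or after."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE events (id INTEGER, payload TEXT)")
    if index_first:
        con.execute("CREATE INDEX idx_events_id ON events (id)")
    con.executemany("INSERT INTO events VALUES (?, ?)",
                    ((i, "x") for i in range(rows)))
    if not index_first:
        con.execute("CREATE INDEX idx_events_id ON events (id)")
    con.commit()
    return con


for index_first in (True, False):
    t0 = time.perf_counter()
    con = load(200_000, index_first)
    elapsed = time.perf_counter() - t0
    n, = con.execute("SELECT COUNT(*) FROM events").fetchone()
    print(f"index_first={index_first}: {n} rows in {elapsed:.2f}s")
```

The gap widens with row count and with each additional index, since every extra index adds per-row maintenance work during the load; that is why building all indexes after the bulk insert is the usual recommendation.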