deployment – WordPress hosted on AWS Route 53

I was contacted by a colleague of mine to help him remove a WordPress site from a domain that’s managed in Route 53, so that we can deploy a new site. However, having gone through the documentation in full and done some research on how to take the site off the domain, I still haven’t worked it out. The site doesn’t have an EC2 instance; it appears to run on Route 53 itself.

How can I access the WordPress site’s files, remove the site from my domain, and deploy a new one?
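
For context on what may be happening: Route 53 only answers DNS queries; the WordPress files themselves live on whatever host the domain’s records point to (an EC2 instance, Lightsail, shared hosting, etc.). Listing the records shows the actual host. A quick sketch using boto3, with a placeholder hosted-zone ID:

    # Route 53 holds DNS records, not site files; find where the domain points.
    # The hosted-zone ID below is a placeholder.
    import boto3

    r53 = boto3.client("route53")
    resp = r53.list_resource_record_sets(HostedZoneId="Z0000000EXAMPLE")
    for rec in resp["ResourceRecordSets"]:
        if rec["Type"] in ("A", "AAAA", "CNAME"):
            target = (rec.get("AliasTarget", {}).get("DNSName")
                      or [v["Value"] for v in rec.get("ResourceRecords", [])])
            print(rec["Name"], rec["Type"], target)

Whatever those records resolve to is where the site files would have to be removed or replaced; deleting or repointing the records only changes where the domain sends visitors.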

GitLab Azure Portal deployment

Before the update to the Microsoft Azure Deployment Center, I could connect my GitLab repository and the portal successfully fetched the commits. Since the update, I can no longer deploy my GitLab repository through the Azure portal the same way. Does anyone know how to fix this?

A screenshot was attached for reference. Thanks!

continuous delivery – Blurred lines between deployment (Terraform) and build processes (Bazel) leading to an awkward build and release process

I am building a system that consists of multiple programs on many machines, some cloud services (such as RDS) and so on.

In an ideal world, I would like to supply some configuration (e.g. deployment keys, AWS credentials) and run a single “deploy all” command that will build and deploy everything.

I would also like it to be smart enough to not rebuild artefacts that have already been built or redeploy infrastructure that already exists.

Currently, I am using Bazel to build my artefacts (.so, .jar, Docker images, etc.) and Terraform to provision my infrastructure (EC2, RDS, etc.).

Each of these tools is very good at what it does, and together, they cover builds and deployments. However, neither does everything (the desired “deploy all” command) and there are cases where they must interact in awkward ways.

For example, suppose I have a microservice written in JavaScript. It is compiled and bundled by Bazel. The bundle is then included in a Docker image along with some secrets generated by Terraform. The Docker image is built by Bazel. Finally, the Docker image is deployed using Terraform!

  1. Bazel builds the application code
  2. Terraform generates / fetches secrets
  3. Bazel builds a Docker image
  4. Terraform deploys the Docker image

I am jumping between the two tools and it doesn’t feel like the right way to approach this.

  • Should I wrap Terraform in Bazel and only interact with Bazel?
  • Should I wrap Bazel in Terraform and only interact with Terraform?
  • Should I use some third tool to manage them? (A sketch of this option follows the list.)
  • How can I resolve this?
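
For what it’s worth, the third-tool option can be as thin as a script that pins the order of operations and leans on each tool’s own idempotency: Bazel’s cache skips artefacts that are already built, and Terraform’s state skips resources that already exist. A minimal sketch in Python, with made-up target labels and an infra/ directory for the Terraform workspace:

    #!/usr/bin/env python3
    """Sketch of a single "deploy all" command driving Bazel and Terraform.
    Target labels and directories are placeholders, not the real project's."""
    import subprocess

    def run(*cmd: str, cwd: str = ".") -> None:
        print("+", " ".join(cmd))
        subprocess.run(cmd, cwd=cwd, check=True)

    def deploy_all() -> None:
        # 1. Build the application code; Bazel's cache makes this cheap
        #    when nothing has changed.
        run("bazel", "build", "//services/api:bundle")
        # 2. Provision/refresh the secrets the image build will consume.
        run("terraform", "apply", "-auto-approve", "-target=module.secrets", cwd="infra")
        # 3. Build the Docker image that embeds the bundle and secrets.
        run("bazel", "run", "//services/api:image")
        # 4. Deploy everything else; Terraform's state skips what already exists.
        run("terraform", "apply", "-auto-approve", cwd="infra")

    if __name__ == "__main__":
        deploy_all()

The wrapper stays dumb on purpose; the incremental behaviour comes from the two tools underneath it.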

deployment – How to connect Netbeans IDE to AWS Bitnami Drupal 8 installation?

I’m hosting a Drupal 8 installation on AWS using the recommended Bitnami image. I’d like to be able to edit some of the files remotely from my computer using the NetBeans IDE. I’ve successfully created a NetBeans project and I’m connected remotely to the installation at /opt/bitnami/drupal. However, the project doesn’t show key directories there, like /modules, /sites, /profiles or /themes. I can’t edit the associated files.

Those directories are all owned by root with permissions 0777.

Has anyone run into this and solved it?
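
One thing worth ruling out, though this is only a guess: Bitnami stacks sometimes manage paths like these as symlinks, and some SFTP-based IDE views skip or fail to follow them, which would explain directories that are visible over SSH but absent from the project tree. A quick check over SFTP using paramiko, with the host and key path as placeholders:

    # List /opt/bitnami/drupal and flag which entries are symlinks that an
    # SFTP client might not follow. Host, user, and key are placeholders.
    import os
    import stat
    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("ec2-xx-xx-xx-xx.compute.amazonaws.com",
                   username="bitnami",
                   key_filename=os.path.expanduser("~/.ssh/my-key.pem"))
    sftp = client.open_sftp()

    for entry in sftp.listdir_attr("/opt/bitnami/drupal"):
        if stat.S_ISLNK(entry.st_mode):
            kind = "symlink"
        elif stat.S_ISDIR(entry.st_mode):
            kind = "dir"
        else:
            kind = "file"
        print(f"{kind:8} {oct(entry.st_mode & 0o7777):>7} {entry.filename}")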

active directory – Hyper-V server deployment system requirements: AD, file server, SQL Server

We are looking to deploy a new system to meet new conformance requirements. Instead of setting up 6 different servers, I decided it would probably be better to set up 2 Hyper-V hosts with 3 VMs each to run our DCs, file servers, and SQL Server.

I’ve been assured that we can do this; I’m not sure it’s the best solution, but it’s what we’ve got thus far. Anyway, the problem is that the quotes keep coming in at $30,000+, while running the requirements in my head I keep arriving at much lower costs. Am I missing something?

Requirements:

  • 30-40 Users
  • 50-60 Devices
  • Server Failover
  • 8 TB RAID 10 (16 TB Total)
  • Active Directory
  • DNS
  • DHCP
  • Fileserver
  • SQL Server

What I think will do:

  • 2–4 x Windows Server 2019 licenses (16 cores each)
  • 2 x 16-core Xeon or Epyc CPUs
  • 8 x 16 GB RAM (64 GB for each Hyper-V host)
  • 8 x 4 TB SATA HDD (32 TB raw / 2 servers = 16 TB per server, which RAID 10 halves to 8 TB usable per server)

What they are recommending:

  • 40 x Windows Server 2019 2-core licenses (80 cores!?!?)
  • 4 x 10-core Xeon CPUs (Do threads count as cores? That’s the only way I can make sense of the 80 cores above.)
  • 16 x 16 GB RAM (128 GB per server, 256 GB total)
  • 4 x 240 GB SATA SSD (OS drives)
  • 12 x 6 TB SATA HDD (data drives; 72 TB raw total, 36 TB per server, 18 TB usable in RAID 10)

I’m sorry to ask, but every time I talk to them to ask why, they give me a bunch of sales jargon that doesn’t make much sense from a technical perspective.
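
For context on the license counts: Windows Server 2019 Standard is licensed per physical core (hyper-threads do not count), sold in 2-core packs, with a minimum of 16 cores licensed per host, and each complete licensing of a host covers only 2 VMs; every additional pair of VMs requires licensing all of the host’s cores again. Under those rules the vendor’s 80-core figure is at least internally consistent, as this quick sanity check shows (hardware numbers taken from the quotes above):

    import math

    def std_core_licenses(cores_per_host: int, vms_per_host: int) -> int:
        """Core licenses per host under Windows Server 2019 Standard:
        all physical cores (minimum 16), licensed once per pair of VMs."""
        licensed_cores = max(cores_per_host, 16)
        stacks = math.ceil(vms_per_host / 2)   # one full licensing per 2 VMs
        return licensed_cores * stacks

    hosts = 2
    # Vendor hardware: 2 x 10-core Xeons per host, 3 VMs each.
    print(hosts * std_core_licenses(20, 3))   # 80 cores = 40 x 2-core packs
    # Proposed hardware: 1 x 16-core CPU per host, 3 VMs each.
    print(hosts * std_core_licenses(16, 3))   # 64 cores = 4 x 16-core licenses
    # Caveat: full failover (all 6 VMs on one surviving host) would need
    # math.ceil(6 / 2) = 3 licensings of that host, or Datacenter edition.

So threads do not count as cores for licensing; the 80 comes from licensing each 20-core host twice so it can run more than 2 VMs.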

Safe deployment for database content

My application is deployed on several servers that all read data from a single database.

Following safe deployment practices, code deployments happen first on a subset of servers and only continue to the remaining servers if no issues are detected.

I’d like to also employ safe deployment practices for some critical data that is stored in the database. When appending new rows to a particular table, I’d like those rows to only be consumed by a subset of servers. The remaining servers will only consume the data if no issues are detected for a period of time.

How can I do this given that I only have a single database?
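
One common way to do this with a single database (a sketch; the table, column, and environment-variable names are invented for illustration): tag each new row with a rollout stage, and make each server’s query depend on the deployment group that server belongs to. New rows start as canary rows and are promoted after the bake period.

    # Stage-aware reads against one shared database. The table
    # `critical_data`, its `stage` column, and the ROLLOUT_GROUP variable
    # are all hypothetical. sqlite3 stands in for the real driver.
    import os
    import sqlite3

    ROLLOUT_GROUP = os.environ.get("ROLLOUT_GROUP", "stable")  # "canary" or "stable"

    def visible_rows(conn: sqlite3.Connection) -> list:
        if ROLLOUT_GROUP == "canary":
            # Canary servers consume everything, including unbaked rows.
            sql = "SELECT * FROM critical_data"
        else:
            # The remaining servers only consume promoted rows.
            sql = "SELECT * FROM critical_data WHERE stage = 'stable'"
        return conn.execute(sql).fetchall()

    # After the bake period, a scheduled job promotes the rows, e.g.:
    #   UPDATE critical_data SET stage = 'stable'
    #   WHERE stage = 'canary' AND inserted_at < :cutoff

Rolling back is then just deleting (or re-flagging) the canary rows before they are promoted.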

sql server – SSRS HA Deployment Advice

I wanted to ask if my design is on the right track for installing SSRS.
Currently I have 2 nodes in an Always On AG (SQL Server 2017 Enterprise Edition).
There is an application update coming up that will require SSRS. No users will access the SSRS portal directly; the application will display the reports.

If the SSRS server is not available, everything in the application will still work except the section that shows the reports.

My question is: should I stand up SSRS on the two SQL nodes in a scale-out deployment and have them sitting behind my NetScaler load balancer?
Or should I set up a 3rd box with SSRS installed separately to keep the config simple, like a 3rd node (sync or async) with manual failover, to host the SSRS server and its databases?

I’m just trying to see which approach will give me the least amount of headaches in the future.

I think the scale-out deployment is a good way to go, since it avoids yet another server to manage, but it makes me depend on the NetScaler admin to get the setup right. I would like to get some feedback from you guys.

Thanks,

aws – Best choice of Amazon services for deployment of a video processing mobile app

I need help deciding on the backend infrastructure for our senior design project. We will build a mobile application (in Flutter) where users record videos and/or audio of themselves and get emotion/mood predictions. However, I did some research, and as far as I can see, video streaming might be time-consuming once you account for recording, sending to the backend, and then processing to get a prediction.

Briefly, we will end up with an ML model, a Node.js RESTful API, and a Flutter mobile app. We also plan to deploy the project on AWS. Which services should we use to design a performant backend?

I found AWS’s Kinesis Video Streams (KVS) service, but I do not know if it is a good choice, and I do not know how to process videos coming from it.

If someone can give me a simple idea of the software architecture, I would be glad to hear about it.
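
One simple architecture worth considering, sketched here with invented bucket/queue names (and in Python for brevity, though the real API would be Node.js): skip live streaming entirely. Since predictions are made on finished recordings, the app can upload each clip straight to S3 through a presigned URL, and an S3 event notification can queue the object for the ML model to process asynchronously; KVS is aimed more at continuous live streams.

    # Upload-then-process flow: the API hands out presigned S3 URLs, and a
    # worker consumes an SQS queue fed by the bucket's ObjectCreated events.
    # Bucket and queue names are placeholders.
    import boto3

    s3 = boto3.client("s3")
    sqs = boto3.client("sqs")

    def issue_upload_url(user_id: str, clip_id: str) -> str:
        """API endpoint: give the mobile app a short-lived PUT URL."""
        return s3.generate_presigned_url(
            "put_object",
            Params={"Bucket": "emotion-clips", "Key": f"{user_id}/{clip_id}.mp4"},
            ExpiresIn=300,
        )

    def worker_loop() -> None:
        """Backend worker: pull uploaded clips and run the model on them."""
        queue_url = sqs.get_queue_url(QueueName="clips-to-score")["QueueUrl"]
        while True:
            msgs = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=20)
            for msg in msgs.get("Messages", []):
                # download the object, run inference, store the prediction...
                sqs.delete_message(QueueUrl=queue_url,
                                   ReceiptHandle=msg["ReceiptHandle"])

The prediction can then be returned to the app by polling or a push notification, and the worker fleet can scale with queue depth.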