sharepoint online – Logic App deployment to another Azure resource group

I have two resource groups:
1) Azuredev-LogicApp (read/write permission)
2) Azureacc-LogicApp (read permission)

I created a Logic App in Azuredev-LogicApp that updates data in a SharePoint site (a dev environment where I have full permission).
It runs as a recurrence job every 6 months and uses a "send HTTP request" action to update the Modified date.

Now I have to move this Logic App to the other resource group, Azureacc-LogicApp, and point it at another SharePoint site (the acceptance site, where I only have read permission).

As this is my first Logic App, I would like to know how deployment and authorization will work in the acceptance environment.

Will the Logic App run under a user context, similar to Microsoft Flow, and is deployment done with a simple export/import?

virtual machines – How to specify a VM is preemptible in a GCP deployment?

I’m working on an in-house implementation of a Google Cloud deployment for a Docksal sandbox VM instance. The sandboxes contained within can be considered ephemeral and can be rebuilt very easily. Therefore I would like to configure the VM to be preemptible rather than have it always be on.

I’m basing this deployment on this repo: https://github.com/docksal/sandbox-server/tree/develop/gcp-deployment-manager. Specifically, https://github.com/docksal/sandbox-server/blob/develop/gcp-deployment-manager/Docksal.jinja is the file that contains the server resource.

How can/should Docksal.jinja be modified to specify that the VM should be preemptible?
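
In the Compute Engine instance resource, preemptibility is set in the scheduling block, so the change (as I understand it) is to add scheduling.preemptible: true, together with automaticRestart: false and onHostMaintenance: TERMINATE, under the instance's properties in Docksal.jinja. Deployment Manager also accepts Python templates, and the sketch below shows the same properties in that form; everything except the scheduling block (name, zone, machine type, image, network) is a placeholder rather than something taken from the Docksal repo.

# Sketch of a Deployment Manager Python template; Docksal.jinja would add the
# same "scheduling" keys under its instance resource's properties.

def GenerateConfig(context):
    """Return one Compute Engine instance resource marked as preemptible."""
    resources = [{
        'name': 'docksal-sandbox',                    # placeholder name
        'type': 'compute.v1.instance',
        'properties': {
            'zone': 'us-central1-a',                  # placeholder zone
            'machineType': 'zones/us-central1-a/machineTypes/n1-standard-2',
            'disks': [{
                'boot': True,
                'autoDelete': True,
                'initializeParams': {
                    'sourceImage': 'projects/debian-cloud/global/images/family/debian-11',
                },
            }],
            'networkInterfaces': [{'network': 'global/networks/default'}],
            # The relevant part: preemptible instances must not auto-restart
            # and must terminate (not live-migrate) on host maintenance.
            'scheduling': {
                'preemptible': True,
                'automaticRestart': False,
                'onHostMaintenance': 'TERMINATE',
            },
        },
    }]
    return {'resources': resources}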

Dedicated Servers in US/EU/IN ✅ Ready Stock – 24 Hour Deployment ✅ Free 5 IPv4 ✅ Premium Network


Cenchu is a rapidly growing web hosting company based in India. We offer high-quality dedicated servers with unparalleled support around the clock. Our servers are hosted in Tier 3 and Tier 4 datacenter facilities, ensuring greater uptime and reliability.

Our Plans: (Note: the coupon is applied automatically at checkout • These plans are managed; if you choose unmanaged, you get an additional $10 off for the lifetime of the service)

Xeon Quad Core • 16GB RAM • 2TB HDD • 20TB Bandwidth • $99/mo. • Buy Now

Xeon Hexa Core • 24GB RAM • 2TB HDD • 20TB Bandwidth • $139/mo. • Buy Now

Xeon Octa Core • 32GB RAM • 2TB HDD • 20TB Bandwidth • $179/mo. • Buy Now

NOTE: We can offer custom-configuration servers at any of our datacenter facilities. All servers in Kansas include 5 IPv4 addresses; all other locations come with 1 IPv4 address. You can get more IP addresses with any of our servers at any location for a small additional cost, with or without IP justification. If you need servers with multiple IPs, feel free to contact us.

Datacenter Locations:

  • Seattle, USA
  • Chicago, USA
  • Kansas, USA
  • Dallas, USA
  • Los Angeles, USA
  • Miami, USA
  • New York City, USA
  • Phoenix, USA
  • San Francisco, USA
  • Frankfurt, Germany
  • Mumbai, India
  • Pune, India
  • Chennai, India
  • Delhi, India

Why our Dedicated Servers?

  • Hosted in TIER 3 and 4 DC Facilities
  • Quality-Checked, Enterprise-Grade Hardware
  • Free Full Management
  • Free DirectAdmin Control panel
  • Around the clock support from experts
  • 99.99% Uptime SLA
  • 24×7 Monitored Network
  • Free uptime monitoring

Payment Methods Accepted:

– PayPal

– Bitcoin

– UPI, Net-Banking, Indian Wallets, Visa, and MasterCard

Contact us for custom configurations and custom quotes. Doubts or queries? We’re available around the clock to help you out.

deployment – How to prevent Windows 10 feature updates from creating a recovery partition?

In our lab, we have a few dozen dual-boot computers running Windows 10 and Ubuntu, using either legacy (CSM) or UEFI boot depending on the machine. In both cases, GRUB2 is used as the bootloader. All the machines have a custom partitioning scheme without a Windows recovery partition, as the latter is not needed. If a machine has issues, we would rather reimage it from the network (using a PXE-booted Linux).

Now the problem comes with Windows feature updates, the latest being version 2004. When one of these updates installs, it shrinks the Windows system partition to add back a recovery partition. Given its small size, it would be fine to leave it alone, but the operation breaks GRUB: the Linux partition number changes, and the first stage of GRUB cannot use partition labels to find its second stage (even on UEFI machines). If a machine hangs at a GRUB rescue prompt, there is no way to remotely correct the issue, sometimes leaving machines that must remain remotely accessible offline for hours (if not more).

As we don’t have any use for the recovery partition, the question is how to prevent the updates from creating it in the first place. Of course we could leave it there when imaging the machines, but given that we don’t actually want it, that seems like a waste of disk space.

web development – What is a good strategy for moving to Continuous Deployment?

My Goal

I’m currently in the process of trying to get my company to adopt Continuous Deployment (CD) for the web product I work on. As far as I know, we’re the first product in the company to attempt this. I’d like to know about any pain points people have run into when moving to CD, or any tips from people who currently work somewhere that uses CD. Is there a specific strategy you used to move your company to CD?

Steps I’ve taken:

In order to get my product ready for CD, I’ve taken several steps.

  • Metrics, metrics, metrics. Because it’s going to be a culture shift to go from deploying once every 3 months to daily, I know that I’ll need data to support my ideas. I’m starting to track application performance, bugs per release, number of releases, and test coverage percentage.

  • I built out a pipeline that can deploy to a small subset of production servers to make sure a release works without affecting a large number of users (a rough sketch of this idea follows the list).

  • I built out automated UI test suites.
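
For illustration only, here is a minimal sketch of the "deploy to a small subset first" (canary) idea from the second bullet above. deploy_to(), health_ok(), the server list, and the bake time are hypothetical placeholders standing in for whatever deployment tooling and monitoring the real pipeline uses.

import time

SERVERS = ["web-01", "web-02", "web-03", "web-04", "web-05"]   # placeholder fleet
CANARY_FRACTION = 0.2                                          # deploy to ~20% first

def deploy_to(server, version):
    # Placeholder: push the build to one server (SSH, agent, or provider API in real life).
    print("deploying %s to %s" % (version, server))

def health_ok(server):
    # Placeholder: hit a health endpoint or check error-rate metrics for the server.
    return True

def canary_deploy(version):
    canary_count = max(1, int(len(SERVERS) * CANARY_FRACTION))
    canary, rest = SERVERS[:canary_count], SERVERS[canary_count:]

    for server in canary:
        deploy_to(server, version)
    time.sleep(5)   # bake time: watch metrics before widening the rollout

    if not all(health_ok(s) for s in canary):
        raise RuntimeError("canary unhealthy; aborting rollout")

    for server in rest:
        deploy_to(server, version)

if __name__ == "__main__":
    canary_deploy("2020.06.01")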

Does anyone have a specific strategy/actionable steps that they used to move their company to CD?
I’m also looking for any general advice/things to watch out for but that doesn’t fit the Q/A format very well.

deployment – How can I deploy an existing local site to Pantheon?

I have an existing local site that I’ve developed. In most “normal” cases, I would simply FTP everything to the external host server, upload and install the database, and life would be good.

With Pantheon, things are frustratingly restricted: there appears to be no control over where Apache is configured to serve from, there are restrictions on the file structure, and we cannot SSH into the server.

Pantheon appears to be a popular choice, but there seems to be an assumption that a developer initiates the Drupal instance from Pantheon, with all of its inherent configuration.

Any experience actually creating a new instance of Drupal on Pantheon from an existing local environment?

BTW, their answer is to use drush ard to “archive the site”; unfortunately, ard is no longer supported (as of Drush 9.x).

microservices – Maintaining Objects Across API Deployment Instances

I am working on a web application as a hobby and trying to learn some concepts related to cloud development and distributed applications. I am currently targeting an AWS EC2 instance as a deployment environment, and while I don’t currently have plans to deploy the same instance of my API application to many servers, I would like to design my application so that this is possible in the future.

I have a search operation that I currently have implemented using a Trie. I am thinking that it would be slow to rebuild the trie every time I need to perform the search operation, so I would like to keep it in memory and insert into it as the search domain grows. I know that if I only wanted to have one server, I could just implement the trie structure as a singleton and dependency inject it. If I do this in a potentially distributed application, though, I would be opening myself up to data consistency issues.
My thought was to implement the trie in another service, deploy it separately, and make requests to it (this sounds like microservice concepts, but I have no experience with those). Is this common practice? Is there a better solution for maintaining persistent data structures in this way?
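
For reference, here is a minimal sketch of the kind of in-memory trie being described, along with the two operations a separate search service would expose (insert and prefix lookup). The class names, the word list, and the starts_with/limit parameters are illustrative choices, not an existing library.

class TrieNode:
    __slots__ = ("children", "terminal")

    def __init__(self):
        self.children = {}       # char -> TrieNode
        self.terminal = False    # True if a stored word ends at this node

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.terminal = True

    def starts_with(self, prefix, limit=10):
        # Walk to the node for the prefix, then collect up to `limit` words below it.
        node = self.root
        for ch in prefix:
            node = node.children.get(ch)
            if node is None:
                return []
        results, stack = [], [(node, prefix)]
        while stack and len(results) < limit:
            current, path = stack.pop()
            if current.terminal:
                results.append(path)
            for ch, child in current.children.items():
                stack.append((child, path + ch))
        return results

# A dedicated "search service" would own one Trie instance and expose insert()
# and starts_with() over HTTP, so every API replica queries the same copy
# instead of each rebuilding (and diverging from) its own in-process trie.
trie = Trie()
for word in ["deploy", "deployment", "docker", "django"]:
    trie.insert(word)
print(trie.starts_with("de"))   # e.g. ['deploy', 'deployment'] (order may vary)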

Explain a recurring crash in a PostgreSQL deployment

I'm having problems with unexpected system crashes resulting from a PostgreSQL deployment on a moderately busy production server (a Django 1.8 app deployed on a DigitalOcean droplet with one gunicorn application server and an nginx reverse proxy).

I am an accidental DBA and need an expert's opinion to piece together what's going on. I have no idea at the moment; I will provide various log data below:

The entire app crashes – New Relic shows the following error:

django.db.utils:OperationalError: ERROR: no more connections allowed (max_client_conn)

Full stack trace is:

File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/gevent/baseserver.py", line 26, in _handle_and_close_when_done
File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/gunicorn/workers/ggevent.py", line 155, in handle
File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/gunicorn/workers/base_async.py", line 56, in handle
File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/gunicorn/workers/ggevent.py", line 160, in handle_request
File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/gunicorn/workers/base_async.py", line 114, in handle_request
File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/newrelic-2.56.0.42/newrelic/api/web_transaction.py", line 704, in __iter__
File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/newrelic-2.56.0.42/newrelic/api/web_transaction.py", line 1080, in __call__
File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/django/core/handlers/wsgi.py", line 189, in __call__
File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 108, in get_response
File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/newrelic-2.56.0.42/newrelic/hooks/framework_django.py", line 228, in wrapper
File "/home/ubuntu/app/mproject/middleware/mycustommiddleware.py", line 6, in process_request
File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/django/utils/functional.py", line 225, in inner
File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/django/utils/functional.py", line 376, in _setup
File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/django/contrib/auth/middleware.py", line 22, in 
File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/django/contrib/auth/middleware.py", line 10, in get_user
File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/django/contrib/auth/__init__.py", line 174, in get_user
File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/django/contrib/auth/backends.py", line 93, in get_user
File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/django/db/models/manager.py", line 127, in manager_method
File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/django/db/models/query.py", line 328, in get
File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/django/db/models/query.py", line 144, in __len__
File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/django/db/models/query.py", line 965, in _fetch_all
File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/django/db/models/query.py", line 238, in iterator
File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 838, in execute_sql
File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 164, in cursor
File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 135, in _cursor
File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 130, in ensure_connection
File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/django/db/utils.py", line 98, in __exit__
File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 130, in ensure_connection
File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 119, in connect
File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/django/db/backends/postgresql_psycopg2/base.py", line 176, in get_new_connection
File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/newrelic-2.56.0.42/newrelic/hooks/database_dbapi2.py", line 102, in __call__
File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/psycopg2/__init__.py", line 126, in connect
File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/psycogreen/gevent.py", line 32, in gevent_wait_callback

New Relic also shows a big increase on the Postgres side:

[image: New Relic chart]

The PostgreSQL slow query log shows exorbitant COMMIT durations, as follows:

2020-05-15 01:22:47.474 UTC (10722) ubuntu@myapp LOG:  duration: 280668.902 ms  statement: COMMIT
2020-05-15 01:22:47.474 UTC (11536) ubuntu@myapp LOG:  duration: 49251.277 ms  statement: COMMIT
2020-05-15 01:22:47.474 UTC (10719) ubuntu@myapp LOG:  duration: 281609.394 ms  statement: COMMIT
2020-05-15 01:22:47.474 UTC (10719) ubuntu@myapp LOG:  could not send data to client: Broken pipe
2020-05-15 01:22:47.474 UTC (10719) ubuntu@myapp FATAL:  connection to client lost
2020-05-15 01:22:47.475 UTC (10672) ubuntu@myapp LOG:  duration: 285371.872 ms  statement: COMMIT

The DigitalOcean dashboard shows an increase in load average and disk I/O (but CPU, memory usage, and disk usage stay low).

[image: DigitalOcean dashboard graphs]

The snapshot statistics at the time of the load average spikes are:

[image: snapshot statistics]

syslog contains countless instances of the following error:

May 15 01:22:46 main-app gunicorn(10779): error: (Errno 111) Connection refused
May 15 01:22:46 main-app gunicorn(10779): (2020-05-15 01:22:46 +0000) (10874) (ERROR) Socket error processing request.
May 15 01:22:46 main-app gunicorn(10779): Traceback (most recent call last):
May 15 01:22:46 main-app gunicorn(10779):   File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/gunicorn/workers/base_async.py", line 66, in handle
May 15 01:22:46 main-app gunicorn(10779):     six.reraise(*sys.exc_info())
May 15 01:22:46 main-app gunicorn(10779):   File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/gunicorn/workers/base_async.py", line 56, in handle
May 15 01:22:46 main-app gunicorn(10779):     self.handle_request(listener_name, req, client, addr)
May 15 01:22:46 main-app gunicorn(10779):   File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/gunicorn/workers/ggevent.py", line 160, in handle_request
May 15 01:22:46 main-app gunicorn(10779):     addr)
May 15 01:22:46 main-app gunicorn(10779):   File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/gunicorn/workers/base_async.py", line 129, in handle_request
May 15 01:22:46 main-app gunicorn(10779):     six.reraise(*sys.exc_info())
May 15 01:22:46 main-app gunicorn(10779):   File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/gunicorn/workers/base_async.py", line 114, in handle_request
May 15 01:22:46 main-app gunicorn(10779):     for item in respiter:
May 15 01:22:46 main-app gunicorn(10779):   File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/newrelic-2.56.0.42/newrelic/api/web_transaction.py", line 704, in __iter__
May 15 01:22:46 main-app gunicorn(10779):     for item in self.generator:
May 15 01:22:46 main-app gunicorn(10779):   File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/newrelic-2.56.0.42/newrelic/api/web_transaction.py", line 1080, in __call__
May 15 01:22:46 main-app gunicorn(10779):     self.start_response)
May 15 01:22:46 main-app gunicorn(10779):   File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/django/core/handlers/wsgi.py", line 189, in __call__
May 15 01:22:46 main-app gunicorn(10779):     response = self.get_response(request)
May 15 01:22:46 main-app gunicorn(10779):   File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 218, in get_response
May 15 01:22:46 main-app gunicorn(10779):     response = self.handle_uncaught_exception(request, resolver, sys.exc_info())
May 15 01:22:46 main-app gunicorn(10779):   File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/newrelic-2.56.0.42/newrelic/hooks/framework_django.py", line 448, in wrapper
May 15 01:22:46 main-app gunicorn(10779):     return _wrapped(*args, **kwargs)
May 15 01:22:46 main-app gunicorn(10779):   File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/newrelic-2.56.0.42/newrelic/hooks/framework_django.py", line 441, in _wrapped
May 15 01:22:46 main-app gunicorn(10779):     return wrapped(request, resolver, exc_info)
May 15 01:22:46 main-app gunicorn(10779):   File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 256, in handle_uncaught_exception
May 15 01:22:46 main-app gunicorn(10779):     'request': request
May 15 01:22:46 main-app gunicorn(10779):   File "/usr/lib/python2.7/logging/__init__.py", line 1193, in error
May 15 01:22:46 main-app gunicorn(10779):     self._log(ERROR, msg, args, **kwargs)
May 15 01:22:46 main-app gunicorn(10779):   File "/usr/lib/python2.7/logging/__init__.py", line 1286, in _log
May 15 01:22:46 main-app gunicorn(10779):     self.handle(record)
May 15 01:22:46 main-app gunicorn(10779):   File "/usr/lib/python2.7/logging/__init__.py", line 1296, in handle
May 15 01:22:46 main-app gunicorn(10779):     self.callHandlers(record)
May 15 01:22:46 main-app gunicorn(10779):   File "/usr/lib/python2.7/logging/__init__.py", line 1336, in callHandlers
May 15 01:22:46 main-app gunicorn(10779):     hdlr.handle(record)
May 15 01:22:46 main-app gunicorn(10779):   File "/usr/lib/python2.7/logging/__init__.py", line 759, in handle
May 15 01:22:46 main-app gunicorn(10779):     self.emit(record)
May 15 01:22:46 main-app gunicorn(10779):   File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/django/utils/log.py", line 129, in emit
May 15 01:22:46 main-app gunicorn(10779):     self.send_mail(subject, message, fail_silently=True, html_message=html_message)
May 15 01:22:46 main-app gunicorn(10779):   File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/django/utils/log.py", line 132, in send_mail
May 15 01:22:46 main-app gunicorn(10779):     mail.mail_admins(subject, message, *args, connection=self.connection(), **kwargs)
May 15 01:22:46 main-app gunicorn(10779):   File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/newrelic-2.56.0.42/newrelic/api/function_trace.py", line 110, in literal_wrapper
May 15 01:22:46 main-app gunicorn(10779):     return wrapped(*args, **kwargs)
May 15 01:22:46 main-app gunicorn(10779):   File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/django/core/mail/__init__.py", line 98, in mail_admins
May 15 01:22:46 main-app gunicorn(10779):     mail.send(fail_silently=fail_silently)
May 15 01:22:46 main-app gunicorn(10779):   File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/newrelic-2.56.0.42/newrelic/api/function_trace.py", line 110, in literal_wrapper
May 15 01:22:46 main-app gunicorn(10779):     return wrapped(*args, **kwargs)
May 15 01:22:46 main-app gunicorn(10779):   File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/django/core/mail/message.py", line 303, in send
May 15 01:22:46 main-app gunicorn(10779):     return self.get_connection(fail_silently).send_messages((self))
May 15 01:22:46 main-app gunicorn(10779):   File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/django/core/mail/backends/smtp.py", line 100, in send_messages
May 15 01:22:46 main-app gunicorn(10779):     new_conn_created = self.open()
May 15 01:22:46 main-app gunicorn(10779):   File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/django/core/mail/backends/smtp.py", line 58, in open
May 15 01:22:46 main-app gunicorn(10779):     self.connection = connection_class(self.host, self.port, **connection_params)
May 15 01:22:46 main-app gunicorn(10779):   File "/usr/lib/python2.7/smtplib.py", line 256, in __init__
May 15 01:22:46 main-app gunicorn(10779):     (code, msg) = self.connect(host, port)
May 15 01:22:46 main-app gunicorn(10779):   File "/usr/lib/python2.7/smtplib.py", line 316, in connect
May 15 01:22:46 main-app gunicorn(10779):     self.sock = self._get_socket(host, port, self.timeout)
May 15 01:22:46 main-app gunicorn(10779):   File "/usr/lib/python2.7/smtplib.py", line 291, in _get_socket
May 15 01:22:46 main-app gunicorn(10779):     return socket.create_connection((host, port), timeout)
May 15 01:22:46 main-app gunicorn(10779):   File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/gevent/socket.py", line 96, in create_connection
May 15 01:22:46 main-app gunicorn(10779):     sock.connect(sa)
May 15 01:22:46 main-app gunicorn(10779):   File "/home/ubuntu/.virtualenvs/app/local/lib/python2.7/site-packages/gevent/_socket2.py", line 244, in connect
May 15 01:22:46 main-app gunicorn(10779):     raise error(err, strerror(err))

That is pretty much all I have. I can't piece together what could be going on at all. It now happens almost once a day. Can an expert shed some light on this? Thanks in advance!


Note: in case it's relevant, the output of SELECT version(); is:

PostgreSQL 9.6.16 on x86_64-pc-linux-gnu (Ubuntu 9.6.16-1.pgdg16.04+1), compiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609, 64-bit
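
Side note on gathering more data: the max_client_conn in the OperationalError above is a PgBouncer setting (its client-connection cap), so it may help to sample connection states the next time COMMITs start piling up. The snippet below is an illustrative diagnostic sketch only, assuming psycopg2 is available and the placeholder DSN is replaced with real credentials; pg_stat_activity is present on PostgreSQL 9.6.

import time
import psycopg2

DSN = "dbname=myapp user=ubuntu host=127.0.0.1"   # placeholder connection string

def sample_activity(dsn):
    # Count server-side sessions by state (active, idle, idle in transaction, ...).
    conn = psycopg2.connect(dsn)
    try:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT coalesce(state, 'unknown'), count(*) "
                "FROM pg_stat_activity GROUP BY 1 ORDER BY 2 DESC"
            )
            return cur.fetchall()
    finally:
        conn.close()

if __name__ == "__main__":
    # Log a sample every 30 seconds; compare the totals against PgBouncer's
    # max_client_conn / pool sizes and Postgres' max_connections.
    for _ in range(10):
        print(time.strftime("%H:%M:%S"), sample_activity(DSN))
        time.sleep(30)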