Database – lock an item while it is waiting to be approved

I'm working on adding approval workflows to our system, which manages CRUD operations on our business entities.
So far, I've used an optimistic locking strategy to control race conditions when two users try to edit the same item: the user whose request is saved first wins, and the other user must retry against the new value.
However, this does not work very well once we start requiring approvals, because an approval workflow can take several days and may need sign-off from 4-5 people. If I serialize the approved requests (in the order in which they were approved) and apply them one at a time with optimistic locking, the user who loses the race is unhappy because they jumped through so many hoops in vain.
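The version-column flavor of optimistic locking described above can be sketched as follows. This is a minimal illustration using SQLite; the table and column names are my own assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE item (id INTEGER PRIMARY KEY, payload TEXT, version INTEGER NOT NULL)"
)
conn.execute("INSERT INTO item VALUES (1, 'original', 0)")

def save(conn, item_id, new_payload, expected_version):
    """Persist an edit only if nobody else saved first."""
    cur = conn.execute(
        "UPDATE item SET payload = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_payload, item_id, expected_version),
    )
    return cur.rowcount == 1  # False => lost the race: reload and retry

# First writer wins; the second writer holding the stale version must retry.
print(save(conn, 1, "edit A", 0))  # True
print(save(conn, 1, "edit B", 0))  # False
```

The key point is that the conflict check and the write happen in one atomic `UPDATE`, so no separate lock row is needed.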
One strategy is to pessimistically lock the item before starting the approval workflow. However, that seems problematic for two reasons:

  1. An item may be locked for days, which frustrates other
     users. (I could mitigate that with a timeout on the lock
     and with attribute-level locks to minimize contention.)
  2. Another of our use cases is bulk edit, where I might need
     to check the locks on 100 items at once. This can increase
     latency and hurt performance.
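To make the first concern concrete, a lock with an expiry timestamp is one common mitigation, so a stalled approval cannot hold an item forever. A minimal sketch (SQLite; the table name and the 3-day TTL are assumptions):

```python
import sqlite3
import time

LOCK_TTL = 3 * 24 * 3600  # assumed timeout: release abandoned locks after 3 days

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE item_lock (item_id INTEGER PRIMARY KEY, owner TEXT, expires_at REAL)"
)

def try_lock(conn, item_id, owner, now=None):
    """Acquire the lock if it is free or the previous one has expired."""
    now = time.time() if now is None else now
    with conn:  # one transaction per attempt
        # Sweep away an expired lock before trying to take it ourselves.
        conn.execute(
            "DELETE FROM item_lock WHERE item_id = ? AND expires_at < ?",
            (item_id, now),
        )
        try:
            conn.execute(
                "INSERT INTO item_lock VALUES (?, ?, ?)",
                (item_id, owner, now + LOCK_TTL),
            )
            return True
        except sqlite3.IntegrityError:  # someone else holds a live lock
            return False
```

The primary-key constraint makes the insert the atomic "take the lock" step; the expiry check turns an abandoned workflow into a lock that simply lapses.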

Most of the questions I've seen on this forum are about using two tables (the actual table plus one for pending changes), not about managing locks, e.g.: pending and approval process.
Conversely, there are questions about locking, but without long-running approval processes, e.g.: should I lock rows in my cloud database while a user is editing them?

I think this issue is fairly common, as many systems use approval steps to restrict data manipulation. What are some of the common ways to solve this problem?

Thank you so much!

mysql – can someone help me answer these queries? (new to databases)

a. List the first and last names of all employees earning a salary over 27,000.

b. Provide the first and last name and address of all employees who have a son as a dependent, but no daughter as a dependent.

c. List each employee's full name (first name, middle name, and last name) in a single column named "Employees" and sort employees alphabetically by last name. If the employee has a supervisor, specify the supervisor's full name in a single column named "Supervisor."

d. List the name and total hours worked of all projects in Houston involving more than two employees.

e. List the SSN, first name, middle initial, and last name of all employees who work in the department that pays the highest salary among all employees.

f. List the project name and the department managing the project for all projects on which supervisors work at least 20 hours.

g. List the name of the department whose employees have the most dependents.

h. List the locations of the department whose projects take the fewest hours.

(Screenshots: Department table, Works_on table)
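As a starting point, query (a) has this shape against a toy version of the schema; the table and column names below are my own assumptions, so adjust them to the schema in your screenshots:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Toy stand-in for the EMPLOYEE table (names/columns are assumptions).
conn.executescript("""
    CREATE TABLE employee (fname TEXT, lname TEXT, salary REAL);
    INSERT INTO employee VALUES ('John', 'Smith', 30000), ('Alicia', 'Zelaya', 25000);
""")

# (a) first and last name of employees earning over 27,000
rows = conn.execute(
    "SELECT fname, lname FROM employee WHERE salary > 27000"
).fetchall()
print(rows)  # [('John', 'Smith')]
```

The remaining items build on the same pattern with joins (`DEPENDENT`, `WORKS_ON`), `GROUP BY`/`HAVING`, and subqueries.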

database – Trying to create a data frame from another one created with the merge function

Good day from the Canary Islands. I need a data set containing variables (columns) from two other data sets, specifically the household file of the National Health Survey (MICRODAT_CH) and the adult file of the same survey (MICRODAT_CA).
As I explained, I would like to combine the data sets from both surveys and then keep only certain variables.
To join the two data sets, I create a new one with the merge function, matching the data on the IDENTHOGAR variable.

When I try to create the data frame (DATA1) with the variables I need from MICRODAT_CA_CH, I get the following error:

Error in `[.data.frame`(MICRODAT_CA_CH, , c("CCAA", "IDENTHOGAR", "ESTRATO", :
  undefined columns selected

How do I get the data frame I need? Is there another function, or other arguments to the merge function, that would help?


I actually need many more variables, but whether I put only a few or all of them into DATA1, the same error appears.

Is there any generated WordPress data in the database that I can safely ignore?

I'm exploring ways to minimize the size of the database dumps used to automatically back up the content of multiple WordPress instances.

I wonder, first of all, whether there is any generated data stored in the database that I can safely discard from backups because WordPress regenerates it (or because it can be regenerated by an action performed manually after restoring a backup). For example, with phpBB3 it is usually safe to discard search index terms and the like, because you can easily reindex after a disaster recovery.

Oh, I should probably add that I have read this and could not derive from it any method to minimize the dump size (apart from the --compact and --skip-comments command-line switches of mysqldump).
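On the mysqldump side, individual tables can be excluded with the repeatable `--ignore-table=db.table` option. A small Python helper to assemble such a command; the candidate table names here are assumptions (plugin log/cache tables) and should be verified per instance before skipping them:

```python
def dump_command(database, skip_tables):
    """Build a mysqldump invocation that skips the given tables."""
    cmd = ["mysqldump", "--compact", "--skip-comments", database]
    # --ignore-table must be repeated once per table, qualified as db.table
    cmd += ["--ignore-table=%s.%s" % (database, t) for t in skip_tables]
    return cmd

# Hypothetical candidates: plugin log tables that are regenerated anyway.
print(dump_command("wordpress", ["wp_actionscheduler_logs"]))
```

Which tables are actually safe to drop depends on the plugins installed, so auditing each instance's table list first seems prudent.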

FullCalendar does not display my database events

I am building a calendar with FullCalendar and cannot display the events I have stored in the database. The connection to the database is fine, because I can see all the events when I enter the following URL in the browser (http://localhost/miproyecto/templates/eventos.php), but when I pass this address to events, nothing is shown.

I leave the code here:


$('#calendar').fullCalendar({
    header: {
        left: 'prev,next today nueva',
        center: 'title',
        right: 'month,agendaWeek,agendaDay'
    },
    customButtons: {
        nueva: {
            text: "+",
            click: function() {
                // open the form for a new event
            }
        }
    },
    selectable: true,
    dayClick: function(date, jsEvent, view) {
        // handle clicks on a day cell
    },
    eventClick: function(calEvent, jsEvent, view) {
        FechaHoraIni = calEvent.start._i.split(" ");
        FechaHoraFin = calEvent.end._i.split(" ");
    },
    events: 'http://localhost/miproyecto/templates/eventos.php'
});

This is the JSON that the endpoint returns from the data in my database:

({"id":"3","title":"Clase 1","profesor":"Carla","alumno":"Andrea","start":"2019-10-16 09:00:00","end":"2019-10-16 10:00:00","permiso":"B","color":"#ed8128"},{"id":"4","title":"Clase 2","profesor":"Raquel","alumno":"Paula","start":"2019-10-17 08:00:00","end":"2019-10-17 09:00:00","permiso":"A1","color":"#ed8128"},{"id":"6","title":"clase4","profesor":"Raquel","alumno":"Rosa","start":"2019-10-18 12:30:00","end":"2019-10-18 13:30:00","permiso":"B","color":"#ed8128"})
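One thing worth checking: if the endpoint really returns the parenthesized form shown above, that is not valid JSON, and FullCalendar's events feed expects a JSON array ([...]). A quick illustration in Python, using abridged versions of the two shapes:

```python
import json

returned = '({"id":"3"},{"id":"4"})'  # shape of the response above (abridged)
expected = '[{"id":"3"},{"id":"4"}]'  # what an events feed should look like

try:
    json.loads(returned)
except json.JSONDecodeError:
    print("not valid JSON")  # the parenthesized form fails to parse

print([event["id"] for event in json.loads(expected)])  # ['3', '4']
```

If that is the case, changing the PHP side to emit a `[ ... ]` array (e.g. `json_encode` of a plain list) would be the first thing to try.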

AlwaysOn database stuck in Not Synchronizing / In Recovery after the cluster lost quorum

I got this error in the cluster events: 'Cluster node "MMPDB1NEW" was removed from the active failover cluster membership. The Cluster service on this node may have stopped, or the node may have lost communication with the other active nodes in the failover cluster.'

Now my cluster node is online and the quorum is healthy, but my database on node 2 still shows Not Synchronizing / In Recovery. In the AlwaysOn health events I can see: 'A connection for availability group "MTCBAG" from availability replica "node1" with ID … to "node2" with ID … has been successfully established.'

Do I have to wait for synchronization to resume, or does something need to be done to restore my secondary node?

How to delete old database backups on SQL Server 2017 for Linux after backups taken with the Ola Hallengren scripts?

I have a problem with deleting older backups that I created with the Ola Hallengren scripts.

USE Maintenance
EXECUTE dbo.DatabaseBackup
 @Databases = 'USER_DATABASES',
 @Directory = '/mssql/backup/',
 @DirectoryStructure = '{InstanceName}{DirectorySeparator}{DatabaseName}',
 @BackupType = 'FULL',
 @Compress = 'Y',
 @CleanupTime = 3

I received the following error message:

The value for the @CleanupTime parameter is not supported. Cleanup is not supported on Linux.

So far, so good, but what is the best way to delete the old backups without breaking the backup chain?

My first thought was a script that deletes them:

find /mssql/backup -name "*.bak" -type f -mtime +3 -exec rm -f {} \;
find /mssql/backup -name "*.trn" -type f -mtime +4 -exec rm -f {} \;

However, SQL Server's internal bookkeeping does not notice this cleanup after the script runs: the backup history in msdb is not updated, and during a restore the instance asks for backups that no longer exist.

Does anyone have a solution for the problem?
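One pattern (a sketch, not part of Ola Hallengren's own tooling) is to pair the file deletion with a matching trim of the msdb backup history via the built-in msdb.dbo.sp_delete_backuphistory procedure, so that the instance's bookkeeping agrees with what is on disk. The paths and retention below are assumptions:

```python
from datetime import datetime, timedelta
from pathlib import Path

def purge_old_backups(root, days, suffixes=(".bak", ".trn")):
    """Delete backup files under `root` whose mtime is older than `days` days."""
    cutoff = datetime.now() - timedelta(days=days)
    removed = []
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in suffixes:
            if datetime.fromtimestamp(path.stat().st_mtime) < cutoff:
                path.unlink()
                removed.append(str(path))
    return removed

def history_cleanup_sql(days):
    """T-SQL to trim the msdb backup history to the same retention window."""
    return (
        "DECLARE @oldest datetime = DATEADD(DAY, -{0}, GETDATE());\n"
        "EXECUTE msdb.dbo.sp_delete_backuphistory @oldest_date = @oldest;".format(days)
    )
```

For example, run `purge_old_backups('/mssql/backup', 3)` from cron and then execute the generated statement (e.g. via sqlcmd) against the instance, keeping the two retention windows in sync.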

python – Problems opening the database with QSqlDatabase in PyCharm

I'm writing a small Python program to register products, and the database just does not open.

I tried opening it with the sqlite3 library and, following a suggestion, with QSqlDatabase, but nothing works. What could this error be? How do I fix it and connect?

from PyQt5.QtSql import QSqlDatabase

db = QSqlDatabase.addDatabase("QSQLITE")
db.setDatabaseName("produtos.db")  # the program's SQLite file
if not db.open():
    print(db.lastError().text(), db.drivers())


C:\Users\Daniel\AppData\Local\Programs\Python\Python37\pythonw.exe "C:/Users/Daniel/Desktop/Sistema NaruHodo/"
Driver not loaded Driver not loaded ('QSQLITE', 'QODBC', 'QODBC3', 'QPSQL', 'QPSQL7')

Process finished with exit code 0
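"Driver not loaded" usually means Qt cannot locate its SQLite plugin rather than anything being wrong with the database file. While sorting that out, the standard-library sqlite3 module is enough to register products; a minimal sketch, with a table name of my own choosing:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # use "produtos.db" for a file on disk
conn.execute(
    "CREATE TABLE IF NOT EXISTS produto (id INTEGER PRIMARY KEY, nome TEXT, preco REAL)"
)
conn.execute("INSERT INTO produto (nome, preco) VALUES (?, ?)", ("Caneta", 2.5))
conn.commit()
print(conn.execute("SELECT nome, preco FROM produto").fetchall())  # [('Caneta', 2.5)]
```

This keeps the data layer working independently of Qt, and the same file can later be opened through QSqlDatabase once the driver issue is resolved.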

Database – Limit the creation of additional administrator accounts

I want to limit the number of administrators of a WP site to 2: if the current WP administrator wants to create a new administrator, an old account must be deleted first. (This could also be enforced at the database level.)

I need this feature to prevent additional administrator accounts from being created, together with the attacks related to such accounts.

Is there a plugin that supports this kind of restriction, or do I need custom code? Or would a simple database procedure have to be added?

Server – The cheapest way to run a MySQL database on Microsoft Azure

I am currently evaluating features of Microsoft Azure and would like to test their Azure Database for MySQL service in a small test project. I just need a very simple database with minimal power, no backups, and so on. However, prices seem to be north of $200/month.

At the moment I find it difficult to work out the prices for a MySQL or MariaDB database; they also seem to depend on how you try to buy it.

Do I really need a full database server (with dedicated processor cores) just for one database? Or is there a cheaper option, such as a serverless MySQL database, or running one out of a container?

Note: The wording of this question is based on feedback from Are Azure Pricing Questions Not Relevant?