Hive SQL – IF/ELSE statement to create different tables

Here is the logic I want:

if hour(CURRENT_TIMESTAMP) % 2 = 0
    THEN create table table_1 AS
    **same select statement**

else if hour(CURRENT_TIMESTAMP) % 2 = 1
    THEN create table table_2 AS
    **same select statement**

Depending on whether the hour is even or odd, the name of the table to build differs. The select statement is exactly the same.

How do you do that?
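Worth knowing up front: HiveQL has no procedural IF/ELSE around DDL statements. One common workaround (a sketch, not the only option; the file name build.hql and the wrapper are assumptions) is to compute the table name outside Hive and pass it in as a substitution variable:

    -- build.hql: the table name arrives via --hivevar
    CREATE TABLE ${hivevar:tbl} AS
    SELECT ...;  -- the same select statement in both cases

    -- hypothetical shell wrapper picking the name by hour parity:
    --   suffix=$(( (10#$(date +%H) % 2 == 0) ? 1 : 2 ))
    --   hive --hivevar tbl=table_${suffix} -f build.hql

Alternatively, HPL/SQL (bundled with Hive 2.x and later) does support IF/THEN/ELSE blocks, at the cost of a separate interpreter.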

php – how to add the values of the two tables using the SUM function

I have the following tables:

historicoEntrada {
    id
    data_op
    description
    value
    entry_user_id
}

historicoSaida {
    id
    data_s
    description1
    value1
    id_usuario_saida
}

User {
    id
    Surname
    cpf
    password
}

I want to sum the VALUE fields of the two tables. What should the SELECT look like?
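For the grand total across both tables, one minimal sketch (assuming the column names above; the alias names are made up) is a pair of scalar subqueries, with COALESCE guarding against an empty table:

    SELECT
        (SELECT COALESCE(SUM(value),  0) FROM historicoEntrada) AS total_entrada,
        (SELECT COALESCE(SUM(value1), 0) FROM historicoSaida)   AS total_saida,
        (SELECT COALESCE(SUM(value),  0) FROM historicoEntrada)
      + (SELECT COALESCE(SUM(value1), 0) FROM historicoSaida)   AS total_geral;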

An SQL script that creates MER-compliant tables

System users require registration, and it is necessary to save their personal information so they can log in to the environment.

Furthermore, as part of the main functionality of the system, any user can report the locations of possible dengue outbreaks, send photos, locations, etc. We therefore need a database to store all this data safely and efficiently. Your task is to create an initial overview of the types of information that will be stored in the database, and to create an entity-relationship model (MER) that describes how this information is related within the database. Include entities, relationships (with their respective cardinalities), and attributes in the MER.

In a second step, after creating the MER, create a simple SQL script that creates all the necessary tables according to the MER.
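As a starting point, here is a minimal sketch of such a script; every table and column name below is an assumption, since designing the actual MER is the exercise:

    -- users register and log in (1:N to their reports)
    CREATE TABLE usuario (
        id     INT PRIMARY KEY,
        nome   VARCHAR(100) NOT NULL,
        cpf    VARCHAR(11)  NOT NULL UNIQUE,
        email  VARCHAR(100) NOT NULL UNIQUE,
        senha  VARCHAR(255) NOT NULL
    );

    -- a reported possible dengue outbreak location
    CREATE TABLE ocorrencia (
        id          INT PRIMARY KEY,
        usuario_id  INT NOT NULL REFERENCES usuario(id),
        descricao   VARCHAR(500),
        latitude    DECIMAL(9,6),
        longitude   DECIMAL(9,6),
        criada_em   TIMESTAMP
    );

    -- one occurrence can have many photos (1:N)
    CREATE TABLE foto (
        id             INT PRIMARY KEY,
        ocorrencia_id  INT NOT NULL REFERENCES ocorrencia(id),
        url            VARCHAR(255) NOT NULL
    );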

Hash tables – need help adding elements to a hash table with linear probing

Here's an example problem that I find difficult to figure out. The red text is the answer.

(image: the example problem, with the answer shown in red)

I understand how to add the values before the hash table is resized … that's common sense. (Insert 0 at index 3, 5 at index 1, etc.)

When the table is resized, each element gets a new position. HOW does 1 end up at new index 0? HOW does 5 end up at new index 7? How was each element of the array assigned its new index when the table was resized?

Any help would be appreciated.

Thank you very much
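For what it's worth, the usual rehashing rule is: after a resize, every key is re-inserted from scratch using the new table size, and linear probing only applies again when the new slot is already taken. A worked example with made-up keys, assuming the common textbook function h(k) = k mod table_size (the actual function used in this problem is in the image):

    old table (size 7):   5 mod 7   = 5  -> 5 sits at index 5
                          18 mod 7  = 4  -> 18 sits at index 4
    new table (size 13):  5 mod 13  = 5  -> 5 stays at index 5
                          18 mod 13 = 5  -> slot 5 taken, probe to 6 -> 18 lands at index 6

So the new positions don't follow from the old indices at all; each one is recomputed from the key itself.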

Data tables – advanced search with limited time range

I am designing a data-table page for a CRM system with a complicated advanced search (up to 10 fields).

The table shows information from a very large amount of data. To avoid losing data, users can only select data from a limited period of time (30 days).

The scenario: we also allow the user to search for data by a unique ID (exact match). In that case, all other search settings, including the time limit, are not required. Our current solution is to disable the other search settings in the backend when the user types something into the ID field and clicks Search. But I'm pretty sure this solution violates several usability rules.

How can I handle this better?

JavaScript – dropdown filter in DataTables on Rails

I created an Example model (name, status (boolean)) on Rails, and built a DataTable with a dropdown filter on the 2nd column. When I click one of the dropdown values, an error is displayed (DataTables warning: table id=examples-datatable – Ajax error). The error is raised on this line: format.json { render json: ExampleDatatable.new(params) } (NotImplementedError in ExamplesController#index).
index.html.erb:

<table id="examples-datatable" data-source="<%= examples_path(format: :json) %>">
  <thead><tr><th>Name</th><th>Status</th></tr></thead>
  <tfoot><tr><th></th><th></th></tr></tfoot>
</table>

examples_controller.rb:

class ExamplesController < ApplicationController
  def index
    respond_to do |format|
      format.html
      format.json { render json: ExampleDatatable.new(params) } # error raised here: NotImplementedError in ExamplesController#index
    end
  end
end

example_datatable.rb:

class ExampleDatatable < AjaxDatatablesRails::ActiveRecord
  def view_columns
    @view_columns ||= {
      name:   { source: "Example.name" },
      status: { source: "Example.status" }
    }
  end

  def data
    records.map do |record|
      {
        name:   record.name,
        status: record.status
      }
    end
  end

  def get_raw_records
    Example.all
  end
end

javascript:

jQuery(document).ready(function() {
  $('#examples-datatable').dataTable({
    "processing": true,
    "serverSide": true,
    "ajax": {
      "url": $('#examples-datatable').data('source'),
      "type": "GET"
    },
    "pagingType": "full_numbers",
    // columns must be an array ([...]), not a parenthesized list
    "columns": [
      { "data": "name" },
      { "data": "status" }
    ],
    "initComplete": function () {
      // build a dropdown filter in the footer of the status column (index 1)
      this.api().columns([1]).every(function () {
        var column = this;
        var select = $('<select><option value=""></option></select>')
          .appendTo($(column.footer()).empty())
          .on('change', function () {
            var val = $.fn.dataTable.util.escapeRegex($(this).val());
            column.search(val ? '^' + val + '$' : '', true, false).draw();
          });

        column.data().unique().sort().each(function (d, j) {
          select.append('<option value="' + d + '">' + d + '</option>');
        });
      });
    }
  });
});

Please help me fix this.

magento2 – Have flat catalog tables been considered bad practice since M2.1.x and higher?

Magento no longer recommends using a flat catalog as a best practice. Continued use of this feature is known to cause performance degradation and other indexing problems. A detailed description and solution can be found in the help article.

Affected versions are:

  • Magento Commerce Cloud 2.1.x and higher
  • Magento Commerce (on-premises) 2.1.x and higher
  • Magento Open Source 2.1.x and higher

Source: https://docs.magento.com/m2/ce/user_guide/catalog/catalog-flat.html

The above link points to another help page: https://support.magento.com/hc/en-us/articles/360034631192

Problems flat indexers can cause:

Severe SQL utilization and site performance problems; crons that run and hang for a long time.

Why has this formerly proven method been reversed for large catalogs? Are the problems with high SQL utilization and site performance new, or is this a risk that was always there?
Have there been any changes on this topic in the code base since 2.1.x?

sql – logic error when trying to join three tables on the same key

I am joining three tables together and then trying to determine whether one table contains values that are not in any of the others.

For example, there are tables A, B, and C with only one column each. The value 10 is in the column in TableA and TableC, but not in TableB. The logic of my join is to first join TableA and TableB, and then join the result to TableC. The join does not work in this particular scenario: the join to TableC can match on the column from TableA or from TableB, but I'm not sure how to check both.

In the example above, I have the following:

+------------------+------------------+------------------+
| columnFromTableA | columnFromTableB | columnFromTableC |
+------------------+------------------+------------------+
| 10               | NULL             | NULL             |
| NULL             | NULL             | 10               |
+------------------+------------------+------------------+

Since I join TableA and TableB first, there is no matching value in TableB. When I then try to join TableC on columnFromTableB, there is no match, even though there is a match in columnFromTableA. How do I fix this logic error?
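One common fix (a sketch, assuming single-column tables named TableA/TableB/TableC with a column col) is to use full outer joins and match TableC against whichever earlier column is non-NULL, via COALESCE:

    SELECT a.col AS columnFromTableA,
           b.col AS columnFromTableB,
           c.col AS columnFromTableC
    FROM TableA a
    FULL OUTER JOIN TableB b ON a.col = b.col
    FULL OUTER JOIN TableC c ON c.col = COALESCE(a.col, b.col);

With the data above this yields one row, 10 | NULL | 10, instead of two half-matched rows, and a value missing from the other tables still shows up with NULLs beside it.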

What are the approaches to updating materialized views in Oracle when underlying tables are updated frequently?

I am a web developer and maintain a web app that tracks orders, customers, products, etc. that my customer uses internally. I'm using Oracle 12c, which is hosted on AWS RDS. My client has just switched some other systems so we're at a point where the data structures have changed and I'm using a new schema in Oracle to store new data in the new structures.

So that the web app does not have to be reworked for the new data structures, it was decided to implement materialized views in Oracle that combine the new data from the new schema (reshaped into the "legacy structure") with the legacy data.

Now I have to take care of refreshing these materialized views so that the web app always sees the latest data. Ideally, the relevant materialized views would be refreshed whenever a new record arrives in the new schema; however, new data may arrive every few seconds during working hours. A compromise is fine – if the materialized views are out of date by a few minutes (maybe 5 or, less ideally, 10), that can be acceptable.

My question is: what approach should I take to refresh these materialized views? I don't want to overload Oracle with constant refreshes, and the web app should still give users a good experience reading/writing data from/to Oracle. I am far from an Oracle/DB expert, so I'm not sure what options are available. I could just have a cron job that runs every 5 minutes or so and refreshes the stale materialized views one by one, but I wonder if that approach is a bit naive.

In reality, I'm dealing with 14 materialized views (for now), and in my tests some of them take up to 2.5 minutes to complete a full refresh.
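One way to avoid external cron plumbing is to let the database schedule the refresh itself. A hedged sketch using DBMS_SCHEDULER plus DBMS_MVIEW (the job name and the materialized-view list are made-up placeholders); method => '?' requests a fast refresh where materialized view logs allow it and falls back to a complete refresh otherwise:

    BEGIN
      DBMS_SCHEDULER.create_job(
        job_name        => 'REFRESH_LEGACY_MVIEWS',   -- hypothetical name
        job_type        => 'PLSQL_BLOCK',
        job_action      => q'[BEGIN
                                -- hypothetical mview list
                                DBMS_MVIEW.refresh(list   => 'MV_ORDERS,MV_CUSTOMERS',
                                                   method => '?');
                              END;]',
        repeat_interval => 'FREQ=MINUTELY;INTERVAL=5',
        enabled         => TRUE);
    END;
    /

If the views qualify for fast refresh (materialized view logs on the base tables), a 5-minute interval like this stays cheap; REFRESH FAST ON COMMIT is the other direction to explore if even a few minutes of staleness becomes a problem.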