google sheets – How to embed table from gSheets to gDoc and preserve formulas & cell protection

I created a Google Doc, posted to the Canvas platform, for individual students to record data from an experiment (see screenshot below).

In Excel, I can easily (1) insert an AVERAGE formula for each row of data and (2) protect all cells except those where students enter data.

I need to embed that same data-entry table from Google Sheets in a Google Doc. However, when I copy the selected table cells from the Google Sheet, neither the AVERAGE formulas nor the cell protection carries over into the Doc.

Neither “paste table with link” nor “paste unlinked” produces the desired result of (a) preserving the AVERAGE formula or (b) preserving cell protection. I can do without cell protection, but I must have the AVERAGE formula working.

Thank you!

(Data entry table in the Google Sheet to embed in the Google Doc)

mysql – insert into table with default values from a select statement in php

This is the SQL code I have written:

$course_name = "Data Mining and Ware Housing";

$table_name = "new_table";

$t_id = 't1234';

$time = time();

$c_id = 'cs402';

$sql1 = "INSERT INTO " . $table_name . " (st_id, t_id, date, status)
         SELECT s_id, '" . $t_id . "', '" . $time . "', '0'
         FROM student_courses
         WHERE course1 = '" . $c_id . "' OR course2 = '" . $c_id . "'
            OR course3 = '" . $c_id . "' OR course4 = '" . $c_id . "';";

But the details are not getting entered.

The target table structure is new_table(id, st_id, t_id, date, status).
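The INSERT ... SELECT shape itself is valid SQL, so the first things to check are the return value of mysqli_query() / mysqli_error(), and whether the date column accepts the value of time() (a Unix integer, not a DATE). Here is a self-contained sketch of the same statement pattern, using SQLite and bound placeholders instead of string concatenation (in PHP the equivalent is a mysqli or PDO prepared statement; the sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE new_table (
    id     INTEGER PRIMARY KEY AUTOINCREMENT,
    st_id  TEXT, t_id TEXT, date INTEGER, status TEXT
);
CREATE TABLE student_courses (
    s_id TEXT, course1 TEXT, course2 TEXT, course3 TEXT, course4 TEXT
);
""")
# Invented sample data: s1 and s2 take cs402, s3 does not.
conn.executemany(
    "INSERT INTO student_courses VALUES (?, ?, ?, ?, ?)",
    [("s1", "cs402", None, None, None),
     ("s2", None, "cs402", None, None),
     ("s3", "cs101", None, None, None)],
)

t_id, c_id, now = "t1234", "cs402", 1700000000
# INSERT ... SELECT: constants for t_id/date/status, s_id pulled per matching row.
# The ? placeholders replace the string concatenation from the question.
conn.execute("""
    INSERT INTO new_table (st_id, t_id, date, status)
    SELECT s_id, ?, ?, '0'
    FROM student_courses
    WHERE course1 = ? OR course2 = ? OR course3 = ? OR course4 = ?
""", (t_id, now, c_id, c_id, c_id, c_id))
```

Bound placeholders also close the SQL-injection hole that building the query with `.` concatenation leaves open.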

How can I find which table belongs to which two users in a MySQL database that stores user messages?

I’m building a chat application in which I want to store user messages in a MySQL database. I have come up with the solution of creating a separate database for messages and creating a table for each conversation between users. One example table would look like this:

| Field           | Type            | Null | Key | Default | Extra          |
|-----------------|-----------------|------|-----|---------|----------------|
| message_count   | bigint unsigned | NO   | PRI | NULL    | auto_increment |
| message_content | varchar(2000)   | YES  |     | NULL    |                |
| sent_by         | varchar(32)     | YES  |     | NULL    |                |
| sent_at         | datetime        | YES  |     | NULL    |                |

But how would I be able to figure out which table I should load for a given pair of users? I could store the usernames of both participants of the chat room in a column named “participants”, separated by whitespace, and use a query like SELECT ... FROM chats WHERE participants LIKE username, but that does not sound like a healthy solution at all.
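One common alternative to a table per conversation is a single messages table keyed by a conversation id, plus a participants junction table; finding the conversation for two users then becomes a query rather than a table lookup. A minimal sketch using SQLite so it is self-contained (all table and column names here are my own, not from the question); the same schema and query work in MySQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE conversations (
    id INTEGER PRIMARY KEY AUTOINCREMENT
);
CREATE TABLE conversation_participants (
    conversation_id INTEGER NOT NULL REFERENCES conversations(id),
    username        TEXT    NOT NULL,
    PRIMARY KEY (conversation_id, username)
);
CREATE TABLE messages (
    id              INTEGER PRIMARY KEY AUTOINCREMENT,
    conversation_id INTEGER NOT NULL REFERENCES conversations(id),
    sent_by         TEXT    NOT NULL,
    sent_at         TEXT    NOT NULL DEFAULT CURRENT_TIMESTAMP,
    message_content TEXT
);
""")

def find_conversation(conn, user_a, user_b):
    # A two-person conversation is one whose participant set is exactly {user_a, user_b}.
    row = conn.execute("""
        SELECT conversation_id
        FROM conversation_participants
        GROUP BY conversation_id
        HAVING SUM(username = ?) = 1
           AND SUM(username = ?) = 1
           AND COUNT(*) = 2
    """, (user_a, user_b)).fetchone()
    return row[0] if row else None

# Seed one conversation between alice and bob.
conn.execute("INSERT INTO conversations DEFAULT VALUES")
cid = conn.execute("SELECT last_insert_rowid()").fetchone()[0]
conn.executemany(
    "INSERT INTO conversation_participants (conversation_id, username) VALUES (?, ?)",
    [(cid, "alice"), (cid, "bob")],
)
```

This also scales naturally to group chats later (COUNT(*) = 2 just becomes a different participant count).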

How to make a table with horizontal scrolling with tailwindcss 2.1?

I want to make a table with horizontal scrolling for its content. I tried using the whitespace-nowrap class for table cells that have long content and overflow-x-auto for the whole table, like:

  <div class="editor_listing_wrapper_bix_width">
    <table class="overflow-x-auto p-1 m-1 d2">
      <thead class="bg-gray-700 border-b-2 border-t-2 border-gray-300">
        <tr>
          <th class="w-1/12 py-2">Id</th>
          <th class="w-4/12 py-2">Name</th>
          <th class="w-4/12 py-2">Description</th>
          <th class="w-1/12 py-2"></th>
        </tr>
      </thead>
      <tbody>
        <tr>
          <td class="whitespace-nowrap">
            <small class="pl-2 pt-1">( Used in 2 ad(s) )</small>
          </td>
          <td class="whitespace-nowrap p-1">Laptops description Lorem ipsum dolor sit amet,
            adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna</td>
        </tr>
      </tbody>
    </table>
  </div>

But it looks like the horizontal scrolling applies to the whole area, not just my table.

Please take a look at the pen:

I use tailwindcss 2.1.0.
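A common fix (a sketch of mine, not taken from the question’s pen): move `overflow-x-auto` off the `<table>` and onto a wrapping `<div>`, since a table element does not act as a scroll container in practice; the div then scrolls while the table keeps its natural width (`min-w-full` is a standard Tailwind class that lets it still fill the wrapper when narrow):

```html
<!-- The wrapper div is the scroll container; the table may grow wider than it. -->
<div class="overflow-x-auto">
  <table class="min-w-full p-1 m-1">
    <thead class="bg-gray-700 border-b-2 border-t-2 border-gray-300">
      <tr>
        <th class="w-4/12 py-2">Name</th>
        <th class="w-4/12 py-2">Description</th>
      </tr>
    </thead>
    <tbody>
      <tr>
        <td class="whitespace-nowrap p-1">Laptops</td>
        <td class="whitespace-nowrap p-1">Laptops description Lorem ipsum dolor sit amet</td>
      </tr>
    </tbody>
  </table>
</div>
```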



Does SQL Server upgrade to table locks at total locks or session locks?

I have a table that has both mass sequential inserts at the end of the clustered index and random (very distributed) reads + updates. Naturally, the mass inserts should not block the random access. RCSI is used, so the read-only queries shouldn’t affect the lock count (?) relative to the sequential inserts.

My concern is that, even when limiting the maximum number of locks taken during the insert (e.g. inserting in batches), it is possible for one (or more) of the OLTP updates to push past this limit. If the lock-count heuristic is per session, then it is less of a potential issue.

Given the answer to the question in the title, then, what is the “best” way to prevent table lock escalation here?

My current approach/thought is to pick a batch row count (e.g. an arbitrary 1-4k) during the mass inserts to allow “some slack”, although this feels imprecise overall. While batches are essential anyway to deal with replication and such, it would be nice to specify a batch size of 5k rows and move on. (To be fair, brief table locks aren’t really the issue: the intent of the question is to find the edge at which table lock escalation doesn’t happen.)

There has been DBA pushback on both 1) disabling row locks (to force page locks and thus reduce lock counts) and 2) disabling table lock escalation (with forced page locks to minimize the worst case). Are there any other relevant database properties to consider with respect to lock escalation? (Increasing the lock limit to, say, 10k would then allow a much larger “slack” batch size.)
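For the title question: as documented, escalation is evaluated per statement, not per session or server-wide total; SQL Server attempts to escalate once a single statement holds about 5,000 locks on one table (or partition), with the server-wide lock-memory threshold as a separate trigger. Given that, the usual knobs look like this (dbo.BigTable and the staging table are hypothetical names, and this is a sketch to adapt, not a recommendation):

```sql
-- 1) Disable escalation for this table entirely (locks stay at row/page level):
ALTER TABLE dbo.BigTable SET (LOCK_ESCALATION = DISABLE);

-- 2) On a partitioned table, escalate to the partition (HoBT) instead of the table:
ALTER TABLE dbo.BigTable SET (LOCK_ESCALATION = AUTO);

-- 3) Keep each insert statement under the ~5,000-lock per-statement threshold,
--    e.g. by inserting in chunks:
INSERT INTO dbo.BigTable (Id, Payload)
SELECT TOP (4000) Id, Payload
FROM #Staging
ORDER BY Id;
```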

mysql – Is there a faster way to load (not import, load) a preexisting table into the form editor of Workbench?

I am slowly adding data to one of the tables in a MySQL database via Workbench. In order to enter data into the table I click on the icon at the end of the table name in the list of tables in the Schema. The data loads eventually, but it’s taking longer and longer as the size of the table increases. At present it takes 3 minutes to load and I have entered less than 10% of the data that will eventually comprise the table.

Is there a faster way to load that data into the editor or am I stuck with waiting each time I want to add some data? All of the help I have found on the web relates to “importing” data into a database. I simply wonder if there isn’t a faster way to load a table.
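One thing worth trying (an assumption about the workflow, not a documented guarantee): the table icon pulls the whole table, but Workbench’s result grid is also editable when you run your own SELECT against a single table that includes the primary key, so you can load only the slice you are about to edit:

```sql
-- my_table and id are placeholders for your table and its primary key.
-- Load only the newest 200 rows into the (editable) result grid:
SELECT * FROM my_table ORDER BY id DESC LIMIT 200;
```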

Thanks for any help.

John G

sql server – How to create a table from two alias tables with SQL

I want to create a table that shows name, position in week 1, and position in week 2. I created the following queries, which give the correct result for each week separately.
All the data is in one table, results.
Server version: 10.4.14-MariaDB

    SELECT name, team, points, week,
           @curRank := @curRank + 1 AS position
    FROM results, (SELECT @curRank := 0) r
    WHERE week = 1
    ORDER BY points DESC


    SELECT name, team, points, week,
           @prevRank := @prevRank + 1 AS position2
    FROM results, (SELECT @prevRank := 0) r2
    WHERE week = 2

But when I combine them with UNION I get an incorrect result.

    (SELECT name, team, points, week,
            @curRank := @curRank + 1 AS position
     FROM results, (SELECT @curRank := 0) r
     WHERE week = 1)
    UNION
    (SELECT name, team, points, week,
            @secondRank := @secondRank + 1 AS position2
     FROM results, (SELECT @secondRank := 0) r2
     WHERE week = 2)
    ORDER BY points

So how would I combine the two select statements to get the table with just name, position week1, position week2? I do not need points in the table, but points are used to calculate the position.
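User variables evaluated inside a UNION have no guaranteed evaluation order; since MariaDB 10.2 you can instead rank with ROW_NUMBER() and pivot the weeks into columns with conditional aggregation. A sketch using SQLite, which accepts the same window-function syntax (the sample rows are invented; column names follow the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (name TEXT, team TEXT, points INTEGER, week INTEGER)")
conn.executemany(
    "INSERT INTO results VALUES (?, ?, ?, ?)",
    [("ann", "A", 30, 1), ("bob", "B", 20, 1),
     ("ann", "A", 10, 2), ("bob", "B", 25, 2)],
)

# Rank within each week by points, then pivot the two weeks into columns per name.
rows = conn.execute("""
    SELECT name,
           MAX(CASE WHEN week = 1 THEN position END) AS week1,
           MAX(CASE WHEN week = 2 THEN position END) AS week2
    FROM (
        SELECT name, week,
               ROW_NUMBER() OVER (PARTITION BY week ORDER BY points DESC) AS position
        FROM results
    ) ranked
    GROUP BY name
    ORDER BY name
""").fetchall()
```

Points never appear in the output, but they drive the ORDER BY inside the window, which is exactly what the question asks for.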

How to represent a list of entities within a table of the same entity in PostgreSQL?

There’s a couple of ways you can go about this but the most relational and normalized way would be to create a second table called UserFriendList with the columns UserId and FriendUserId which would store one row per Friend for each User. This table would be one-to-many from User.Id to UserFriendList.UserId but would also be able to help bridge the join back to the User table on UserFriendList.FriendUserId to User.Id to get all the User attributes of the friends. This kind of table is known as a bridge / junction / linking table.

Alternatively you can store the FriendList column directly on the User table as either a comma-delimited list or JSON, but these are both denormalized solutions: they will be harder to maintain as data changes, can lead to data redundancy, and will inflate the size of your User table, which could make querying it less efficient.
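A minimal DDL sketch of that bridge table (column names follow the answer; the exact types are assumptions, and "User" is quoted because USER is a reserved word in PostgreSQL):

```sql
CREATE TABLE UserFriendList (
    UserId       bigint NOT NULL REFERENCES "User"(Id),
    FriendUserId bigint NOT NULL REFERENCES "User"(Id),
    PRIMARY KEY (UserId, FriendUserId)
);

-- All friends of user 42, with their full User attributes:
SELECT u.*
FROM UserFriendList f
JOIN "User" u ON u.Id = f.FriendUserId
WHERE f.UserId = 42;
```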

list manipulation – Generate a statistical table to complete and then review. How can I generate this exact table?

Good morning everyone,

For school purposes, I want to generate a data-table model with between 5 and 15 intervals, following Sturges' rule. Random entries between a minimum and maximum in the range 0 to 999 are generated and distributed into intervals of equal amplitude, closed on the left and open on the right ( [a, b) ), with the frequency counted for each interval. All of this should be put in a table to be filled in by the student, and each run of the program should generate a different one.

Below that comes the solution of the solved table, where xi = class mark, fr = relative frequency, Fi = cumulative frequency, Fr% = cumulative frequency percentage, fr% = relative frequency percentage, and LI-LS are the lower and upper limits of each interval.

I would appreciate it very much. I attach an image of the idea.

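The question is for Mathematica, but the binning logic is language-independent, so here is a sketch of it in Python (the function name, the 50-value default, and the clamping of Sturges' rule to the 5-15 range are my choices). The fr% and Fr% columns are just the last two values times 100.

```python
import math
import random

def sturges_table(n=50, lo=0, hi=999, seed=None):
    """Build a grouped-frequency table: rows of (LI, LS, xi, fi, fr, Fi, Fr)."""
    rng = random.Random(seed)
    data = [rng.randint(lo, hi) for _ in range(n)]
    # Sturges' rule for the number of classes, clamped to the question's 5..15 range.
    k = min(15, max(5, math.ceil(1 + math.log2(n))))
    amplitude = math.ceil((max(data) - min(data) + 1) / k)  # equal class width
    rows, F, start = [], 0, min(data)
    for i in range(k):
        li = start + i * amplitude            # LI: lower limit (closed)
        ls = li + amplitude                   # LS: upper limit (open)
        fi = sum(li <= x < ls for x in data)  # frequency of the interval [li, ls)
        F += fi                               # Fi: cumulative frequency
        xi = (li + ls) / 2                    # xi: class mark
        rows.append((li, ls, xi, fi, fi / n, F, F / n))
    return rows
```

Printing the (LI, LS) pairs with blank frequency cells gives the student version, and the full rows give the solved version; a fresh run (or a different seed) gives a different table each time.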