dialog – How Can Reviewers Understand the Consequences of Choosing a Comment in Stack Exchange's Low-Quality Review?

Summary

How to improve the comment-selection UI of the Low Quality review queue to better inform reviewers of the real impact of their choice.

Current situation

The current UI encountered by reviewers who have decided to delete a post in the low quality queue on Stack Exchange is as follows:

  1. You see the post and make the actual choice, i.e., whether you want to delete it, edit it, etc.:

    Main interface of the Low Quality queue

  2. Reviewers who choose to delete are then presented with a selection of canned comments to be left on their behalf:

    Selection of canned comments in the Low Quality queue

The problem

Many reviewers rarely or never choose the No comment needed option, even when a comment expressing the same thing already exists on the post (which makes a new comment redundant) or when none of the canned comments applies. Please take this problem as a given for this question; if you want to debate it, there is a question on Meta SE where that discussion would probably be better placed.

As far as I can tell, the reason is a mixture of:

  • A variant of banner blindness that causes reviewers not to (re)read the header of the comment dialog.

  • Users believing that their choice in the canned-comment dialog does more than just leave a comment, e.g., that it feeds into statistics or influences what happens to the post (for instance, that choosing This is a comment actually performs a comment conversion).

  • Users thinking that No comment needed is not the right choice and could be held against them. (This is not completely unjustified, since it is indeed the wrong choice if one of the canned comments fits and no such comment has been left yet.)

  • The UI emphasizing the reason for deletion (for example, "This is a thank-you comment."), not the comment that is actually selected.

My question

How can the UI be improved to make clear that the canned comments are just canned comments, and that leaving one is not always a good idea?

What I have considered so far

The following options do not convince me, although at least the first one would be better than nothing:

  • One could add a further explanation to the No comment needed option, for example a text such as: "This is a good choice if an existing comment already addresses the issues of the post." However, I fear this would fall prey to the same banner blindness and be ignored.

  • One could add another dialog level: once they have decided to delete, users are presented with a dialog in which they choose only between no comment, canned comment, and possibly custom comment (with these choices explained), and only if they choose canned comment are they shown the actual selection of comments. The downside is that this is yet another dialog level, and at the first level it is not clear which canned comments will be available.

  • Remove the bold headers that indicate the reason for deletion (for example, "This is a thank-you comment").
    However, this would considerably increase the effort of finding the right comment when one does apply.

transactions – 18: bad-txns-input-consumed – Bitcoin Stack Exchange

I tried to broadcast a raw transaction with sendrawtransaction on testnet and received the error "18: bad-txns-input-consumed".
As far as I can tell, my inputs are unspent outputs on testnet. Help!

{
    "Version": 1,
    "LockTime": 0,
    "Vin": [
        {
            "TxId": "ed34d4d2b9953a4ef585a13454a00501783675752b640b8cc68235afe5495498",
            "Vout": 1,
            "ScriptSig": {
                "Asm": "3045022100aa4cfe7482c52e961b617658d8029b524f8fe2a33ae3a4acd1d4f46684e8703f022077db4cd5b21413761b44e5511ddb45a91f2cbff52537677c2703a6f1f2cea6e4[ALL] 02763d72701702a421c464eb3a8dad8bd653d5fc2bddd1b08f1563d14952fda861",
                "Hex": "483045022100aa4cfe7482c52e961b617658d8029b524f8fe2a33ae3a4acd1d4f46684e8703f022077db4cd5b21413761b44e5511ddb45a91f2cbff52537677c2703a6f1f2cea6e4012102763d72701702a421c464eb3a8dad8bd653d5fc2bddd1b08f1563d14952fda861"
            },
            "CoinBase": null,
            "TxInWitness": null,
            "Sequence": 4294967295
        },
        {
            "TxId": "1838578c8d25a5febc846720fb0a41d2d8a7790ab78eb88ea34c8cf95456808a",
            "Vout": 12,
            "ScriptSig": {
                "Asm": "30450221008f7f898041cc6d7f71f110d29bfef37b880e0bc3290cbe958ed8a8321f8d34a0022034ddcbc93f9470a92da47cc667d1afe1f913ef5ded3d23b1c013954543[ALL] 03c2c30029d7b3b386f335cfee7ee1973d2f835837a5bc74c57a6e2300b4e06c60",
                "Hex": "4830450221008f7f898041cc6d7f71f110d29bfef37b880e0bc3290cbe958ed8a8321f8d34a0022034ddcbc93f9470a92da47cc667d1afe1f913ef5ded3d23b1c018639e1ea1558e012103c2c30029d7b3b386f335cfee7ee1973d2f835837a5bc74c57a6e2300b4e06c60"
            },
            "CoinBase": null,
            "TxInWitness": null,
            "Sequence": 4294967295
        }
    ],
    "Vout": [
        {
            "Value": 0.00000546,
            "N": 0,
            "ScriptPubKey": {
                "Asm": "OP_DUP OP_HASH160 967d51a069067fe51f61ffc68cb9a785da0232a0 OP_EQUALVERIFY OP_CHECKSIG",
                "Hex": "76a914967d51a069067fe51f61ffc68cb9a785da0232a088ac",
                "ReqSigs": 1,
                "Type": "pubkeyhash",
                "Addresses": [
                    "muEfn8o5UAe7unSgAos5kjfJyuP7bf7j6R"
                ]
            }
        },
        {
            "Value": 0,
            "N": 1,
            "ScriptPubKey": {
                "Asm": "OP_RETURN 6f6d6e6900000000000000000000000000000f4240",
                "Hex": "6a146f6d6e69000000000000000000000000000f4240",
                "ReqSigs": 0,
                "Type": "nulldata",
                "Addresses": null
            }
        },
        {
            "Value": 0.00013667,
            "N": 2,
            "ScriptPubKey": {
                "Asm": "OP_DUP OP_HASH160 4fe9fd815c8bcc5138b876f3f82056638a09893d OP_EQUALVERIFY OP_CHECKSIG",
                "Hex": "76a9144fe9fd815c8bcc5138b876f3f82056638a09893d88ac",
                "ReqSigs": 1,
                "Type": "pubkeyhash",
                "Addresses": [
                    "mnoVz8sgcofzGGdq54aqFUUtJBD99zXiYB"
                ]
            }
        }
    ],
    "TxId": "3a66d9e4bcfc130dd93c691ba3c6c80a170f8d4344f30d46cced6da7c3daeb78"
}
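
This error commonly indicates a problem with the transaction's inputs, typically that a referenced output has already been spent or is otherwise unavailable to the node. A first diagnostic step is to ask a testnet node whether each input is still an unspent output. Below is a minimal sketch against bitcoind's JSON-RPC interface; the credentials are placeholders, and 18332 is the default testnet RPC port. gettxout returns null for spent or unknown outputs, so a null result identifies the offending input.

import base64
import json
import urllib.request

RPC_URL = "http://127.0.0.1:18332"  # default testnet RPC port; adjust as needed
RPC_AUTH = base64.b64encode(b"user:password").decode()  # placeholder credentials


def rpc(method, *params):
    # Minimal JSON-RPC call to a local bitcoind.
    payload = json.dumps({"jsonrpc": "1.0", "id": "check",
                          "method": method, "params": list(params)}).encode()
    req = urllib.request.Request(RPC_URL, data=payload, headers={
        "Content-Type": "application/json",
        "Authorization": "Basic " + RPC_AUTH,
    })
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["result"]


# The two inputs of the failing transaction.
inputs = [
    ("ed34d4d2b9953a4ef585a13454a00501783675752b640b8cc68235afe5495498", 1),
    ("1838578c8d25a5febc846720fb0a41d2d8a7790ab78eb88ea34c8cf95456808a", 12),
]

for txid, vout in inputs:
    # gettxout returns null if the output is spent or does not exist.
    utxo = rpc("gettxout", txid, vout)
    print(txid, vout, "unspent" if utxo else "spent or unknown")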


postgresql – TimescaleDB Performance – Database Administrators Stack Exchange

I am trying to optimize some queries on a large table but cannot speed them up. I was wondering whether it is possible to make TimescaleDB run even faster.

Test results:

tracker=> select count(*) from clicks where click_at between '2019-04-15 00:00:00' and '2019-04-17 23:59:59';
  count
----------
 31385884
(1 row)

Time: 2306.110 ms (00:02.306)

When I try to group the results, it gets a little worse:




tracker=> select time_bucket('1 day', click_at) as ts, count(*) from clicks where click_at between '2019-04-15 00:00:00' and '2019-04-17 23:59:59' group by ts;
           ts           |  count
------------------------+----------
 2019-04-15 02:00:00+02 | 28855475
 2019-04-14 02:00:00+02 |  2530409
(2 rows)

Time: 3453.420 ms (00:03.453)

Both queries execute a parallel index or seq scan, depending on the chunk size.

Note that a chunk here is expected to hold at least 100 million records.

Can this be made faster, or not?

The clicks table structure is below.

Thank you in advance!

        
        
        
        Column         |           Type           | Collation | Nullable |               Default
-----------------------+--------------------------+-----------+----------+--------------------------------------
 id                    | bigint                   |           | not null | nextval('clicks_id_seq1'::regclass)
 click_at              | timestamp with time zone |           | not null | now()
 hash_id               | character varying        |           | not null |
 offer_id              | integer                  |           | not null |
 affiliate_id          | integer                  |           | not null |
 affiliate_sub         | text                     |           |          |
 affiliate_sub2        | text                     |           |          |
 affiliate_sub3        | text                     |           |          |
 affiliate_sub4        | text                     |           |          |
 affiliate_sub5        | text                     |           |          |
 source                | text                     |           |          |
 ip                    | inet                     |           |          |
 country_iso           | character varying        |           |          |
 connection_type       | smallint                 |           |          |
 asn                   | integer                  |           |          |
 lang                  | character varying        |           |          |
 referer               | text                     |           |          |
 device_types_id       | integer                  |           |          |
 browsers_id           | integer                  |           |          |
 offer_payout_types_id | integer                  |           |          |
 offer_urls_id         | integer                  |           |          |
 offer_files_id        | integer                  |           |          |
 affiliate_click_id    | text                     |           |          |
 affiliate_unique1     | text                     |           |          |
 affiliate_unique2     | text                     |           |          |
 affiliate_unique3     | text                     |           |          |
 affiliate_unique4     | text                     |           |          |
 affiliate_unique5     | text                     |           |          |
 user_variables        | text                     |           |          |
 devices_id            | integer                  |           |          |
 ip_proxy              | inet                     |           |          |
 tracking_users_id     | integer                  |           |          |
 conversion_status     | smallint                 |           |          |
Indexes:
    "clicks_affiliate_id_idx" btree (affiliate_id)
    "clicks_click_at_idx" btree (click_at DESC)
    "clicks_hash_id_idx" btree (hash_id)
    "clicks_id_idx" btree (id)
    "clicks_offer_id_idx" btree (offer_id)
    "clicks_tracking_users_id_idx" btree (tracking_users_id)
Triggers:
    ts_insert_blocker BEFORE INSERT ON clicks FOR EACH ROW EXECUTE PROCEDURE _timescaledb_internal.insert_blocker()
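
If these per-day counts are needed repeatedly, a TimescaleDB continuous aggregate can precompute them per time bucket so later queries avoid rescanning the raw chunks. A minimal sketch using psycopg2, assuming a recent TimescaleDB version; the view name clicks_daily and the connection string are placeholders:

import psycopg2

# Placeholder connection string; adjust to your setup.
conn = psycopg2.connect("dbname=tracker user=postgres")
conn.autocommit = True  # continuous aggregates cannot be created inside a transaction

with conn.cursor() as cur:
    # Precompute daily click counts once; subsequent queries read the
    # aggregate instead of scanning ~100M-row chunks.
    cur.execute("""
        CREATE MATERIALIZED VIEW clicks_daily
        WITH (timescaledb.continuous) AS
        SELECT time_bucket('1 day', click_at) AS ts, count(*) AS clicks
        FROM clicks
        GROUP BY ts
    """)
    cur.execute("SELECT ts, clicks FROM clicks_daily ORDER BY ts")
    for ts, clicks in cur.fetchall():
        print(ts, clicks)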


Memory Interface Issue – Computer Science Stack Exchange

I've learned that the memory interfacing problem arises when a memory with fewer locations than the processor's address space has to be connected to the processor,
for example when connecting memory chips with a total capacity of 8 KB to a CPU that can address 64 KB.

Can someone please explain what this means? Why can't I simply connect a chip with a smaller memory size to a CPU with more address lines?
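
For intuition, here is a toy sketch of the usual arrangement, with all numbers illustrative: the CPU's low-order address lines go to every chip's address pins, while the high-order lines drive a decoder whose outputs act as chip selects. With a 16-bit address bus (64 KB) and 8 KB chips, the low 13 bits select a location within a chip and the top 3 bits pick one of up to eight chips.

# Toy model of address decoding: 16 address lines (64 KB space),
# 8 KB chips (13 address lines each). The top 3 bits act as the
# decoder input (chip select); the low 13 bits address within a chip.
CHIP_SIZE = 8 * 1024   # 8 KB per chip -> 13 address lines
CHIP_BITS = 13

def decode(address):
    chip_select = address >> CHIP_BITS        # high 3 bits: which chip
    offset = address & (CHIP_SIZE - 1)        # low 13 bits: cell within the chip
    return chip_select, offset

for addr in (0x0000, 0x1FFF, 0x2000, 0xFFFF):
    cs, off = decode(addr)
    print(f"address {addr:#06x} -> chip {cs}, offset {off:#06x}")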

Project structure – technology stack for physics software

I have a project idea for physics formula-simulation software that can be extended with custom plugins. I'm currently thinking about the tech stack to use.

When I think about the users and their background, I would say this would be most useful for people in science, and in my experience most of them know Python pretty well, which suggests giving them the ability to write plugins in Python.

For the simulation part, I am currently thinking of 2D and 3D simulations of different physics models. C++ could be used, as there are several libraries in this area.

Therefore, my current outline is to use Python for the core, the UI, and the plugin interface, and C++ for the simulation part, with a binding library like pybind11 or similar for the connection.
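
To make the plugin side concrete, here is a minimal sketch of what the Python-facing plugin interface could look like; all names are hypothetical, not an established API:

from abc import ABC, abstractmethod
import importlib


class SimulationPlugin(ABC):
    """Base class that custom plugins would subclass (illustrative)."""

    @abstractmethod
    def name(self) -> str:
        ...

    @abstractmethod
    def run(self, state: dict) -> dict:
        """Advance or transform the simulation state."""
        ...


def load_plugin(module_name: str) -> SimulationPlugin:
    # Import a plugin module by name and instantiate its Plugin class.
    module = importlib.import_module(module_name)
    return module.Plugin()

Under this split, the C++ simulation core would sit behind the same kind of Python API via pybind11, so plugin authors never need to touch C++ directly.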

Does anyone have experience with this type of software and any input on the tech-stack options?


python – Simple Web Crawler – Code Review Stack Exchange

Hello, this is just my web-crawler function for small sites. I would like to know what I am doing wrong; I am grateful for any pointers.

import bs4 as bs
import urllib.request
from urllib.parse import urlparse, urljoin
import pprint


class Page(object):
    def __init__(self, base_url, url):
        self.url = url
        self.base_url = base_url

    def soup(self):
        # Fetch the page and parse it with BeautifulSoup.
        sauce = urllib.request.urlopen(self.url).read()
        return bs.BeautifulSoup(sauce, 'lxml')

    def title(self):
        soup = self.soup()
        return soup.title.string

    def links(self):
        # Collect all hrefs, resolve relative ones against base_url,
        # and keep only links that stay on the site.
        urls = []
        soup = self.soup()
        href = [i.get('href') for i in soup.findAll('a')]
        links = [i for i in (list(map(
            (lambda url: url if bool(urlparse(url).netloc) == True
             else urljoin(self.base_url, url)), href)))
            if i.startswith(self.base_url)]
        return links

    def map_page(self):
        map = {self.url: {'title': self.title(), 'links': set(self.links())}}
        return map


def site_map(base_url):
    map_pages = {}
    links_to_map = [base_url]

    def check_and_add(url):
        if url not in map_pages:
            # Queue the page's outgoing links, record its map entry,
            # then drop it from the queue.
            [links_to_map.append(i) for i in Page(base_url, url).links()]
            map_pages.update(Page(base_url, url).map_page())
            links_to_map.remove(url)
        else:
            links_to_map.remove(url)

    while links_to_map != []:
        url = links_to_map[0]
        check_and_add(url)

    pprint.pprint(map_pages)
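
For reference, a hypothetical invocation (the URL is a placeholder):

if __name__ == '__main__':
    site_map('https://example.com')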