❓ASK – Will ADA dump after the launch of smart contracts?

The Cardano cryptocurrency has been gaining a lot of attention, and acceptance of the coin keeps growing. The rising number of companies invested in it has also helped increase its market cap considerably. Tomorrow, Cardano's smart-contract system is expected to launch, and many people are concerned that the price of the coin will then drop in a massive dump. Do you think this will be the case? How do you expect the integration of smart contracts to affect Cardano's price, and why do you think so?

dump – Write text in Notepad.exe and find that text in memory (RAM)

For a research project, I want to type some text into Notepad and then search the whole RAM for that text.
I used WinDbg Preview and attached Notepad to the debugger, but I couldn't figure out how it works.
Another solution would be to dump the whole memory and then search for the text with a hex editor. Please help me if you have any suggestions.
Thanks in advance.
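For what it's worth, here is a sketch of how the attach-and-search route can work from the WinDbg (Preview) command window. The search string, the address range, and the hit address below are assumptions; note that Notepad keeps its edit buffer as UTF-16, so the `-u` (Unicode) form of the search is the one most likely to hit:

```
$$ After attaching WinDbg Preview to notepad.exe and breaking in,
$$ search the user-mode address space for the UTF-16 form of the text:
0:000> s -u 0 L?0x7fffffffffff "some text"

$$ Same search for an ASCII encoding, in case a narrow copy exists too:
0:000> s -a 0 L?0x7fffffffffff "some text"

$$ For any hit, show which memory region it belongs to (address is hypothetical):
0:000> !address 0x1a2b3c4d
```

The `L?` size prefix allows a range larger than WinDbg's usual sanity limit, which is what makes a whole-address-space sweep possible; expect it to be slow.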


postgresql – Dump all data to json files

I'm working on a script that has to dump app data from a schema to JSON files. Here is what I have so far:

export SCHEMA="citydb"
export DB="kv_db"
export PGPASSWORD=root

psql -U postgres -Atc "select tablename from pg_catalog.pg_tables where schemaname='$SCHEMA'" $DB |
  while read TBL; do
    name=${TBL::-1} # for some reason a whitespace character is appended to the end of each name
    echo -e "Table name: $name";
    touch $name".json"
    psql -U postgres o $name".json"
    psql -U postgres -c "COPY (SELECT row_to_json(t) FROM $SCHEMA.$name as t) TO 'c:/users/me/desktop/$name.json'" $DB > $name.json
  done

I'm pretty much building on information about how to export to CSVs, since I couldn't find any information about dumping all the data to JSON.

I’m getting an error that says:

psql: warning: extra command-line argument "plant_cover.json" ignored
psql: error: FATAL:  database "o" does not exist
ERROR:  could not open file "c:/users/me/desktop/plant_cover.json" for writing: Permission denied
HINT:  COPY TO instructs the PostgreSQL server process to write a file. You may want a client-side facility such as psql's \copy.

Any ideas?
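As an aside: the server-side `COPY ... TO '<file>'` in the script is what triggers the "Permission denied" error, because that file is written by the PostgreSQL server process, not by psql. A minimal sketch of the client-side route the HINT points at, reusing the schema and database names from the question (everything else is an assumption and untested against a live server):

```shell
export SCHEMA="citydb"
export DB="kv_db"
export PGPASSWORD=root

psql -U postgres -Atc "select tablename from pg_catalog.pg_tables where schemaname='$SCHEMA'" $DB |
  while read TBL; do
    # strip stray whitespace instead of chopping the last byte off the name
    name=$(echo "$TBL" | tr -d '[:space:]')
    echo "Table name: $name"
    # \copy runs COPY through psql itself, so the file is written client-side:
    psql -U postgres -c "\copy (SELECT row_to_json(t) FROM $SCHEMA.$name AS t) TO '$name.json'" $DB
  done
```

This also removes the stray `psql -U postgres o ...` line, which was being parsed as a connection to a database literally named "o".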

debugging – How to dump the input of a seccomp BPF filter

I am writing a program that creates BPF seccomp filters. These filters are supposed to check syscalls and their arguments against predefined allowed values. The logic to check the syscall by its number works as expected. However, the logic to filter the syscall arguments does not.

For debug purposes, is it possible to dump the input data of the filter program (seccomp_data) to see what it saw when it attempted to filter the syscall?

If that is not possible, is there another way to debug a raw BPF seccomp filter?

I know that libseccomp exists but this is an independent implementation.

php – Contact Form 7 show API response with Var Dump?

I have created a custom function attached to Contact Form 7 so that, on submission, it fires off an API request string populated with the recorded form data. I want to debug and see what response I am receiving from the API server. I have tried using var_dump to show the response, but it doesn't output the response anywhere I can see it. What am I doing wrong, or not doing, here?


 // hook into CF7 just before the mail is sent (the hook used here is an assumption)
 add_action( 'wpcf7_before_send_mail', 'test_cf7_api_sender' );

 function test_cf7_api_sender( $contact_form ) {

     if ( $contact_form->title() === 'Quote_form' ) {
         $submission = WPCF7_Submission::get_instance();

         if ( $submission ) {
             $posted_data = $submission->get_posted_data();

             // array access uses brackets, not parentheses
             $name    = $posted_data['your-name'];
             $surname = $posted_data['your-name2'];
             $phone   = $posted_data['your-phone'];
             $url     = "http://test.com/API?name=$name&surname=$surname&phone=$phone";

             $response = wp_remote_post( $url );

             // var_dump output is swallowed by CF7's AJAX submission;
             // writing to the PHP error log makes the response visible:
             error_log( print_r( $response, true ) );
         }
     }
 }

backup – MySQL Dump only views, triggers, events and procedures

How are you?

I need to generate a file with all the triggers, events, and procedures from a server's databases. I tried the two commands below:

  • mysqldump -u root -p --all-databases --host= --no-data
    --no-create-db --no-create-info --routines --triggers --skip-comments
    --skip-opt --default-character-set=utf8 -P3306 > E:db_objects_no_create.sql

  • mysqldump -u root -p --all-databases --host= --no-data
    --no-create-db --routines --triggers --skip-comments --skip-opt
    --default-character-set=utf8 -P3306 > E:db_objects.sql

With the first command, the output file does not contain the creation code correctly; the statements come out commented out.

With the second, the creation code is dumped correctly, but the table creation code is exported as well.

Does anyone know how to do this export without the table creation code ending up in the output file?
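Two hedged observations that may explain what you are seeing. First, the CREATE TRIGGER / CREATE PROCEDURE statements that look "commented out" are typically wrapped in `/*!50003 ... */` conditional comments; MySQL executes the contents of those comments when replaying the file, so the first dump may actually be usable as-is. Second, a flag combination along these lines (the empty `--host=` and the output path are kept from the question) is the usual way to get only the stored objects, with `--events` added so scheduled events are included and `--skip-opt` dropped because it flips several defaults at once:

```shell
mysqldump -u root -p --host= -P3306 --all-databases \
  --no-data --no-create-db --no-create-info \
  --routines --triggers --events \
  --default-character-set=utf8 > E:db_objects_only.sql
```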

How to import a binary SQL dump into MySQL 5.7

Today I received a binary SQL dump script, efs_bbs.sql (MySQL 5.7). When I open it with a text editor, it looks like this:

b#eHE-SafeNetLOCKΩA®@¨g%;ägWnímã˛~Q5åÔÔÃÄÜ)hT(ı=≥ùOf2Æ  ÔCf‹flo
î≈ª«SlC¢%ïDúi*(âVfiNfÛpEͶ«µC‡4∞ä'Ÿ~($e „˘§ÂøÔ`„ÿÆ-=iU+)ÉÒgI≥G‡)ÕfiˇÉ4¡ÜSå≠3îxçœsùòuõB˘–Ö%O$Uzòo€Œ$Aaç≤GòθÆ/Zz‡’
tômj0Ò¶óÕw∑>Ô¡éüÁ±‘¡•J≥∞˘ù#Ìi   «RòUîÙJ€“R˘EÎbÍWõ+Jº$i7+.±;

What should I do to import the SQL into my database? I have tried this command:

mysql: (Warning) Using a password on the command line interface can be insecure.
ERROR at line 1: Unknown command '?'.


Is there any other way to import it? Am I doing it the wrong way?
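The leading bytes quoted above ("HE-SafeNetLOCK") suggest the file is not SQL text at all but something produced or wrapped by a SafeNet encryption product, which would also explain mysql failing on line 1. A quick way to check what you actually have; the sample file below only stands in for efs_bbs.sql:

```shell
# Stand-in for the real file, written with the bytes quoted in the question:
printf 'b#eHE-SafeNetLOCK' > sample.bin

# Inspect the first bytes; a real SQL dump would begin with readable text
# such as "-- MySQL dump ...":
head -c 64 sample.bin | od -c

# On the real file:
#   head -c 64 efs_bbs.sql | od -c
#   file efs_bbs.sql
```

If the header really is a vendor wrapper, the file needs to be decrypted or unwrapped by whoever produced it before mysql can import it.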

forensics – Memory dump analysis

I have received a file called memory.dump (1 GB), which is in a dump format I've never come across.
I usually use Volatility, and in most cases it worked just fine for FDA, but in this case it just doesn't do anything; only a 'No suitable address space mapping found' message pops up.

Thanks for any advice, and I'm very grateful for your time.
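"No suitable address space mapping found" usually means Volatility 2 could not match a profile/address space to the image, not that the file is unreadable. A few things worth trying, offered as a sketch (the file name comes from the question; everything else assumes a stock Volatility install):

```shell
# Identify the container format first; many "memory dumps" are really
# VM snapshots, ELF cores, or crash dumps that need conversion:
file memory.dump

# Volatility 2: let it scan for a usable profile/address space:
volatility -f memory.dump imageinfo
volatility -f memory.dump kdbgscan

# Volatility 3 needs no profile and supports more layouts, so it is
# worth a try when Volatility 2 refuses the image:
python3 vol.py -f memory.dump windows.info
python3 vol.py -f memory.dump banners.Banners   # Linux kernel banner scan
```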

postgresql – Restoring a dump of a schema (made with the -n option of pg_dump) of a postgis database; schema “public” already exists

While trying to restore a PostgreSQL dump that was made using the -n public option of pg_dump (see:
Postgis extension doesn't seem to be taken into account when restoring a PostGIS database: ERROR: type "public.geometry" does not exist), I now understand that it's mandatory for me to create the database and the necessary extension(s) manually before restoring.

It almost works: now the postgis extension is there (it no longer complains about it) when the restore begins:


(One thing to note here, is that the database name inside the dump file is mydatabase_prod, so, I have to set up --dbname=mydatabase_dev to match the name I need for development purposes.)

But this also means I cannot use the --clean and --create options of pg_restore, because they would obviously undo what I manually prepared. That's not a big problem, because I restore this dump from scratch inside a fresh postgis Docker container. But…

But now, I am facing this issue:

db_1   | pg_restore: connecting to database for restore
db_1   | pg_restore: creating SCHEMA "public"
db_1   | 2021-07-04 17:14:28.123 UTC (185) ERROR:  schema "public" already exists
db_1   | 2021-07-04 17:14:28.123 UTC (185) STATEMENT:  CREATE SCHEMA public;    
db_1   | pg_restore: while PROCESSING TOC:
db_1   | pg_restore: from TOC entry 28; 2615 26623 SCHEMA public postgres
db_1   | pg_restore: error: could not execute query: ERROR:  schema "public" already exists
db_1   | Command was: CREATE SCHEMA public;

(...restoring all tables)

db_1   | pg_restore: warning: errors ignored on restore: 1
db_1   | 
db_1   | PostgreSQL Database directory appears to contain a database; Skipping initialization
db_1   | 
db_1   | 2021-07-04 17:18:25.043 UTC (1) LOG:  starting PostgreSQL 13.3 (Debian 13.3-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
db_1   | 2021-07-04 17:18:25.044 UTC (1) LOG:  listening on IPv4 address "", port 5432
db_1   | 2021-07-04 17:18:25.045 UTC (1) LOG:  listening on IPv6 address "::", port 5432
db_1   | 2021-07-04 17:18:25.051 UTC (1) LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1   | 2021-07-04 17:18:25.058 UTC (27) LOG:  database system was interrupted; last known up at 2021-07-04 17:17:10 UTC
db_1   | 2021-07-04 17:18:25.087 UTC (27) LOG:  database system was not properly shut down; automatic recovery in progress
db_1   | 2021-07-04 17:18:25.091 UTC (27) LOG:  redo starts at 0/F720C0B8
db_1   | 2021-07-04 17:18:25.117 UTC (28) LOG:  incomplete startup packet

This didn't seem to be an issue at first glance, because plenty of lines describing the tables actually being restored are displayed on the console… until I noticed that every command following this pg_restore one in my bash script is simply not executed, as if the database "bugged" and restarted at this precise point, before all the commands that follow pg_restore. Also, if I add the --exit-on-error flag to pg_restore, it simply stops after the first error is met (i.e. schema "public" already exists).

Therefore, I'm stuck. I have to work with that dump file made using the -n public option, so I have to create the database and install postgis prior to the restore in order to avoid the linked issue. On the other hand, if I want the public schema not to be there, in order to avoid the schema "public" already exists error, it seems I need to start from a fresh database, either by using the --clean --create options of pg_restore, or by breaking the restore into two commands as explained here: https://dba.stackexchange.com/a/90319/196371. But both solutions would drop the postgis extension in return, which would bring me back to the first error…

I cannot see any way out of this snake-biting-its-tail situation…

A first option would be for pg_dump to also dump the extensions, which is currently not possible; another would be an option to 'ignore' the fact that a schema already exists when restoring a dump made with the -n flag on that precise schema (I didn't find such an 'ignore schema' option in the docs, but it would be great).

Any advice at this point? Did I miss something (I hope so)? Should I look for a full database dump (which I may not get)? …
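One way out of the loop, offered as a sketch rather than a tested recipe: pg_restore can take an edited table-of-contents list via -L, so the CREATE SCHEMA public entry can be dropped from the restore while everything else, including the manually created postgis extension, is kept. The dump file name is a placeholder, and the toy toc.list below only mimics the entry format shown in the log above:

```shell
# 1) List the archive's table of contents:
#      pg_restore -l mydatabase_prod.dump > toc.list
#    For illustration, a toy toc.list mimicking that output:
cat > toc.list <<'EOF'
28; 2615 26623 SCHEMA - public postgres
199; 1259 26651 TABLE public some_table postgres
EOF

# 2) Remove the entry that recreates the public schema:
grep -v 'SCHEMA - public' toc.list > toc.filtered

# 3) Restore only the remaining entries into the prepared database:
#      pg_restore --dbname=mydatabase_dev -L toc.filtered mydatabase_prod.dump
cat toc.filtered
```

With the SCHEMA entry filtered out, pg_restore never issues CREATE SCHEMA public, so the "already exists" error (and the restart it seems to trigger in your script) should not occur.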