adminhtml – Magento admin ajax call file upload

I’m trying to make an AJAX call on an admin page to upload a CSV file to a directory and then run a stock update function.

In my PHTML:

<form method="post" enctype="multipart/form-data" id="ajaxCall" >
<input name="form_key"  type="hidden" value="<?php /* @escapeNotVerified */ echo $block->getFormKey() ?>"  />
<span class="file-uploader-button action-default">Upload CSV file:</span> <input id="image_to_upload" type="file" name="file" required />
<br/>
<input type="submit" id="import"/>
</form>
<script>

require([
    "jquery"
], function ($) {
    // your code to send the ajax request here
    $.noConflict();
    var formdata = new FormData();
    $("#image_to_upload").on("change", function () {
        alert("hello");
        var file = this.files[0];
        if (formdata) {
            formdata.append("image", file);
            $.ajax({
                url: "admin/grid/index/index",
                type: "POST",
                data: {dat: formdata, form_key: window.FORM_KEY},

                success: function (request) {
                    console.log("success", request);
                },
                error: function (request, status, error) {
                    alert(request.responseText);
                }
            });
        }
    });
});

</script>

When I trigger this function I get the error below.

jquery.js:10079 Uncaught TypeError: Illegal invocation
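
For context, jQuery raises “Illegal invocation” when it tries to serialize a non-plain object such as FormData into a query string. Below is a minimal sketch of a request that avoids this (same controller URL as above; the form_key is appended to the FormData rather than mixed into a plain data object):

require(["jquery"], function ($) {
    $("#image_to_upload").on("change", function () {
        var formdata = new FormData();
        formdata.append("image", this.files[0]);
        formdata.append("form_key", window.FORM_KEY);
        $.ajax({
            url: "admin/grid/index/index",
            type: "POST",
            data: formdata,     // pass the FormData object itself
            processData: false, // stop jQuery from serializing it (the source of "Illegal invocation")
            contentType: false, // let the browser set the multipart boundary
            success: function (response) {
                console.log("success", response);
            },
            error: function (request, status, error) {
                alert(request.responseText);
            }
        });
    });
});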

I have this script in the phtml that belongs to the same controller I post to via AJAX, because I have to show product update progress on the same page. In that controller I will receive data as shown below if the AJAX call works. Please note this is the previously working code in my phtml; I want to rewrite it to controller standards.

<?php 
    $objectManager = \Magento\Framework\App\ObjectManager::getInstance();
    $fileSystem = $objectManager->create('\Magento\Framework\Filesystem');
    $mediaPath = $fileSystem->getDirectoryRead(\Magento\Framework\App\Filesystem\DirectoryList::MEDIA)->getAbsolutePath();
    $stockRegistry = $objectManager->create('\Magento\CatalogInventory\Api\StockRegistryInterface');
    $product = $objectManager->get('\Magento\Catalog\Model\Product');
if (isset($_POST['submit'])) {
    if (!file_exists($mediaPath . 'csv')) {
        mkdir($mediaPath . 'csv', 0777, true);
    }
    $file_type = $_FILES['file']['type']; // returns the mime type
    $allowed = array('text/csv');
    if (!in_array($file_type, $allowed)) { ?>
    <h1 class="error">Only CSV files allowed</h1>
    <tr>
        <td>Wrong format</td>
        <td>Wrong format</td>
        <td>Wrong format</td>
    </tr>
    <?php
    } else {
    $displayFlag = 1;
    $csv = $_FILES['file'];
    $targetdir = $mediaPath;
    $image_name = $_FILES['file']['name'];
    $temp = explode(".", $image_name);
    $newfilename = round(microtime(true)) . '.' . end($temp);
    $imagepath = $mediaPath . "csv/" . $image_name;
    if (move_uploaded_file($_FILES["file"]["tmp_name"], $imagepath)) {
        $csvFile = file($imagepath);
        $data = array();
        foreach ($csvFile as $line) {
            $data[] = str_getcsv($line, ",", '"');
        }
        var_dump(count($data));
        $keys = array();
        $result = array();
        foreach ($data as $key => $value) {
            if ($key == 0) {
                $keys = $value;
            }
        }
        foreach ($data as $key => $value) {
            if ($key !== 0) {
                $result[] = array_combine($keys, $value);
            }
        }
        foreach ($result as $key => $value) {
            if ($product->getIdBySku($value['sku'])) {
                $stockItem = $stockRegistry->getStockItemBySku($value['sku']);
                $stockItem->setQty($value['qty']);
                $sku = $value['sku'];
                if ($stockRegistry->updateStockItemBySku($sku, $stockItem)) {
                    ?>
                    <tr>
                        <td><?php echo $value['sku'] ?></td>
                        <td><?php echo $value['qty'] ?></td>
                        <td>Updated</td>
                    </tr>
                <?php
                } else { ?>
                    <tr>
                        <td><?php echo $value['sku'] ?></td>
                        <td><?php echo $value['qty'] ?></td>
                        <td>Not updated</td>
                    </tr>
                    <?php
                }
            } else { ?>
                 <tr>
                        <td><?php echo $value['sku'] ?></td>
                        <td><?php echo $value['qty'] ?></td>
                        <td>Sku Not found</td>
                    </tr>
            <?php
            }
        }
    }
}
}
?>
</table>
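
For the controller-standards rewrite, here is a rough sketch of how this logic could move into an admin controller. Everything module-specific is an assumption (the Vendor\Module namespace and route are invented; the injected classes simply mirror the object-manager calls above); it is a starting point, not a drop-in implementation:

<?php
namespace Vendor\Module\Controller\Adminhtml\Import; // hypothetical module

use Magento\Backend\App\Action;
use Magento\Backend\App\Action\Context;
use Magento\Framework\Controller\Result\JsonFactory;
use Magento\Framework\Filesystem;
use Magento\Framework\App\Filesystem\DirectoryList;
use Magento\CatalogInventory\Api\StockRegistryInterface;
use Magento\Catalog\Model\ProductFactory;

class Index extends Action
{
    private $resultJsonFactory;
    private $filesystem;
    private $stockRegistry;
    private $productFactory;

    public function __construct(
        Context $context,
        JsonFactory $resultJsonFactory,
        Filesystem $filesystem,
        StockRegistryInterface $stockRegistry,
        ProductFactory $productFactory
    ) {
        parent::__construct($context);
        $this->resultJsonFactory = $resultJsonFactory;
        $this->filesystem = $filesystem;
        $this->stockRegistry = $stockRegistry;
        $this->productFactory = $productFactory;
    }

    public function execute()
    {
        $result = $this->resultJsonFactory->create();
        $mediaPath = $this->filesystem
            ->getDirectoryRead(DirectoryList::MEDIA)
            ->getAbsolutePath();

        // Same upload/validation/stock-update logic as in the phtml,
        // but collecting rows and returning JSON instead of echoing <tr> markup.
        $rows = [];
        // ... move_uploaded_file(), str_getcsv(), updateStockItemBySku() ...

        return $result->setData(['rows' => $rows]);
    }
}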

file recovery – Open a (recovered) Microsoft Word document from 14 years ago

I have a Microsoft Word document that I wrote 14 years ago (in 2006); it is text-only.
I do not remember which version of Microsoft Word I used to write it, but the OS was Windows XP (in a different language than my current PC), and the file extension is “.doc”.
I have changed computers several times since then, but always kept that file on various HDDs.

Now I want to read this document again.
I plugged my external HDD into my current computer (Windows 10), and Explorer tells me that the file type is “Microsoft Word 97 – 2003 Document”.
On my PC I have the “Word App” installed (not sure which version that means; it is installed at “C:\Program Files\Microsoft Office\root\Office16\WINWORD.EXE”).

When I try to open my .doc file with this version of Word, I get this message box:

[screenshot of the message box]

I tried several encodings but the text preview was always garbled.
I remember that 10 years ago (in 2010) my laptop crashed; I brought it to a shop that salvaged its disk, and I think this file was among the recovered data.

Is there any way I can read the contents of my document again?

Postgresql: Write Binary File on Disk using ( Large Object )

I am trying to write a binary file (a DLL) to disk using PostgreSQL large objects.

First, I encoded the DLL file using base64:

cat file.dll | base64 -w 0
TVqQAAMAAAAEAAAA//8AALgAAAAAAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAA......

Then I used Python code to automate the process, since a large object stores only 2048 bytes per page (the first page is written with an UPDATE query, and every following page with an INSERT):

file = "TVqQAAMAAAAEAAAA//8AALgAAAAAAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAA......"

def insert_file(url, loid):
    log("(+) write the file of length %d into LO..." % len(file))
    for i in range(0, int(round(len(file)/2048))):
        file_chunk = file[i*2048:(i+1)*2048]
        if i == 0:
            sql = "UPDATE PG_LARGEOBJECT SET data=decode('%s', 'base64') where loid=%d and pageno=%d" % (file_chunk, loid, i)
        else:
            sql = "INSERT INTO PG_LARGEOBJECT (loid, pageno, data) VALUES (%d, %d, decode('%s', 'base64'))" % (loid, i, file_chunk)

Finally, exporting the DLL to the filesystem:

def export_file(url, loid):
    log("(+) Exporting UDF library to filesystem...")
    sql = "select lo_export(%d, 'C:\Users\Lab\file.dll')" % loid

However, the file that gets written to disk is corrupted. I am not sure where the problem is, but I feel it’s in the encoding itself. Do I need to escape some characters that should not be used with PostgreSQL?
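
Two things may be worth checking here: int(round(len(file)/2048)) can silently drop the final partial chunk, and each pg_largeobject page is expected to hold LOBLKSIZE (2048) decoded bytes, whereas the loop slices 2048 base64 characters, which decode to only 1536 bytes per page. A sketch that chunks the raw bytes first, so every page stores exactly 2048 decoded bytes (LOID is a hypothetical large-object OID; table layout as in the post):

import base64
import math

LOID = 16385  # hypothetical large-object OID
PAGE = 2048   # LOBLKSIZE: decoded bytes per pg_largeobject page

with open("file.dll", "rb") as f:
    raw = f.read()

statements = []
for i in range(math.ceil(len(raw) / PAGE)):  # ceil keeps the final partial page
    chunk_b64 = base64.b64encode(raw[i*PAGE:(i+1)*PAGE]).decode("ascii")
    if i == 0:
        statements.append("UPDATE pg_largeobject SET data=decode('%s','base64') "
                          "WHERE loid=%d AND pageno=%d" % (chunk_b64, LOID, i))
    else:
        statements.append("INSERT INTO pg_largeobject (loid, pageno, data) "
                          "VALUES (%d, %d, decode('%s','base64'))" % (LOID, i, chunk_b64))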

sharepoint online – Extract the Name of the uploaded file in document library into another column

You cannot use a calculated column with the Name column. You can build a SharePoint Designer workflow that runs on item creation and updates the columns with data extracted from the file name.

Alternatively, since you seem to be using SharePoint Online, you can also build a Microsoft Flow to do the same.

[screenshot of the Flow configuration]

To extract the first part I used the expression

substring(triggerBody()?['{Name}'], 0, lastIndexOf(triggerBody()?['{Name}'],'-'))

For the second part I used an expression

substring(triggerBody()?['{Name}'], add(lastIndexOf(triggerBody()?['{Name}'],'-'),1), sub(sub(length(triggerBody()?['{Name}']),lastIndexOf(triggerBody()?['{Name}'],'-')),1))
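
To sanity-check the math: for a name like ABC-123, lastIndexOf returns 3, so the first expression is substring(name, 0, 3) = ABC, and the second starts at add(3,1) = 4 with length sub(sub(7,3),1) = 3, giving 123.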

python – HTML – Selenium – Entering information into an HTML file through the GUI

I’m working on my own project, mostly for the sake of learning but also to solve a minor problem of mine. I’m currently stuck since I can’t find a good solution to my problem, so I would like to ask for some solution-design help. In terms of knowledge, I’m experienced with Python but new to Selenium and HTML.

The Project

The general Idea:

  • Receive a query string from the user
  • With that query, grab related information from a local MongoDB database
  • Go to a specified website, log in, and click your way to an HTML file, using Selenium
  • Enter the grabbed information into that HTML via the GUI, using Selenium

Selenium is necessary as said website does not have an API to allow you to enter this stuff from the outside.

More Specifically:

The website is Roll20, the string is the name of a custom-made Dungeons and Dragons spell, the information is everything related to said spell, and the HTML sheet is a DnD character sheet. So this is about automatically entering custom-made DnD spells from a local database into Roll20 character sheets.

The Problem

The general Problem I’m trying to solve is this:

Insert the information from the database into the HTML. Enter it into a form with the same general elements, but with different positioning in the HTML depending on the information you have (the spell’s level, which is either Cantrip or 1–9).

But I can’t find a good solution for how to do it. I can, to a degree, predict the XPath of the elements that need to be filled in (see screenshot 2), but that seems like more work than it should be, particularly since reading the HTML is difficult due to it having over 20 levels of indentation. Is predicting the XPath of the elements I want to fill the only decent solution, or is there another way? Am I using the wrong tool for the job?
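
For illustration, the usual alternative to predicting absolute XPaths is to anchor on a stable attribute and search relative to it. The URL and the attribute names below (data-level, spellname) are hypothetical placeholders, not Roll20’s real markup:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/charactersheet")  # placeholder URL

# Instead of an absolute XPath, anchor on a stable attribute of the
# spell-level section, then search relative to it.
section = driver.find_element(
    By.CSS_SELECTOR, "div[data-level='1']")       # the level-1 spell block
name_input = section.find_element(
    By.CSS_SELECTOR, "input[name*='spellname']")  # partial match on the name attribute
name_input.send_keys("Fireball")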

In case anyone wants to take a look, here is an example HTML file that I suggest looking at in Chrome.

DnD character sheet spell section, showing where the spells can be created. Overall 10 places are possible; creation happens by clicking "+".
The HTML elements that need to be filled in by Selenium

google drive – What happens to my file in shared folder after shared folder is no longer shared

If I upload a file into a shared folder on Google Drive, that file is automatically visible and available to everyone that has access to that shared folder. If the owner of the folder then removes the sharing from the folder (i.e. the folder becomes private), what happens to my file?

From my observation, the file still counts toward my quota, but I can no longer access it myself. Thus the file is still eating into my quota, yet I am unable to delete it.

In my case, it’s a 5 GB video file that was uploaded into a shared folder. I don’t care about not being able to access it (it was recorded for somebody else and uploaded into a folder shared by that somebody else). However it seems to be eating into my GDrive quota, which for the free account is only 15 GB. Thus I seem to have permanently lost a third of my quota this way.

How does this work then?

Less file not included on frontend. Magento 2.3.4

I’ve got a really strange issue. If I choose to minify or merge CSS, my site breaks, but not straight away; it seems to take some time, and then eventually style-l.css just doesn’t get included on the frontend.

The file is still generated, just not included on the frontend, and it seems to happen after some time (or after heavy load?).

I can’t seem to work out why it wouldn’t be included. When the site breaks, the style-l.css link is still accessible, so it’s not as if the file is missing.

Has anyone come across this before? I’m tearing my hair out, as there are no error messages anywhere in the logs. I can’t even work out when the site will break, or what I can do to make it break to begin testing; it’s completely random.

Using Magento 2.3.4 with the Smartwave Porto theme, patched to the latest version.

design – Architecture issue re best IPC method for multiple file descriptors in one process

This is a question about the architecture of an application that uses POSIX IPC to communicate between threads. The application uses multiple threads (clients) to send data to a single receiving (server) thread. Each thread is assigned to a separate core using its affinity mask. The threads are all within a single process, so there are no process boundaries to cross. The most important factors are performance and reliability.

Currently I use a named pipe (FIFO) to communicate between the multiple writers and the single reader. The writers all use the same file descriptor and the reader reads from a single pipe.

However, the data must be processed in core (thread) order, with core 0 first, then core 1, then core 2, etc. With only a single pipe the application must organize the incoming messages in core order which adds extra processing overhead. The messages are added to a memory buffer maintained by the server side.

A better architecture from the standpoint of the reader (server) would be a separate pipe/socket/shared-memory region (or other IPC channel) for each client. The server would read from the client file descriptors in core order, processing each record immediately upon receipt, then reading from the next core in round-robin fashion. That way the server avoids both the expense of the staging buffer and the overhead of organizing records into core order as they come in.

My question is, given the requirement described above, which of the POSIX IPC methods would be the best and most performant solution for this situation? I’m planning to go up to as many as 64 cores, so I would need up to 63 file descriptors on the client side. I don’t need bidirectional communication.

The lowest system overhead would (I think) be an anonymous pipe. The server side could simply loop through an array of file descriptors to read the data. However, I’m not clear whether an anonymous pipe can be used for threads in a single process because, “It is not very useful for a single process to use a pipe to talk to itself. In typical use, a process creates a pipe just before it forks one or more child processes.” https://www.gnu.org/software/libc/manual/html_node/Creating-a-Pipe.html#Creating-a-Pipe
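
For what it’s worth, a single process can use pipe() between its own threads, since both descriptors are visible to every thread; the GNU manual’s remark is about one thread talking to itself. A minimal sketch of the one-pipe-per-client, round-robin layout (error handling omitted; not a definitive design):

#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define NCLIENTS 4

static int pipes[NCLIENTS][2];   /* [i][0] = read end, [i][1] = write end */

static void *client(void *arg) {
    long id = (long)arg;
    char msg[64];
    snprintf(msg, sizeof msg, "record from core %ld", id);
    write(pipes[id][1], msg, strlen(msg) + 1);   /* one record per client */
    return NULL;
}

int main(void) {
    pthread_t tid[NCLIENTS];
    for (long i = 0; i < NCLIENTS; i++) {
        pipe(pipes[i]);                          /* one pipe per client thread */
        pthread_create(&tid[i], NULL, client, (void *)i);
    }
    /* Server: read in core order (round-robin), so no reordering is needed. */
    for (int i = 0; i < NCLIENTS; i++) {
        char buf[64];
        ssize_t n = read(pipes[i][0], buf, sizeof buf);
        if (n > 0) printf("core %d: %s\n", i, buf);
    }
    for (int i = 0; i < NCLIENTS; i++) pthread_join(tid[i], NULL);
    return 0;
}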

I currently use named pipes, which do work with threads in a single process, and which should work with multiple file descriptors.

I have also used UNIX domain datagram sockets with a single socket. My impression is that multiple sockets may be more system overhead than I need for this situation, but they may be the most performant solution.

Finally, I have considered POSIX shared memory, where each client thread has its own shared memory object. Shared memory is often described as the fastest IPC mechanism (https://www.softprayog.in/programming/interprocess-communication-using-posix-shared-memory-in-linux).

But with shared memory, there is the problem of synchronization. While the other IPC methods are basically queues where the data can be read one record at a time, shared memory requires a synchronization object like a semaphore or spinlock. As the man pages say, “Typically, processes must synchronize their access to a shared memory object, using, for example, POSIX semaphores.” (https://www.man7.org/linux/man-pages/man7/shm_overview.7.html.)
My concern is that the extra synchronization overhead may reduce the usefulness of shared memory in this situation.
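
As a concreteness check: within one process, “shared memory” is just ordinary memory, so the synchronization cost comes down to one semaphore pair per client. A minimal sketch with unnamed POSIX semaphores (a single-slot mailbox per client; error handling omitted, sizes arbitrary):

#include <semaphore.h>
#include <string.h>

#define NCLIENTS 4
#define SLOT_SZ  256

struct mailbox {
    sem_t full;            /* posted by client when a record is ready */
    sem_t empty;           /* posted by server when the slot is free  */
    char  slot[SLOT_SZ];   /* ordinary memory, same address space     */
};

static struct mailbox box[NCLIENTS];

void mailbox_init(void) {
    for (int i = 0; i < NCLIENTS; i++) {
        sem_init(&box[i].full, 0, 0);   /* pshared=0: threads, not processes */
        sem_init(&box[i].empty, 0, 1);
    }
}

/* Client side: block until the slot is free, then publish one record. */
void client_send(int id, const char *rec) {
    sem_wait(&box[id].empty);
    strncpy(box[id].slot, rec, SLOT_SZ - 1);
    sem_post(&box[id].full);
}

/* Server side: consume one record per client, in core order. */
void server_pass(char out[NCLIENTS][SLOT_SZ]) {
    for (int i = 0; i < NCLIENTS; i++) {
        sem_wait(&box[i].full);
        memcpy(out[i], box[i].slot, SLOT_SZ);
        sem_post(&box[i].empty);
    }
}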

Moreover, despite being billed as the fastest method, I am concerned about possible cache contention with shared memory. “(M)any CPUs need fast access to memory and will likely cache memory, which has two complications (access time and data coherence).” https://en.wikipedia.org/wiki/Shared_memory.

I could test each of these solutions, but before I choose one it would be helpful to ask the opinions of others on which IPC method would be the best for multiple pipes/sockets/shared memory for multiple clients, as described above.

Use Powershell to Download Webfiles Based on UTF8 Text File List

Aiming to

  • download website files via powershell;
  • get the name from each line of a text file;
  • with UTF8 support (supporting international text names);
  • & save the files to a directory with the same name.

This script will then be applied to 20+ folders to save tedious work.

Test One:

Get-Content "C:testfilename.txt" | ForEach-Object {Write-Host "http://Website_Address/$_.mp3" }

Powershell Screen Print

Test Two

$client = New-Object System.Net.WebClient
$client.Encoding = [System.Text.Encoding]::UTF8
$client.DownloadFile("https://web.com/name.mp3","C:\test\name.mp3")

Downloads Single File Correctly

Test Three

Get-Content "C:testfilename.txt" | ForEach-Object {
$client = new-object System.Net.WebClient
$client.Encoding = (System.Text.Encoding)::UTF8
$client.DownloadFile("https://web.com/'$_'.mp3","C:test'$_'.mp3")
 }
Clear-Variable -Name "client"

Powershell Error
(It gives two errors and does not save as expected; I’ve spent several hours on it.)

Filename.txt

11111
22222
33333
44444
55555
66666
77777
88888
99999
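
For comparison, a sketch of Test Three without the literal single quotes, which otherwise end up inside the URL and the file path as real characters (same web.com URL pattern and C:\test folder as above):

Get-Content "C:\test\filename.txt" | ForEach-Object {
    $client = New-Object System.Net.WebClient
    $client.Encoding = [System.Text.Encoding]::UTF8
    # $($_) expands the current line inside the double-quoted strings
    $client.DownloadFile("https://web.com/$($_).mp3", "C:\test\$($_).mp3")
    $client.Dispose()
}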

python – Code for reading file in jupyter notebook

df = pd.read_csv("C:\Users\Vishnu\Documents/data_python.csv")

I downloaded this file in encrypted form and extracted it into Documents on the C drive, at the location specified above.

This is the code I wrote to read the file saved on the C drive at that location; running it gives:

File “”, line 1
df = pd.read_csv("C:\Users\Vishnu\Documents/data_python.csv")
                 ^
SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 2-3: truncated \UXXXXXXXX escape

How do I remove this error and read this file in a Jupyter notebook?
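
For reference, the error comes from \U in C:\Users being parsed as a unicode escape inside a normal string literal. A raw string or forward slashes avoid it (path taken from the question):

import pandas as pd

# Raw string: backslashes are not treated as escape sequences
df = pd.read_csv(r"C:\Users\Vishnu\Documents\data_python.csv")

# Equivalent: forward slashes also work on Windows
df = pd.read_csv("C:/Users/Vishnu/Documents/data_python.csv")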