sharepoint online – Can’t drag & drop files into library and users cannot download files

I have a document library that contains about 8000 files in various folders in SharePoint Online.

Up until today, I haven’t had any issues, but starting today, I can no longer drag & drop files into any of the folders. When I drag a file over the library, it says the normal “Drag the items to any location” but dropping them does nothing.

However, I can still upload files individually via “New -> Item”.

Another user who needs to get the files now goes into the library and does not have a Download button on the bar at the top of the library. She only has “Delete Item” and “Move To”. That’s it.

I created a new test document library in the same site, and I can drag & drop to it without a problem. Did MS roll out some restriction today?

magento2.3 – Assigning multiple drop ship vendors for a single SKU in Magento 2.4.2

  1. Assigning multiple drop ship vendors for a single SKU:

What would be the best way to have an item record store “Drop Ship Vendor Information” for a specific SKU or SKU variant (e.g., a product comes in two sizes, small and large; the small size ships via drop ship supplier 1 and the large size via drop ship supplier 2)? In addition, we will connect to the drop ship vendors via EDI to obtain up-to-date costs and QTY Available (i.e., stock availability).

For instance, each SKU should have an option to add a minimum of 4 drop ship vendors:

Vendor 1 Name | Vendor 1 SKU | Vendor 1 Cost | Vendor 1 QTY Available

Vendor 2 Name | Vendor 2 SKU | Vendor 2 Cost | Vendor 2 QTY Available

Vendor 3 Name | Vendor 3 SKU | Vendor 3 Cost | Vendor 3 QTY Available

Vendor 4 Name | Vendor 4 SKU | Vendor 4 Cost | Vendor 4 QTY Available

This way, our system and our customer service agents are able to know which vendor or vendors carry this item. I need to have this info on the Magento item record since my CS agents won’t have access to our ERP system.

This is how it’s done now with our NetSuite ERP (from the item record):

NOTE: Not all drop ship vendors use the same SKU (code) for the exact same item, so it’s important to store each vendor’s own SKU.
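For illustration only, here is a minimal sketch of the kind of per-SKU vendor data described above, written as a generic SQL table; the table and column names are invented and this is not an actual Magento or NetSuite schema:

-- Hypothetical, generic sketch of the vendor-per-SKU data described above.
-- Not a real Magento or NetSuite table; names are invented for illustration.
CREATE TABLE sku_drop_ship_vendor (
    sku           varchar(64)  NOT NULL,  -- our SKU or SKU variant
    vendor_name   varchar(255) NOT NULL,  -- drop ship vendor
    vendor_sku    varchar(64)  NOT NULL,  -- the vendor's own code for the same item
    vendor_cost   numeric(12,4),          -- latest cost, refreshed via EDI
    qty_available integer,                -- latest stock availability, refreshed via EDI
    PRIMARY KEY (sku, vendor_name)
);

Each SKU would then simply have up to four (or more) rows in such a structure, one per drop ship vendor.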

Any suggestions, please?

migration – How can there be so much “business logic” for a company that they cannot drop their old COBOL/mainframe code?

Something which has always confused me is this. I keep hearing about big, old corporations which were around in the 1950s (for example), started using COBOL-coded business logic on IBM mainframes early on, and are now apparently unable to “migrate to something modern”, even though they want to and maintaining COBOL programs is expensive for various reasons.

First of all, I, probably more than anyone, like the idea of old, big computers chugging away decade after decade. It makes me feel cozy somehow. But this is not about me having a fascination with old computers and stable software; I’m simply wondering about the business need to keep running them even if they have established that it’s financially problematic, and the company doesn’t have a CEO who happens to have a fondness for old computers.

Basically, how can a company have so much “business logic” that they cannot simply hire well-paid experts to re-implement it all as PHP CLI scripts, for example? (No, that’s not a joke. I’d like to hear one single valid argument as to why PHP would be unfit to process all the business logic of a major corporation.) But let’s not get hung up on PHP, even though I’d be using it. Any “modern” solution that they can maintain for a fraction of the COBOL/mainframe price would do.

Is the answer simply that there is no such thing? All modern software/hardware is unreliable, ever-changing, ever-breaking garbage? Or do they have such extreme amounts of special rules and weird things happening in their “business logic”, spanning millions of lines of code, that the sheer amount of work to translate this over to a modern system simply costs too much? Are they worried that there will be mistakes made? Can’t they do this while keeping the old system, running both at the same time for a long time, and comparing the output/result, only replacing the old one once they have established that the new one has worked identically for years and years?

I don’t understand why they would insist on COBOL/mainframes. I must be grossly underestimating the kind of “code” that exists in a big, old company.

Do they really have zillions of special rules such as:

If employee #5325 received a bonus of over $53 before the date 1973-05-06, then also update employee #4722's salary by a percentage determined by their performance score in the last month counting from the last paycheck of employee #532

? I almost can’t believe that such fancy, intricate rules could exist. I must be missing something.

postgresql – Drop intermediate results of recursive query in Postgres

I’m trying to aggregate the contents of a column from a directed acyclic graph (DAG) stored in a Postgres table.

Each dag row has an id, bytes, and may also have a parent referencing another row. I’m trying to write a function which identifies every “tip” in the graph starting from a particular parent id. The result needs to include each graph tip and its aggregated collected_bytes from the parent id to that tip. (The DAG can be very deep, so some collected_bytes arrays can have millions of elements.)

The function below works, but memory usage grows quadratically as the collected_bytes get longer. The results CTE keeps a copy of every iteration of collected_bytes until the end of the query, then the ranked CTE is used to select only the deepest node for each tip.

I think I’m approaching this incorrectly: how can I do this more efficiently?

Is it possible to instruct Postgres to drop the intermediate results as the recursive query is running? (So we can also skip the ranked CTE?) Is there a more natural way to achieve the result below?

DROP TABLE IF EXISTS dag;
CREATE TABLE dag (
    id bigint PRIMARY KEY,
    parent bigint,
    bytes bytea NOT NULL,
    FOREIGN KEY (parent) REFERENCES dag(id)
);
INSERT INTO dag (id, parent, bytes) VALUES  (0, NULL, '\x0000'),
                                            (1, NULL, '\x0100'),
                                            (2, 0, '\x0200'),
                                            (3, 1, '\x0300'),
                                            (4, 2, '\x0400'),
                                            (5, 3, '\x0500'),
                                            (6, 4, '\x0600'),
                                            (7, 5, '\x0700'),
                                            (8, 4, '\x0800');

DROP FUNCTION IF EXISTS get_descendant;
CREATE FUNCTION get_descendant (input_id bigint)
    RETURNS TABLE(start_id bigint, end_id bigint, collected_bytes bytea[], depth bigint)
    LANGUAGE sql STABLE
    AS $$
    WITH RECURSIVE results AS (
            SELECT id AS start_id, id AS end_id, ARRAY[bytes] AS collected_bytes, 0::bigint AS depth
                FROM dag WHERE id = input_id
        UNION ALL
            SELECT start_id,
                   dag.id AS end_id,
                   collected_bytes || dag.bytes AS collected_bytes,
                   depth + 1 AS depth
                FROM results INNER JOIN dag
                    ON results.end_id = dag.parent
                WHERE depth < 100000
    ),
    ranked AS (
        SELECT *, rank() over (PARTITION BY start_id ORDER BY start_id, depth DESC) FROM results
    )
    SELECT start_id, end_id, collected_bytes, depth FROM ranked WHERE rank = 1;
$$;

Here’s the result for 0, which has two valid tips, with ids of 6 and 8. The collected_bytes field is the aggregation of bytes along each path:

postgres=# SELECT get_descendant.* FROM get_descendant(0::bigint);
 start_id | end_id |              collected_bytes              | depth 
----------+--------+-------------------------------------------+-------
        0 |      6 | {"\x0000","\x0200","\x0400","\x0600"} |     3
        0 |      8 | {"\x0000","\x0200","\x0400","\x0800"} |     3
(2 rows)

By contrast, here are the intermediate results before being ranked (after which only the maximum depths are selected):

postgres=# SELECT get_descendant.* FROM get_descendant(0);
 start_id | end_id |              collected_bytes              | depth 
----------+--------+-------------------------------------------+-------
        0 |      0 | {"\x0000"}                               |     0
        0 |      2 | {"\x0000","\x0200"}                     |     1
        0 |      4 | {"\x0000","\x0200","\x0400"}           |     2
        0 |      6 | {"\x0000","\x0200","\x0400","\x0600"} |     3
        0 |      8 | {"\x0000","\x0200","\x0400","\x0800"} |     3
(5 rows)

As you can see, this implementation is already wasting ~half of the memory in use. How can I make this more memory efficient?

Thanks!

Trying to populate choices in drop down on PowerApps based on SharePoint People Picker field

I have a PowerApps app that connects to a SharePoint list. There is a column, Approver, that I want to use to filter a report. Approver is a People Picker field. I want to have a drop-down input box on my PowerApps screen that has the names of Approvers from the SharePoint list, so that I can filter the results based on approvers rather than typing in a name from the GAL. Is this possible, and if so, any tips on how to do this?

postgresql – Cannot drop database in Postgres

I would like to understand why I am not able to drop a database from cmd.
I execute the command drop database if exists <db_name>, but the result is as below:

postgres=# drop database testV5;

ERROR:  database "testV5" does not exist

knowing that the database exists:

postgres=# \l
                                  Liste des bases de données
    Nom    | Propriétaire | Encodage |  Collationnement   |    Type caract.    | Droits d'accès
-----------+--------------+----------+--------------------+--------------------+----------------
 testV4    | postgres     | UTF8     | French_Canada.1252 | French_Canada.1252 |
 testV5    | postgres     | UTF8     | French_Canada.1252 | French_Canada.1252 |
 postgres  | postgres     | UTF8     | French_Canada.1252 | French_Canada.1252 |
 template0 | postgres     | UTF8     | French_Canada.1252 | French_Canada.1252 |
 template1 | postgres     | UTF8     | French_Canada.1252 | French_Canada.1252 |

Can anyone explain that, please?
Thanks in advance.

sharepoint online – Unable to style the Label Color of an Office UI Fabric React Drop Down Control

I am creating an SPFx 1.11 web part where I have imported Dropdown, IDropdownProps, IDropdownOption from office-ui-fabric-react as:

import {
  Dropdown,
  IDropdownProps,
  IDropdownOption
} from 'office-ui-fabric-react';

However, I need to style the label of the Dropdown, for which I need to import IDropdownStyles as well. When I try to import IDropdownStyles, I get the error below:

'office-ui-fabric-react/lib/Dropdown' has no exported member 'IDropdownStyles'

Please find the attached code screenshot. The dependencies from package.json are listed below:

"dependencies": {
    "@fluentui/react": "^8.8.0",
    "@microsoft/sp-core-library": "~1.4.1",
    "@microsoft/sp-lodash-subset": "~1.4.1",
    "@microsoft/sp-office-ui-fabric-core": "^1.11.0",
    "@microsoft/sp-webpart-base": "~1.4.1",
    "@types/react": "15.6.6",
    "@types/react-dom": "15.5.6",
    "@types/webpack-env": ">=1.12.1 <1.14.0",
    "react": "15.6.2",
    "react-dom": "15.6.2"
  },
  "devDependencies": {
    "@microsoft/sp-build-web": "~1.4.1",
    "@microsoft/sp-module-interfaces": "~1.4.1",
    "@microsoft/sp-webpart-workbench": "~1.4.1",
    "gulp": "~3.9.1",
    "@types/chai": ">=3.4.34 <3.6.0",
    "@types/mocha": ">=2.2.33 <2.6.0",
    "ajv": "~5.2.2"
  }

Can anyone guide me on what I may be doing wrong here?


deletion – How to drop unwanted data when you are too good at backing things up?

I have a vast collection of pictures/photos of a certain kind, extremely carefully curated in countless steps, leaving only “10/10”-grade ones. Having realized that it just poisons my mind to keep looking at what I can never have in reality, I’ve decided to get rid of them. A number of times by now.

You see, I first delete the directory tree on my live machine. Then I sync the local backups (different disks in the same machine) so that none of them has a copy either. But since I have a rolling backup scheme for all my important files, plus full disk backups, I still have copies of the dir tree on each of my backup disks and sticks. Following my schedule, it will take a full year to cycle through them all before none of them contains a backup of the deleted dir tree.

Each time I sync a couple of disks (every other month), or every 14 days for the USB sticks, I am tempted to copy over the backup to my local machine, and I keep doing it. Over and over. And soon it’s re-synced again. And the whole thing has to start over.

Sitting there and going through every storage device at the same time would take many hours of labor, and expose my backup disks to potential malware that might be installed right now on my machine and which will infect/corrupt/delete them all before I notice. So it would be very annoying and also very dangerous.

How am I supposed to ever get rid of this data when I have this rock-solid backup system? I frankly never foresaw this problem when I started doing it.

index – SQL Server – Overlapping NC Indexes, Drop Question

If you drop IX_1, all queries that were previously using it will start using IX_2 instead. Because IX_2 is larger (one more key column and two included columns), it has:

  • More non-leaf level pages to support the second key column
  • More leaf level pages to store the INCLUDE column values for each row
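To make the size difference concrete, here is a rough sketch of what the two indexes might look like; the table and column names are invented, since the actual definitions aren’t shown:

-- Hypothetical definitions, for illustration only (table and columns made up).
CREATE NONCLUSTERED INDEX IX_1
    ON dbo.Orders (CustomerId);

CREATE NONCLUSTERED INDEX IX_2
    ON dbo.Orders (CustomerId, OrderDate)   -- one extra key column
    INCLUDE (OrderStatus, TotalDue);         -- two extra included columns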

Any queries that currently use IX_1 most likely don’t need these extra fields, otherwise they’d already be using IX_2. With IX_1 gone, they’re going to have to read a lot more data to be able to produce the same result set.

A single-column index like IX_1 typically has very low impact on write workloads, depending on the data type. I can’t see any compelling reason to drop it.