Parallelizing download of git submodules not working?

I’m updating git submodules with --jobs to speed up cloning:

git submodule update --recursive --jobs 3

but it prints "Cloning into XXX" one at a time, which suggests the submodules are being cloned in serial order. Is this the correct behavior?

I'm expecting multiple clones to run simultaneously:

Cloning into XXX
Cloning into YYY
Cloning into ZZZ

git version 2.25.1 on Ubuntu 20.04
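For reference, the same parallelism can also be configured persistently instead of passing --jobs each time; submodule.fetchJobs is the corresponding git config key:

```ini
# ~/.gitconfig — default to 3 parallel jobs for submodule clone/fetch
[submodule]
	fetchJobs = 3
```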

user profile – SharePoint 2019 My site not working

We have come across a strange issue with My Site creation on a SharePoint 2019 on-premises deployment, where My Sites are not being created for users.

My Site gives the following error:

We’re sorry. Something went wrong with your My Site setup.
Please try again later or contact your help desk.

Event ID 8100 appears in Event Viewer:

Mysite provisioning failed for user:(i:0#.w|domainuser) with correlationid:(9583edc9-332f-4001-8e6d-298df66ec7d4) on retry attempt:(0) on queue type:(). Error:(SiteProvisioningException: ExceptionType: SelfServiceSiteCreate InnerException: Microsoft.SharePoint.SPException: Cannot complete this action.

Please try again. ---> System.Runtime.InteropServices.COMException: Cannot complete this action.

Please try again.0x80004005
at Microsoft.SharePoint.Library.SPRequestInternalClass.ApplyWebTemplate(String bstrUrl, String bstrWebTemplateContent, Int32 fWebTemplateContentFromSubweb, Int32 fDeleteGlobalListsWithWebTemplateContent, Int32 fIgnoreMissingFeatures, String& bstrWebTemplate, Int32& plWebTemplateId)
at Microsoft.SharePoint.Library.SPRequest.ApplyWebTemplate(String bstrUrl, String bstrWebTemplateContent, Int32 fWebTemplateContentFromSubweb, Int32 fDeleteGlobalListsWithWebTemplateContent, Int32 fIgnoreMissingFeatures, String& bstrWebTemplate, Int32& plWebTemplateId)
--- End of inner exception stack trace ---
at Microsoft.SharePoint.SPGlobal.HandleComException(COMException comEx)
at Microsoft.SharePoint.Library.SPRequest.ApplyWebTemplate(String bstrUrl, String bstrWebTemplateContent, Int32 fWebTemplateContentFromSubweb, Int32 fDeleteGlobalListsWithWebTemplateContent, Int32 fIgnoreMissingFeatures, String& bstrWebTemplate, Int32& plWebTemplateId)
at Microsoft.SharePoint.SPWeb.ProvisionWebTemplate(SPWebTemplate webTemplate, String webTemplateToUse, SPFeatureWebTemplate featureWebTemplate, Page page, SPFeatureDependencyErrorBehavior featureDependencyErrorBehavior, ICollection`1& featureDependencyErrors)
at Microsoft.SharePoint.SPWeb.ApplyWebTemplate(SPWebTemplate webTemplate, Page page, SPFeatureDependencyErrorBehavior featureDependencyErrorBehavior, ICollection`1& featureDependencyErrors)
at Microsoft.SharePoint.SPWeb.ApplyWebTemplate(String strWebTemplate)
at Microsoft.SharePoint.Administration.SPSiteCollection.AddInternal(SPSiteCollectionAddParameters param)
at Microsoft.SharePoint.Administration.SPSiteCollection.Add(SPSiteCollectionAddParameters param)
at Microsoft.SharePoint.SPSite.SelfServiceCreateSite(SPSiteCollectionAddParameters param)
at Microsoft.Office.Server.SiteProvisioning.SiteProvisioningManager`1.<>c__DisplayClass33.b__32()
Microsoft.Office.Server.SiteProvisioning.SiteProvisioningException: Exception of type 'Microsoft.Office.Server.SiteProvisioning.SiteProvisioningException' was thrown. ---> Microsoft.SharePoint.SPException: Cannot complete this action.

**Error in ULS logs**

Site creation failure for user 'UserName + URL'. The exception was: Microsoft.SharePoint.SPException: Cannot complete this action. Please try again. ---> System.Runtime.InteropServices.COMException: Cannot complete this action. Please try again.0x80004005
at Microsoft.SharePoint.Library.SPRequestInternalClass.ApplyWebTemplate(String bstrUrl, String bstrWebTemplateContent, Int32 fWebTemplateContentFromSubweb, Int32 fDeleteGlobalListsWithWebTemplateContent, Int32 fIgnoreMissingFeatures, String& bstrWebTemplate, Int32& plWebTemplateId)
at Microsoft.SharePoint.Library.SPRequest.ApplyWebTemplate(String bstrUrl, String bstrWebTemplateContent, Int32 fWebTemplateContentFromSubweb, Int32 fDeleteGlobalListsWithWebTemplateContent, Int32 fIgnoreMissingFeatures, String& bstrWebTemplate, Int32& plWebTemplateId)
--- End of inner exception stack trace ---
at Microsoft.SharePoint.SPGlobal.HandleComException(COMException comEx)
at Microsoft.SharePoint.Library.SPRequest.ApplyWebTemplate(String bstrUrl, String bstrWebTemplateContent, Int32 fWebTemplateContentFromSubweb, Int32 fDeleteGlobalListsWithWebTemplateContent, Int32 fIgnoreMissingFeatures, String& bstrWebTemplate, Int32& plWebTemplateId)
at Microsoft.SharePoint.SPWeb.ProvisionWebTemplate(SPWebTemplate webTemplate, String webTemplateToUse, SPFeatureWebTemplate featureWebTemplate, Page page, SPFeatureDependencyErrorBehavior featureDependencyErrorBehavior, ICollection`1& featureDependencyErrors)
at Microsoft.SharePoint.SPWeb.ApplyWebTemplate(SPWebTemplate webTemplate, Page page, SPFeatureDependencyErrorBehavior featureDependencyErrorBehavior, ICollection`1& featureDependencyErrors)
at Microsoft.SharePoint.SPWeb.ApplyWebTemplate(String strWebTemplate)
at Microsoft.SharePoint.Administration.SPSiteCollection.AddInternal(SPSiteCollectionAddParameters param)
at Microsoft.SharePoint.Administration.SPSiteCollection.Add(SPSiteCollectionAddParameters param)
at Microsoft.SharePoint.SPSite.SelfServiceCreateSite(SPSiteCollectionAddParameters param)
at Microsoft.Office.Server.SiteProvisioning.SiteProvisioningManager`1.<>c__DisplayClass33.b__32().

We are not sure whether we are missing a dependency feature or a permission. We have ensured that all services and service applications are running fine, and the User Profile Service is syncing all users correctly as well.

Any help will be greatly appreciated.

Thanks in advance

google kubernetes engine – GKE elastic search not working with Kibana

I am trying to access my Elasticsearch cluster on GKE via my domain. I am running Elasticsearch on GKE behind an external load balancer, and I am using an Ingress to map it to the URL.

I can access my Elasticsearch cluster via my domain (ebc.com) and it works fine with my existing Python code. But when I try to use Kibana via docker-compose:

version: '2'
services:
  kibana:
    image: docker.elastic.co/kibana/kibana:6.3.2
    environment:
      SERVER_NAME: localhost
      #ELASTICSEARCH_URL: https://<abc.com>
      ELASTICSEARCH_URL: http://<ip-address>:9200
    ports:
      - 5601:5601

When I run this, I can access Elasticsearch via Kibana, but when I try:

ELASTICSEARCH_URL: https://<abc.com>

it doesn't work and redirects me to:

http://localhost:5601/login?next=%2Fapp%2Fkibana#/home?_g=()

So my question is: can we access Elasticsearch via the domain, or do I need to pass port 9200 all the time? In my Ingress, I can't add port 9200.

What should I do to achieve this goal: open the port on the GKE load balancer or on the Ingress? Or is there a setting in Kibana that lets me access it via the plain domain?
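For illustration, here is a minimal GKE Ingress sketch that routes the domain to the Elasticsearch service on its HTTP port (all names and the domain are placeholders; whether this fits depends on how the service is exposed in your cluster):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: elastic-ingress          # placeholder name
spec:
  rules:
    - host: abc.com              # placeholder domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: elasticsearch   # placeholder Service name
                port:
                  number: 9200        # Elasticsearch HTTP port
```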

Will there be (stronger) colour banding on 8bit wide gamut display when working in sRGB?

Let's say I have a monitor (8-bit colour depth) that covers 100% of AdobeRGB. Now I want to work on photos in sRGB for whatever reason. Will the monitor show more granular colour steps in an sRGB application, since it can still quantize the smaller sRGB gamut with 8-bit resolution? Or will the absolute colour depth stay the same, meaning the effective colour depth in sRGB decreases to somewhat below 8 bits?

I hope my point is clear. If you need any further explanation, please let me know.

Thanks in advance!
Rummelbooz
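For illustration, the second scenario (effective depth below 8 bits) can be roughly quantified. This is only a sketch: the 70% figure is an assumed per-channel approximation of how much of the AdobeRGB range sRGB occupies, not a measured value, and real displays remap rather than simply truncate.

```python
import math

# Assumption: the panel quantizes its native AdobeRGB gamut with 8 bits
# per channel, and sRGB occupies only a fraction of that range.
native_steps = 2 ** 8        # 256 levels per channel
srgb_fraction = 0.70         # assumed share of the AdobeRGB range per channel

# Only the steps that fall inside the sRGB subset remain usable.
steps_in_srgb = native_steps * srgb_fraction
effective_bits = math.log2(steps_in_srgb)
print(f"~{steps_in_srgb:.0f} usable steps, ~{effective_bits:.1f} effective bits per channel")
```

Under these assumptions, the effective per-channel depth drops to roughly 7.5 bits, which is the kind of reduction the question is asking about.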

powerapp – Patch Command Not Working, Creating the Record in the Existing Item

I have an application which has 3 leave types to select from for a leave request. I am using a form control, but I submit the data using the Patch function. I show the form fields according to the leave type selected in the form's dropdown.

On the submit button I have set this code:

If(
    DataCardValue1.Selected.Value = "Time Off",
    Patch(
        LeaveRequests,
        {
            ID: LeaveGallery.Selected.ID
        },
        {
            LeaveID: CurrentUserID,
            User: DataCardValue6.Text,
            Requestor: MyUserEmail,
            Approver: {
                Claims: Concatenate(
                    "i:0#.f|membership|",
                    DataCardValue3.Selected.Email // Person email
                ),
                Department: "",
                DisplayName: "",
                Email: DataCardValue3.Selected.Email, // Person email
                JobTitle: "",
                Picture: ""
            },
            Description: DataCardValue2.Text,
            TimeOffDate: DataCardValue22.SelectedDate,
            TimeOffFrom: DataCardValue19.Selected,
            TimeOffTo: DataCardValue20.Selected,
            TotalTimeRequested: Label23.Text,
            LeaveStatus: "Pending"
        }
    ),
    DataCardValue1.Selected.Value = "Annual Leave",
    Patch(
        LeaveRequests,
        {
            ID: LeaveGallery.Selected.ID
        },
        {
            LeaveID: CurrentUserID,
            User: DataCardValue6.Text,
            Requestor: MyUserEmail,
            Approver: {
                Claims: Concatenate(
                    "i:0#.f|membership|",
                    DataCardValue3.Selected.Email // Person email
                ),
                Department: "",
                DisplayName: "",
                Email: DataCardValue3.Selected.Email, // Person email
                JobTitle: "",
                Picture: ""
            },
            Description: DataCardValue2.Text,
            StartDate: StartDate.SelectedDate,
            EndDate: EndDate.SelectedDate,
            DaysCount: Label25.Text,
            LeaveStatus: "Pending"
        }
    ),
    DataCardValue1.Selected.Value = "Sick Leave",
    Patch(
        LeaveRequests,
        {
            ID: LeaveGallery.Selected.ID
        },
        {
            LeaveID: CurrentUserID,
            User: DataCardValue6.Text,
            Requestor: MyUserEmail,
            Approver: {
                Claims: Concatenate(
                    "i:0#.f|membership|",
                    DataCardValue3.Selected.Email // Person email
                ),
                Department: "",
                DisplayName: "",
                Email: DataCardValue3.Selected.Email, // Person email
                JobTitle: "",
                Picture: ""
            },
            Description: DataCardValue2.Text,
            StartDate: StartDate.SelectedDate,
            EndDate: EndDate.SelectedDate,
            DaysCount: Label25.Text,
            LeaveStatus: "Pending"
        }
    ),
    DataCardValue1.Selected.Value = "Casual Leave",
    Patch(
        LeaveRequests,
        {
            ID: LeaveGallery.Selected.ID
        },
        {
            LeaveID: CurrentUserID,
            User: DataCardValue6.Text,
            Requestor: MyUserEmail,
            Approver: {
                Claims: Concatenate(
                    "i:0#.f|membership|",
                    DataCardValue3.Selected.Email // Person email
                ),
                Department: "",
                DisplayName: "",
                Email: DataCardValue3.Selected.Email, // Person email
                JobTitle: "",
                Picture: ""
            },
            Description: DataCardValue2.Text,
            StartDate: StartDate.SelectedDate,
            EndDate: EndDate.SelectedDate,
            DaysCount: Label25.Text,
            LeaveStatus: "Pending"
        }
    )
);
Navigate(SucessScreen, None);

After submitting, the data is stored in the SharePoint list, but when I create another record it overwrites the existing one instead of creating a new record.

Any help?
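For context, here is a minimal Power Fx sketch (hypothetical record values, not taken from this app) of how Patch chooses between updating and creating based on its second argument: a record that already exists in the data source is modified in place, while Defaults() produces a base record for a brand-new row.

```
// Updating: the second argument is an existing record, so Patch modifies it
Patch(LeaveRequests, LookUp(LeaveRequests, ID = 1), { LeaveStatus: "Pending" });

// Creating: Defaults() yields a fresh base record, so Patch adds a new row
Patch(LeaveRequests, Defaults(LeaveRequests), { LeaveStatus: "Pending" })
```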

magento2 – set default filter status on sales order grid, not working

I am trying to set a default status filter on the sales order grid with this code:

app/code/FP/Orders/Model/ResourceModel/Order/Grid/Collection.php

<?php

namespace FP\Orders\Model\ResourceModel\Order\Grid;

use Magento\Framework\Data\Collection\Db\FetchStrategyInterface as FetchStrategy;
use Magento\Framework\Data\Collection\EntityFactoryInterface as EntityFactory;
use Magento\Framework\Event\ManagerInterface as EventManager;
use Psr\Log\LoggerInterface as Logger;

use Magento\Sales\Model\ResourceModel\Order\Grid\Collection as OriginalCollection;

/**
 * Order grid extended collection
 */
class Collection extends OriginalCollection
{
    /**
     * @var \Magento\Backend\Model\Auth\Session
     */
    protected $_adminSession;
    public function __construct(
        EntityFactory $entityFactory,
        Logger $logger,
        FetchStrategy $fetchStrategy,
        EventManager $eventManager,
        \Magento\Backend\Model\Auth\Session $adminSession,
        $mainTable = 'sales_order_grid',
        $resourceModel = \Magento\Sales\Model\ResourceModel\Order::class
    ) {
        $this->_adminSession = $adminSession;
        parent::__construct($entityFactory, $logger, $fetchStrategy, $eventManager, $mainTable, $resourceModel);
    }
    
   protected function _renderFiltersBefore()
   {
    $objectManager = \Magento\Framework\App\ObjectManager::getInstance();
    $request = $objectManager->get("Magento\Framework\App\Request\Http");
 
    $module_controller_action = $request->getActionName();
    if($module_controller_action == 'send'){
      $this->getSelect()->where("status = 'processing'");
    }else if($module_controller_action == 'fact'){
      $this->getSelect()->where("status = 'pending'");
    }
    parent::_renderFiltersBefore();
   }
}

This code does not work, but the one I show below does. I don't understand why $this->getSelect()->where("status = 'pending'") does not work when it is executed inside the conditional. Help, please.

<?php

namespace FP\Orders\Model\ResourceModel\Order\Grid;

use Magento\Framework\Data\Collection\Db\FetchStrategyInterface as FetchStrategy;
use Magento\Framework\Data\Collection\EntityFactoryInterface as EntityFactory;
use Magento\Framework\Event\ManagerInterface as EventManager;
use Psr\Log\LoggerInterface as Logger;

use Magento\Sales\Model\ResourceModel\Order\Grid\Collection as OriginalCollection;

/**
 * Order grid extended collection
 */
class Collection extends OriginalCollection
{
    /**
     * @var \Magento\Backend\Model\Auth\Session
     */
    protected $_adminSession;
    public function __construct(
        EntityFactory $entityFactory,
        Logger $logger,
        FetchStrategy $fetchStrategy,
        EventManager $eventManager,
        \Magento\Backend\Model\Auth\Session $adminSession,
        $mainTable = 'sales_order_grid',
        $resourceModel = \Magento\Sales\Model\ResourceModel\Order::class
    ) {
        $this->_adminSession = $adminSession;
        parent::__construct($entityFactory, $logger, $fetchStrategy, $eventManager, $mainTable, $resourceModel);
    }
    
   protected function _renderFiltersBefore()
   {
    
    $this->getSelect()->where("status = 'pending'");
    parent::_renderFiltersBefore();
   }
}

my app/code/FP/Orders/view/adminhtml/layout/orders_sales_fact.xml

<page xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="urn:magento:framework:View/Layout/etc/page_configuration.xsd">

    <body>
       <referenceContainer name="content">
            <uiComponent name="sales_order_grid"/>
            <settings>
                <filterUrlParams>
                    <param name="status">processing</param>
                </filterUrlParams>
            </settings>
        </referenceContainer>
    </body>

</page>

my app/code/FP/Orders/etc/di.xml

<?xml version="1.0"?>
<config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="urn:magento:framework:ObjectManager/etc/config.xsd">
    <type name="Magento\Framework\View\Element\UiComponent\DataProvider\CollectionFactory">
        <arguments>
            <argument name="collections" xsi:type="array">
                <item name="sales_order_grid_data_source" xsi:type="string">FP\Orders\Model\ResourceModel\Order\Grid\Collection</item>
            </argument>
        </arguments>
    </type>
</config>

us citizens – U.S. expat working for a U.S. employer: Do I need work authorization for the country I reside in?

I am a US citizen living in an EU country (Poland), and I have been living here for close to a year on a national type-D visa. I have been offered a remote job at a US institution, but they are asking whether I am authorized to work in Poland. Is this relevant, since I would not be working for a Polish (or EU) based company? In other words, if I live abroad and want to work remotely for a job back home, do I need work authorization for the country I reside in?

journald compression not working – Ask Ubuntu

It seems the journal files on my Ubuntu 18.04 LTS server are not compressed, even though compression is enabled by default (I did not change it in /etc/systemd/journald.conf) and journalctl claims to see compressed files:

# journalctl --header | grep PRESS | uniq
Incompatible Flags: COMPRESSED-LZ4

# journalctl --disk-usage
Archived and active journals take up 4.0G in the file system.

# journalctl -o verbose | wc
4 GB in 90 million lines  # same size as the journal files themselves

# journalctl -o verbose | gzip | wc -c
193 MB  # reduced by a factor of 20

grep -v '^#' /etc/systemd/journald.conf
[Journal]
Storage=persistent
SystemMaxUse=4G
SystemKeepFree=4G
SystemMaxFileSize=100M
MaxFileSec=1week
SyncIntervalSec=1
LineMax=1K
ForwardToConsole=yes
MaxLevelConsole=crit
MaxLevelWall=alert
RateLimitIntervalSec=2min
RateLimitBurst=2000

When I manually compress one of the /var/log/journal/*/*.journal files I see a reduction by a factor of 5, and since already-compressed data cannot be compressed much further by another compressor, these files seem to be uncompressed.

How can I get systemd-journald to compress my journal?
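For reference, this is how the option would be set explicitly in /etc/systemd/journald.conf. Note that, per journald.conf(5), Compress= applies to individual data objects above a size threshold rather than to whole files, which may account for part of the discrepancy observed above:

```ini
[Journal]
# Boolean, or a byte-size threshold above which data objects are compressed
Compress=yes
```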