team – GitHub source of truth describing repo ownership

I want to create a source of truth describing which team in my organisation owns which GitHub repo. In theory, the team who owns a repo may not be the same team who created it. Ideally I would like this information to live inside GitHub for convenience.

When I go to a repo, I want to quickly be able to see which team owns it. And for a given team, I want to be able to see all the repos they own.

I know that teams can be assigned read/write access to repos, but I don’t want the code owners to have any special permissions. Anyone in my org should be able to read/write to any repo, irrespective of being a code owner – the idea of ownership here is purely informational.

How can I do this?
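For reference, the "code owners" concept mentioned above is configured on GitHub through a CODEOWNERS file. Note that CODEOWNERS drives review requests rather than granting extra repository permissions, so whether it counts as "purely informational" depends on whether branch protection makes those reviews required. A minimal sketch, with a placeholder team slug:

```
# .github/CODEOWNERS
# Map the whole repository to one owning team (placeholder slug).
* @my-org/platform-team
```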

c# – Should Source Generators be used as a replacement for class libraries

You’re mixing a few concepts here.

You’ve got class wrappers, you’ve got code generators, you’ve got build-on-deploy. The solution you’re talking about works if you can safely build the application per deployment, which is a big assumption.

A different architectural approach might be to make your logging feature an interface, allow any number of implementations of that interface, and perhaps scan for implementations of that interface at runtime.

Here’s an example of how you might make that work using Reflection:

https://stackoverflow.com/questions/26733/getting-all-types-that-implement-an-interface
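A minimal sketch of that reflection-based scan (the `IMyLog` interface and the plugin-directory layout are illustrative assumptions, not taken from the question):

```csharp
using System;
using System.IO;
using System.Linq;
using System.Reflection;

public interface IMyLog
{
    void Write(string message);
}

public static class LogLoader
{
    public static IMyLog[] LoadAll(string pluginDirectory)
    {
        // Load every DLL dropped into the plugin directory...
        foreach (var dll in Directory.GetFiles(pluginDirectory, "*.dll"))
            Assembly.LoadFrom(dll);

        // ...then find and instantiate every concrete type implementing IMyLog.
        return AppDomain.CurrentDomain.GetAssemblies()
            .SelectMany(a => a.GetTypes())
            .Where(t => typeof(IMyLog).IsAssignableFrom(t)
                        && !t.IsAbstract && !t.IsInterface)
            .Select(t => (IMyLog)Activator.CreateInstance(t))
            .ToArray();
    }
}
```

This assumes implementations have a public parameterless constructor; a DI container or `MEF`-style composition would be the more robust version of the same idea.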

In terms of deployment, you’re saying something like “This app logs to anything that implements IMyLog, so if you want logging to work make sure you drop a DLL in the runtime directory that has an IMyLog in it.”

My point is that you seem to be solving a deployment problem with a complicated workaround that might force you to compile the application individually for each installation. I’m suggesting that you consider solving the deployment problem at the point of deployment instead.

windows 10 – Target NVME SSD same size as Source after cloning

I’ve just done an offline clone of a 1 TB NVMe SSD to a 2 TB NVMe SSD (I tried software cloning a few times, but I always had issues booting into Windows). Whilst the clone does boot, it shows the exact same size as the original source disk, meaning 1 TB of storage is missing and does not show up as unallocated space.

Disk Management Screenshot

EDIT: To be clear, I have also tried shrinking the volume, and I’m afraid that didn’t work, nor did running extend without parameters in diskpart.
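For reference, the diskpart attempt described above amounts to something like the following session (the volume number here is a placeholder; the correct one must be read off the list volume output first):

```
DISKPART> list volume
DISKPART> select volume 2
DISKPART> extend
```

Without parameters, extend grows the selected volume into adjacent unallocated space, which is why it can only work once the clone actually exposes that space.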

open source – Maintenance Activities and maintainability

Question 1

Assume a project is started as an open source project and is under active development. Is dividing and labeling the maintenance activities useful while maintaining the project?
- How is it useful for the contributors?
- How is it useful for those who use the project?
- How is it useful for someone who is looking for an open source project to use?

Question 2

Assume a project is started inside an organization, is completed in the given time, and conforms to its initial requirements. After this initial phase of completion, the maintenance phase starts.
Is dividing and labeling the maintenance activities useful for someone inside the organization while maintaining the project?

Assume every maintenance activity belongs to one of the following:

- Corrective maintenance
- Adaptive maintenance
- Perfective maintenance
- Preventive maintenance

Resource

“ISO/IEC/IEEE International Standard for Software Engineering – Software Life Cycle Processes – Maintenance,” ISO/IEC 14764:2006 (E), IEEE Std 14764-2006 (Revision of IEEE Std 1219-1998), pp. 1–58, 1 Sept. 2006, doi: 10.1109/IEEESTD.2006.235774.

network – In what scenarios is relying on source IP address as a security control acceptable?

Say the source IP is always authenticated, so the source IP is guaranteed to identify the real sender of the packet.

Would it be able to counter attacks like:

- SQL injection
- MITM
- TCP SYN flooding
- Cross-site request forgery
- or even XSS (cross-site scripting)?

My thoughts are that it would give weak protection at best.

godot – Is it better to connect signals in source code or with the IDE feature?

In the Godot editor, what are the advantages and the caveats of using the visual editor to connect signals to listeners instead of doing it in code, like this:

```
var state: bool

onready var countdown: Timer = $Countdown

func _ready() -> void:
    state = true
    countdown.connect("timeout", self, "_toggle")  # Timer's built-in signal is "timeout"

func _toggle() -> void:
    state = not state
```
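For comparison, a connection made through the editor is stored in the scene file rather than in the script; in the .tscn it appears roughly as follows (node paths are placeholders matching the snippet above):

```
[connection signal="timeout" from="Countdown" to="." method="_toggle"]
```

One practical caveat of the editor route is that renaming the listener method in the script silently breaks the connection recorded in the scene file.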

powerbi – Unable to get Sharepoint Online data Web data source for Power BI

What I want to do is get data into Power BI from a web data source. The Excel file was uploaded to SharePoint Online as a document, and then I copied its link as the URL for the web data source on the Power BI side. It was working perfectly a few days ago, but now whenever I try to retrieve data from the web it always says:

DataSource.Error

Web.BrowserContents currently supports only anonymous credentials


Even if I clear the global permissions through File > Options and settings > Data source settings > Global permissions > Clear All Permissions, the result is the same: I am still unable to get the data from the web source. What could be wrong?
Any suggestions? I have been stuck on this problem for 3 days and unable to solve it until now.

Thanks in advance for your help.
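A hedged aside on one alternative worth knowing about (site URL and file name below are placeholders): for workbooks stored in SharePoint Online, the SharePoint.Files connector is typically used instead of the Web connector, which in Power Query M looks roughly like:

```
let
    Source = SharePoint.Files("https://contoso.sharepoint.com/sites/MySite", [ApiVersion = 15]),
    Workbook = Excel.Workbook(Source{[Name = "Data.xlsx"]}[Content])
in
    Workbook
```

Unlike the Web connector, this connector authenticates with organizational credentials rather than anonymously.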

c# – Exposing an event source as an interface, but where the source may expose events individually or as a collection

I am using a publish-subscribe event broker, and am constructing libraries to help people interact with this broker consistently.

One of the things I would like to design is some kind of IEventSource interface that each potential publisher could implement. The broker libraries would then natively understand the IEventSource interface, meaning the application developers would not need to code up all of the plumbing to get events from their system into the broker as publishers.

In trying to think about what this interface should look like, I am running into one fundamental problem: Some event sources will want to publish events in batches (for example, events coming from something like a database with change data capture, or events coming from files full of records). Other systems can natively raise events in a serial fashion as they happen, and would be publishing them one at a time.

So, what does my IEventSource look like? Does it expose the available events as a collection? Some kind of stream? As individual events via a GetEvent() method?

I could of course decide that the available events are always exposed as a collection, and if a particular publisher is publishing one at a time, they simply have a collection with a single member. But this seems inelegant and inefficient.

One important consideration is that the IEventSource will need to have some way of being told that the available events have been successfully received, because the goal is to guarantee successful transmission to the broker, with no events being lost. To use the cdc-enabled-database example again, we might get a batch of 500 events from the source, we then start publishing to the broker, and encounter some kind of issue on the 301st event. Ideally we wouldn’t want to retransmit all 500 messages, just pick up again at 301 and send the last 200. So we’d want to tell the source “you can go ahead and discard the first 300 now”. So in addition to some way to get events from the source (individually or in batches) I would want some way to “acknowledge” the receipt of events back to the source – again, individually or in batches.

I could, of course, simply have an IEventBatchSource and an ISingleEventSource, and build two different implementations using those two different interfaces all the way through the plumbing. But for obvious reasons I’d prefer not to write all of the plumbing twice.
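One way to reconcile the batch and serial shapes under a single contract is a pull-based batch method with a maximum count, plus a checkpoint-style acknowledgement, so a serial source simply returns batches of one. All names below are illustrative assumptions, not an established API:

```csharp
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

// Sketch only: a single interface serving both batch and serial publishers.
public interface IEventSource<TEvent>
{
    // Returns up to maxCount pending events; a serial source may return one,
    // a CDC-style source may return a full batch.
    Task<IReadOnlyList<TEvent>> GetEventsAsync(int maxCount, CancellationToken ct);

    // Checkpoint-style acknowledgement: everything up to and including this
    // event has safely reached the broker and may be discarded by the source.
    Task AcknowledgeAsync(TEvent lastPublished, CancellationToken ct);
}
```

With a checkpoint acknowledgement, the 500-event example works as described: after the failure on event 301, the plumbing acknowledges event 300, and on retry the source resumes from event 301.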

8 – Custom migration source plugin not enabled in source site error

I’m trying to write a custom source plugin for a D7-to-D8 migration. I want to extend the Drupal\node\Plugin\migrate\source\d7\Node plugin (Node.php).

I’m getting the following error when I try to run the migration from the UI:

Migration my_node_page did not meet the requirements. The module
my_xtra is not enabled in the source site. source_module:
my_xtra.

When I run drush ms I get a different error

No database connection configured for source plugin variable

The database connection key is specified in the group yml and works for other migrations which don’t use the custom source plugin.

Here’s my yml file

id: my_node_page
label: Node Basic Page
audit: true
migration_tags:
  - Drupal 7
  - Content
migration_group: my_group
source:
  plugin: my_xtra_node
  node_type: 'page'
process:

Here’s my custom plugin.

    <?php
    
    namespace Drupal\my_extra\Plugin\migrate\source;
    
    use Drupal\node\Plugin\migrate\source\d7\Node;
    
    /**
     * Drupal 7 node source from database.
     *
     * @MigrateSource(
     *   id = "my_xtra_node",
     *   source_provider = "node",
     *   source_module = "my_xtra",
     *   
     * )
     */
    class NodeXtra extends Node {
    
    
      /**
       * {@inheritdoc}
      */
      public function query() {
        // Select node in its last revision.
        $query = $this->select('node_revision', 'nr')
          ->fields('n', [
            'nid',
            'type',
            'language',
            'status',
            'created',
            'changed',
            'comment',
            'promote',
            'sticky',
            'tnid',
            'translate',
          ])
          ->fields('nr', [
            'vid',
            'title',
            'log',
            'timestamp',
          ]);
        $query->addField('n', 'uid', 'node_uid');
        $query->addField('nr', 'uid', 'revision_uid');
        $query->innerJoin('node', 'n', static::JOIN);
    
        // If the content_translation module is enabled, get the source langcode
        // to fill the content_translation_source field.
        if ($this->moduleHandler->moduleExists('content_translation')) {
          $query->leftJoin('node', 'nt', 'n.tnid = nt.nid');
          $query->addField('nt', 'language', 'source_langcode');
        }
        $this->handleTranslations($query);
    
        if (isset($this->configuration['node_type'])) {
          $query->condition('n.type', $this->configuration['node_type']);
        }
    
        return $query;
      }
    }

Thanks!