macOS 11: Get system information (GPU, resolution) programmatically (in C)

I need to access data about the GPU and screen resolution from C, without using system_profiler, because it takes too long (0.3-0.5 s for `system_profiler SPDisplaysDataType`) and its output needs grepping and cutting, which is not fast either.

Caching would be an answer, but a very short-sighted one, since someone can plug in different monitors, etc.

Unfortunately system_profiler is closed source.

motherboard – Does it make sense to plug monitors into integrated graphics to “save” the GPU for other work?

I have a GPU for deep learning purposes, and I have 3 monitors plugged into it. I also have 2 empty display ports on my motherboard. When I run a deep learning task my GPU shows as running at 100%, and small apps that rely on the GPU suffer performance issues (e.g. Xournal++ for writing notes with a stylus pad).

If I plug two of my monitors into the motherboard:

  • Would that result in more capacity for the GPU to handle my deep learning tasks?
  • Would it mean that if I use my small GPU-dependent apps on the motherboard monitors, there wouldn’t be performance issues?

what would happen if you connect a desktop GPU to a laptop’s HDMI port? – Super User

hash – Since GPUs have gigabytes of memory, does Argon2id need to use gigabytes of memory as well in order to effectively thwart GPU cracking?

The common advice of benchmarking a password hashing algorithm and choosing the slowest acceptable cost factor doesn’t work for algorithms with more than one parameter: adding a lot of iterations at the expense of memory hardness makes the benchmark time go up, but if the attacker can still use off-the-shelf hardware (like a regular gaming graphics card), then I might as well have used PBKDF2 instead of Argon2id. There must be some minimum amount of memory that makes Argon2id safer than a well-configured algorithm from the 90s; otherwise we could have saved ourselves the effort of developing it.

I would run benchmarks and see for myself at what point hashing is no longer faster on a GPU than on a CPU, but Hashcat lacks Argon2 support altogether. (And whatever software I choose isn’t necessarily the fastest possible implementation, so it wouldn’t give me a definitive answer anyway.)

Even my old graphics card from 2008 had 1 GB of video memory, and 2020 cards commonly seem to have 4-8 GB. Nevertheless, the default Argon2id setting in PHP is to use 64 MiB of memory.

If you set the parallelism of Argon2id to your number of CPU cores (say, 4) and use this default memory limit, is either of the following (roughly) true?

  • A GPU with 8192 MB of memory can still use 8192/64 = 128 of its cores, getting a 32× speed boost over your 4-core CPU. The slowdown on GPUs from increased memory requirements scales linearly. The only way to thwart GPU cracking effectively is to make Argon2id use more RAM than a high-end GPU’s memory divided by the parallelism you configure in Argon2id (i.e. in this example with 4 cores, Argon2id needs to be set to use 8192/4 = 2048 MB).


  • This amount, 64 MiB, already makes a common consumer GPU completely ineffective for cracking, because it is simply too big to use efficiently from a single GPU core (because if the core has to access the card’s main memory then it’s not worth it, or something along those lines).
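One way to get a feel for how the memory parameter (rather than iteration count) drives the cost is to time a memory-hard KDF at different memory settings. Python’s standard library has no Argon2, so this sketch uses scrypt purely as a stand-in (it is also memory-hard; its memory use is roughly 128 · r · n bytes). The password and salt are made-up placeholders.

```python
import hashlib
import time

def bench_scrypt(n, r=8, p=1):
    """Time one scrypt evaluation; memory use is roughly 128 * r * n bytes."""
    start = time.perf_counter()
    digest = hashlib.scrypt(b"hunter2", salt=b"not-a-real-salt",
                            n=n, r=r, p=p,
                            maxmem=256 * 1024 * 1024, dklen=32)
    return time.perf_counter() - start, digest

t_16mib, _ = bench_scrypt(2**14)   # ~16 MiB working set
t_128mib, _ = bench_scrypt(2**17)  # ~128 MiB working set
print(f"16 MiB: {t_16mib:.3f}s   128 MiB: {t_128mib:.3f}s")
```

On typical hardware the 128 MiB run takes several times longer than the 16 MiB run, and that extra memory footprint is exactly what each parallel guess on a GPU has to pay for.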

algorithms – How to program a GPU power or hashrate tester?

FYI, I am not a programmer.

Think of those Internet speed-testing websites: you press the “Test” button and it runs diagnostics to determine your bandwidth.

Now I am looking to program or build something similar in concept: a site where you can test your rig’s or GPU’s hash-rate power for mining (H/s).

Any pointers on which languages could accomplish this and where to begin? I was considering Python due to its use in ML, but I have no idea where to even start or what to research to build this.

All help and tips appreciated!
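At its core, a hash-rate test is just a timed loop: hash as fast as possible for a fixed window, then divide the count by the elapsed time. Here is a minimal CPU-only sketch in Python (a real mining benchmark would run the same idea inside an OpenCL or CUDA kernel on the GPU, and use the coin’s actual hash function):

```python
import hashlib
import time

def hashrate(duration=1.0):
    """Return SHA-256 hashes per second measured over `duration` seconds."""
    count = 0
    data = b"benchmark-nonce"
    deadline = time.perf_counter() + duration
    while time.perf_counter() < deadline:
        data = hashlib.sha256(data).digest()  # feed each digest back in as input
        count += 1
    return count / duration

print(f"{hashrate():,.0f} H/s")
```

For a website version, this measurement would run client-side (or on the user’s rig via a downloadable agent) and report the number back, since the server cannot measure the visitor’s hardware directly.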

hardware – Is it possible to install a GPU in an HPE ProLiant DL180 G6 server?

I have an HPE ProLiant DL180 G6 server where I would like to install an old Radeon HD 7770 GPU.

Here’s a picture of the server taken from above:

The server came with two PCIe risers; one is used for an HBA, and the other provides two PCIe slots (I assume both are x8; one of them is used by an NVMe-to-PCIe adapter).

The GPU would then be installed in the free slot on the second PCIe riser, in the lower right corner of the chassis.

Here are a couple of pictures of the location where the GPU would be installed:


I have the following questions:

  1. The GPU must be powered with a 6-pin connector, but the only spare
    cable that comes from the PSU is a CD-drive power cable.

    Is there an adapter that I can buy to power the GPU?

    If not, since the server allows for two PSUs (and currently only one
    is installed), can I buy a second PSU to power only the GPU?

  2. The second PCIe riser is surrounded by a metal cage, which would
    prevent me from installing the GPU. Would it be possible, or dangerous,
    to remove it completely? It seems to serve as a physical support for the PCIe risers.

  3. If I’m not mistaken, the GPU would be installed in a PCIe 2.0 x8 slot. Would performance
    suffer a lot from this, especially if in the future I replaced it with a modern GPU?

python – I can’t use PyTorch (CUDA 11.1) with the GPU on an NVIDIA GT 730; what should I do?

I used GPU-Z to get my GPU’s specifications and its driver version, in this case 491.92.

Apparently everything should be fine, right?

Then I installed the GPU-accelerated library of primitives for DL, NVIDIA cuDNN, in version…

That version is compatible (in theory) with CUDA 11.0, 11.1 and 11.2.

I know you have to choose PyTorch according to the CUDA version you want to install, but in this case I knew I would use PyTorch for 11.1, so I chose that version.

I also installed CUDA Toolkit 11.1.0, which I believe is the one consistent with the rest, though I have doubts. In any case, here is the link where I downloaded it with the exe (local) installer.

Then I installed PyTorch for CUDA 11.1 (the version I wanted) from pip, simply running the following command copied from the page:

pip install torch==1.8.1+cu111 torchvision==0.9.1+cu111 torchaudio===0.8.1 -f

I tested PyTorch in the console by printing a tensor, and apparently it works perfectly, but of course so far that only proves torch works with the CPU, since I didn’t specify a device.

>>> import torch
>>> x = torch.rand(5, 3)
>>> print(x)
tensor([[0.1242, 0.4253, 0.9530],
        [0.2290, 0.8633, 0.2871],
        [0.3668, 0.5047, 0.7253],
        [0.9148, 0.0506, 0.3024],
        [0.3645, 0.1265, 0.1900]])

Then I ran this:

import torch
torch.cuda.is_available()

And it returned True, from which I understood that CUDA does work (but that is not the case).

I’ve seen people who ran into something similar, but their solutions don’t work for me (either because the solutions are outdated, or I don’t know how to apply them properly). They say to install PyTorch from source, or something like that…

Even so, I believe the problem is PyTorch.
And the “cuda cc”: I imagine it must be a compiler, but I’m not sure about that. What do you think?

The following link proposes an installation guide that is “somewhat complicated, at least for me”.


I went to that GitHub repository and downloaded the project to my PC.

I tried running it both with the previous torch removed and without removing it, and it throws…

(base) C:\Users\MIPC\Desktop\MATI\Vtuber_HP\pytorch-master>python
Building wheel torch-1.9.0a0+gitUnknown
usage: (global_opts) cmd1 (cmd1_opts) (cmd2 (cmd2_opts) ...)
   or: --help (cmd1 cmd2 ...)
   or: --help-commands
   or: cmd --help

error: no commands supplied

I really don’t understand what that is for…

What still leaves me in doubt is that C++ compiler it asks for.
And as for CUDA Toolkit 11.1 and NVIDIA cuDNN (for 11.1), in theory I could leave them as they are… as I showed I installed them above, right?


In any case, since I couldn’t use the GPU, I adapted my project to the CPU by modifying everything that said to_gpu or to_device, and it ran on the CPU using all three components at 11.1, but as CPU (slow, very slow, but it ran, so I can rule out my project being the problem).

If I run it with the GPU, using the supposedly installed CUDA 11.1, it throws these errors, and that’s the problem:

(base) C:\Users\MIPC\Desktop\MATI\Vtuber_HP\VtuberProject\Assets\TrackingBackend>python
starting up on port 65432
Using cache found in C:\Users\MIPC/.cache\torch\hub\intel-isl_MiDaS_master
Loading weights:  None
Using cache found in C:\Users\MIPC/.cache\torch\hub\facebookresearch_WSL-Images_master
Using cache found in C:\Users\MIPC/.cache\torch\hub\intel-isl_MiDaS_master
waiting for a connection
connection from ('', 13676)
Connection closed
Traceback (most recent call last):
  File "", line 39, in <module>
    pose_data = pose_estimator.get_pose_data(img.copy())
  File "", line 74, in get_pose_data
    heatmaps, pafs, scale, pad = self.infer_fast(img)
  File "", line 49, in infer_fast
    stages_output =
  File "", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "", line 134, in forward
    backbone_features = self.model(x)
  File "", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "", line 119, in forward
    input = module(input)
  File "", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "", line 119, in forward
    input = module(input)
  File "", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "", line 399, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "", line 395, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: CUDA error: no kernel image is available for execution on the device
( WARN:1) global C:\Users\appveyor\AppData\Local\Temp\1\pip-req-build-kh7iq4w7\opencv\modules\videoio\src\cap_msmf.cpp (434) `anonymous-namespace'::SourceReaderCB::~SourceReaderCB terminating async callback

I tried to describe everything I did as best I could, to see if you can find the error 🙁 , but it still doesn’t work…
I checked that the camera is correct and OpenCV detects it and streams video, so a problem with the webcam is ruled out.

Even so, it keeps throwing this…

RuntimeError: CUDA error: no kernel image is available for execution on the device
( WARN:1) global C:\Users\appveyor\AppData\Local\Temp\1\pip-req-build-kh7iq4w7\opencv\modules\videoio\src\cap_msmf.cpp (434) `anonymous-namespace'::SourceReaderCB::~SourceReaderCB terminating async callback

I no longer know what else to do to get PyTorch working on my PC; I really hope you can help me. As you can see, I tried to explain myself as well as possible, but I honestly don’t know what else to try.
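The “no kernel image is available” error typically means the installed wheel does not ship compiled kernels for the card’s compute capability (the GT 730 is an old Kepler/Fermi-era part). Assuming the cu111 build is installed, one quick diagnostic is to compare the device’s capability against the architectures the wheel was built for:

```python
import torch

print(torch.__version__)          # e.g. 1.8.1+cu111
print(torch.cuda.is_available())  # True only means the driver/runtime is usable
if torch.cuda.is_available():
    # Compute capability of device 0, e.g. (3, 5) for a Kepler GT 730
    print(torch.cuda.get_device_capability(0))
    # Architectures this wheel ships kernels for, e.g. ['sm_37', 'sm_50', ...]
    print(torch.cuda.get_arch_list())
```

If the capability tuple is not covered by the arch list, no pip wheel of that release will work on the card; the remaining options are an older PyTorch build that still targets that architecture, or a from-source build with the target architecture enabled.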

GPU disable on boot was successful for months – but cannot execute sh / anymore from Single User Mode

I am new to this platform and hope you can help me out with this one.
I had the common GPU issue on a 2011 MBP that I was able to solve with the super-detailed post by the user “LаngLаngС” (thanks so much!) found here:
GPU problem – Boot Hangs on Grey Screen

Unfortunately a new problem occurred that prevents me from running the script. I’ll explain:

To solve the GPU issue I did everything as written in LangLangC’s post, and I kept the executable sh script on my desktop – so when I had to boot the Mac again, I did it in Single User Mode with CMD + S, then typed “sh /” and then reboot. This always booted my machine normally with the internal GPU.

So far so good – until yesterday. When trying to boot again from Single User Mode, I got a new prompt, “sh 3.2#”, and the lines looked different. It says the volume is “read only”. When I try to execute “sh /”, it says “file or directory not found”. I can’t run “sudo” commands, as it says “command not found”. When I try to remount with “mount -uw”, it doesn’t work – it always says “device is write locked”.

When I type exit or reboot, the system actually seems to boot “normally” – if only it weren’t for the GPU issue! Meaning that if I cannot run the script “” from Single User Mode, I cannot boot the macOS GUI properly. Unfortunately I also cannot execute the sudo commands. I also tried booting the Mac in Target Disk Mode and “repairing” the drive with Disk Utility on another Mac, but that never worked either.

I googled a lot and tried a lot, but nothing has worked yet. Since I am not very familiar with these kinds of issues, I was hoping to find an answer here, where I found the first answers to my GPU problem.

Basically my “new problem” is that I cannot run the automated script that I saved on the desktop, or the sudo commands, from Single User Mode.

Hope you can help me! Many thanks in advance.

