Request: Xenforo | [EAE] Bumper 1.1.1 | NulledTeam UnderGround

See Update 2 for significant changes. Description: Bumper adds new permissions that give members the option to bump content without making a post. They can be set to allow bumping of only their own content, or others' content as well. You can also set when…


python – I can't use PyTorch with CUDA 11.1 on the GPU, using an NVIDIA GT 730; what should I do?

I used GPU-Z to get my GPU's specifications and its driver version, in this case 491.92.


Apparently everything should be fine, right?

Then I installed the GPU-accelerated library of primitives for DL, NVIDIA cuDNN, in version…

This version is (in theory) compatible with CUDA 11.0, 11.1, and 11.2.

I know that you have to choose the PyTorch build according to the CUDA version you want to use, but in this case I knew I would use PyTorch for 11.1, so I picked that version.

I also installed the CUDA Toolkit 11.1.0, which I believe in my case is the one consistent with the rest, though I'm not sure. In any case, here is the link I downloaded it from, using the exe (local) installer.

https://developer.nvidia.com/cuda-11.1.0-download-archive?target_os=Windows&target_arch=x86_64&target_version=10


Then I installed the PyTorch build for CUDA 11.1 (which is the one I wanted) with pip, simply running the following command copied from the PyTorch page:

pip install torch==1.8.1+cu111 torchvision==0.9.1+cu111 torchaudio===0.8.1 -f https://download.pytorch.org/whl/torch_stable.html


I tried PyTorch in the console by printing a tensor, and apparently it works perfectly, but of course so far this only proves that torch works on the CPU, since I didn't specify the device.

>>> import torch
>>> x = torch.rand(5, 3)
>>> print(x)
tensor([[0.1242, 0.4253, 0.9530],
        [0.2290, 0.8633, 0.2871],
        [0.3668, 0.5047, 0.7253],
        [0.9148, 0.0506, 0.3024],
        [0.3645, 0.1265, 0.1900]])
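
I understand that to really test the GPU I would have to create the tensor on the CUDA device; a minimal check along these lines (just a small test I put together, not part of my project) should surface the problem if the installed build has no kernels for this card:

import torch

# Same quick test, but forcing the tensor onto the GPU; if the installed
# build has no kernels for this card, the error should show up here.
x = torch.rand(5, 3, device="cuda")
y = x @ x.t()     # do an actual operation on the GPU
print(y)
print(y.device)   # expected: cuda:0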

Then I ran this:

import torch
print(torch.cuda.is_available())

And it returned True, from which I understood that CUDA does work (but that is not the case).
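
To check whether the installed binary actually includes kernels for my card, I understand one can compare the GPU's compute capability with the architectures the wheel was compiled for; something like this (a rough check I put together, I'm not sure it's the canonical way):

import torch

# Name and compute capability of the GPU, e.g. (3, 5) for the Kepler variant of the GT 730
print(torch.cuda.get_device_name(0))
print(torch.cuda.get_device_capability(0))

# Architectures the installed PyTorch binary ships kernels for,
# e.g. ['sm_37', 'sm_50', ...]; if the card's capability is not
# covered here, CUDA kernels cannot run on it even though
# torch.cuda.is_available() returns True.
print(torch.cuda.get_arch_list())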

I've seen people who ran into something similar, but the solutions they propose don't work for me (either because those solutions are outdated, or because I don't know how to apply them properly). They say to install PyTorch from source, or something like that…

Even so, I think the problem is PyTorch,
and the CUDA cc, which I imagine must be a compiler, but I don't know for sure. What do you think?

At the following link, they lay out an installation guide that is somewhat complicated, at least for me:


https://github.com/pytorch/pytorch#from-source

I went to that GitHub repository and downloaded the project to my PC.

I tried running that setup.py, both with the previously installed torch removed and without removing it, and it throws…

(base) C:\Users\MIPC\Desktop\MATI\Vtuber_HP\pytorch-master>python setup.py
Building wheel torch-1.9.0a0+gitUnknown
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
   or: setup.py --help [cmd1 cmd2 ...]
   or: setup.py --help-commands
   or: setup.py cmd --help

error: no commands supplied

I really don't understand what that is for…

What still leaves me in doubt is the C++ compiler that it asks for.
And as for the CUDA Toolkit 11.1 and NVIDIA cuDNN (the version for 11.1), in theory I could leave them as they are… as I showed above when I installed them, right?



In any case, since I couldn't use the GPU, I adapted my project to the CPU by modifying everything that says to_gpu or to_device, and it ran on the CPU using all three components at 11.1, but on the CPU (slow, very slow, but it ran, which rules out my project being the problem).
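
In essence, that adaptation amounts to parameterizing the device in one place, something like this (a simplified sketch with stand-in layers and tensors, not my project's actual code):

import torch
import torch.nn as nn

# Forced to the CPU for now, since the GPU kernels are not usable on my card;
# switching this single line back to "cuda" is all the change that should be needed.
device = torch.device("cpu")

net = nn.Conv2d(3, 8, kernel_size=3).to(device)  # stand-in for the real network
img = torch.rand(1, 3, 64, 64, device=device)    # stand-in for a camera frame
out = net(img)
print(out.shape, out.device)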

If I run it on the GPU, using the supposedly installed CUDA 11.1, it throws these errors, and that's where the problem is:

(base) C:\Users\MIPC\Desktop\MATI\Vtuber_HP\VtuberProject\Assets\TrackingBackend>python main.py
starting up on 127.0.0.1 port 65432
Using cache found in C:\Users\MIPC/.cache\torch\hub\intel-isl_MiDaS_master
Loading weights:  None
Using cache found in C:\Users\MIPC/.cache\torch\hub\facebookresearch_WSL-Images_master
Using cache found in C:\Users\MIPC/.cache\torch\hub\intel-isl_MiDaS_master
waiting for a connection
connection from ('127.0.0.1', 13676)
Connection closed
Traceback (most recent call last):
  File "main.py", line 39, in <module>
    pose_data = pose_estimator.get_pose_data(img.copy())
  File "C:\Users\MIPC\Desktop\MATI\Vtuber_HP\VtuberProject\Assets\TrackingBackend\utils\pose_estimator.py", line 74, in get_pose_data
    heatmaps, pafs, scale, pad = self.infer_fast(img)
  File "C:\Users\MIPC\Desktop\MATI\Vtuber_HP\VtuberProject\Assets\TrackingBackend\utils\pose_estimator.py", line 49, in infer_fast
    stages_output = self.net(tensor_img)
  File "C:\Users\MIPC\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\MIPC\Desktop\MATI\Vtuber_HP\VtuberProject\Assets\TrackingBackend\emotion_models\with_mobilenet.py", line 134, in forward
    backbone_features = self.model(x)
  File "C:\Users\MIPC\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\MIPC\anaconda3\lib\site-packages\torch\nn\modules\container.py", line 119, in forward
    input = module(input)
  File "C:\Users\MIPC\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\MIPC\anaconda3\lib\site-packages\torch\nn\modules\container.py", line 119, in forward
    input = module(input)
  File "C:\Users\MIPC\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\MIPC\anaconda3\lib\site-packages\torch\nn\modules\conv.py", line 399, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "C:\Users\MIPC\anaconda3\lib\site-packages\torch\nn\modules\conv.py", line 395, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: CUDA error: no kernel image is available for execution on the device
[ WARN:1] global C:\Users\appveyor\AppData\Local\Temp\1\pip-req-build-kh7iq4w7\opencv\modules\videoio\src\cap_msmf.cpp (434) `anonymous-namespace'::SourceReaderCB::~SourceReaderCB terminating async callback

I tried to describe everything I did as best I could, to see if you can find the error 🙁, but it still doesn't work…
I checked that the camera is the right one, and OpenCV detects it and streams video, so a problem with the webcam is ruled out.

Even so, it keeps throwing this…

RuntimeError: CUDA error: no kernel image is available for execution on the device
[ WARN:1] global C:\Users\appveyor\AppData\Local\Temp\1\pip-req-build-kh7iq4w7\opencv\modules\videoio\src\cap_msmf.cpp (434) `anonymous-namespace'::SourceReaderCB::~SourceReaderCB terminating async callback

I don't know what else to do to get PyTorch working on my PC; I really hope you can help me. As you can see, I tried to explain myself as well as possible, but I honestly don't know what else to try.

plotting – HatchFilling between two data sets in LogLog scale for Mathematica 11.1

ClearAll[addHatchFilling]
addHatchFilling[mf_: Automatic, style_: Automatic, m_: Automatic] :=
   ReplaceAll[p_Polygon :> {p, 
     First[RegionPlot[DiscretizeRegion[p], 
       MeshFunctions -> (mf /. Automatic -> {# + #2 &}), 
       MeshStyle -> (style /. Automatic -> Directive[Thick, Opacity[1], White]), 
       Mesh -> (m /. Automatic -> 40), MeshShading -> None]]}] @* Normal;

Examples:

{list1, list2} = {Transpose[{Range @ 20, Range@20}], 
   Transpose[{Range @ 20, Sqrt@Range@20}]};

llp = ListLogLogPlot[{list1, list2}, Joined -> True, 
  Filling -> {1 -> {2}}, ImageSize -> 400];

Row[{llp, addHatchFilling[]@llp}, Spacer[10]]


Row[{addHatchFilling[{# &}, 
    Directive[Opacity[1], CapForm["Butt"], AbsoluteThickness[5], Orange], 30]@llp, 
  addHatchFilling[{# &, #2 &}, Automatic, {20, 20}] @ llp}, Spacer[10]]


Row[{addHatchFilling[{#2 &}, 
    Directive[Opacity[1], CapForm["Butt"], AbsoluteThickness[2], Red], 
     {Range[0, 20, .1]}] @ llp, 
  addHatchFilling[{# + #2 &, #2 - # &}, Automatic, {20, 10}]@llp}, 
 Spacer[10]]


$Version
 "11.3.0 for Microsoft Windows (64-bit) (March 7, 2018)"

See also: Generating hatched filling using Region functionality

number formats – Why is 2s complement of 000 equal to 111, but 9s complement of 000 is not 888?

The two's complement of 000 is 000, not 111. It is formed by complementing all bits and adding 1 to the result: complementing 000 gives 111, and 111 + 1 = 1000, whose carry out of the 3-bit width is discarded, leaving 000. The one's complement of 000 is indeed 111, but one's complement is rarely used in computing.

The ten's complement of 000 is 000, not 888. It is formed by complementing all digits and adding 1 to the result: the nine's complement of 000 is indeed 999, and 999 + 1 = 1000, whose carry out of the 3-digit width is dropped, leaving 000.

I suggest thoroughly reading the Wikipedia article on Two’s complement.


What is behind two’s complement?

The goal of two's complement is to come up with a negation operation $N(x)$ so that $x - y = x + N(y)$. The idea is that if all integers have width $w$, then all computation is implicitly done modulo $2^w$, and so $x - y = x + 2^w - y$. Now $2^w - y = (2^w-1-y)+1$. The binary expansion of $2^w-1$ consists of $w$ many $1$s, and so $2^w-1-y$ is the same as complementing $y$. That's why we compute the two's complement by complementing all bits and adding $1$.

Ten's complement works in the same way: $x - y = x + 10^w - y$, and $10^w - y = (10^w-1-y)+1$. Now $10^w-1$ consists of $w$ many $9$s, and so $10^w-1-y$ corresponds to complementing all digits. Therefore ten's complement is formed by complementing all digits and adding $1$.
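
A quick numerical sanity check of the identity $x - y = x + N(y) \pmod{2^w}$, sketched in Python for 3-bit words:

W = 3                 # word width in bits
MASK = (1 << W) - 1   # 0b111

def twos_complement(y):
    # complement all bits, add 1, and reduce modulo 2**W
    return ((~y) + 1) & MASK

# x - y and x + N(y) agree modulo 2**W for every pair of 3-bit values
for x in range(2 ** W):
    for y in range(2 ** W):
        assert (x - y) & MASK == (x + twos_complement(y)) & MASK

print(format(twos_complement(0b000), "03b"))  # prints 000: the two's complement of 000 is 000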

macos – Big Sur 11.1: AppleScript to Automatically Change Wi-Fi Networks

So, like a user reported here on Stack Exchange, the built-in "networksetup" command in Terminal is pretty unreliable at times. It's slow, and I've found that, for some reason, dot1x (802.1X) never actually establishes properly for some types of Wi-Fi networks in my home. The solution: create an AppleScript that simulates mouse clicks on the menu bar to change between Wi-Fi networks.

Why is networksetup so slow compared to manually changing Wi-Fi networks?

The script below worked fine for me until Big Sur:

use application "System Events"

property process : a reference to application process "SystemUIServer"
property menu bar : a reference to menu bar 1 of my process
property menu bar item : a reference to (menu bar items of my menu bar ¬
    where the description contains "Wi-Fi")
property menu : a reference to menu 1 of my menu bar item
property menu item : a reference to menu items of my menu


to joinNetwork given name:ssid as text
    local ssid
    
    if not (my menu bar item exists) then return false
    click my menu bar item
    
    repeat until my menu exists
        delay 0.5
    end repeat
    
    set M to a reference to (my menu item where the name contains ssid)
    
    repeat 20 times --> 10 seconds @ 0.5s delay
        if M exists then exit repeat
        delay 0.5
    end repeat
    click M
end joinNetwork

joinNetwork given name:"my network ssid"

The reason it broke is that Wi-Fi is no longer technically a direct option in the main menu bar. Instead, it's relegated to Control Center in Big Sur, and I think it may even be nested inside another sub-module within the UI. I've been reading for hours about people trying to overcome this challenge in Big Sur (for example, to automate a click on a specific Bluetooth device), but many of the AppleScripts people wrote apparently broke in the 11.1 update, and I have no easy starting point for figuring out how to accomplish what I'm trying to do for Wi-Fi.

Any help here would be tremendously appreciated.

Side note: I know the same user also posted a method using AppleScriptObjC, but as people pointed out, it’s a huge security risk because you need to put your password somewhere as plaintext. The UI script is therefore the better option in my mind, so I’d like to get it to work again.

keyboard – macOS Big Sur (11.1): How to summon "Unlocking with Apple Watch…" when the Mac is locked but not waking from sleep?

When I wake my Mac from sleep, I'm always able to unlock it with my paired Watch first. If that is unsuccessful for any reason, a prompt appears, but in most situations everything works smoothly and I unlock without typing my password.



When I try to unlock my Mac just from the lock screen (not from sleep), I always get a prompt.

The only way for me to unlock and log in without typing a password is to hit the Esc key on the lock screen to put it to sleep, hit any key to wake it up, and then I see "Unlocking with Apple Watch…".

Is there any keyboard shortcut to trigger unlocking my Mac with the Watch on demand from the lock screen?


hdmi – macOS Big Sur 11.1 Display Flickering

Since I updated my Mac to the newest version, macOS Big Sur 11.1, my external monitor, an HP Z27n G2, has been flickering heavily at 2560 x 1440 resolution.

Now I have changed the resolution to 2048 x 1152 and it seems to be okay.

I would actually like to use the higher resolution, because the monitor supports it and it wasn't a problem before the update.

Does anyone have the same issue, and is there a better solution than going down in resolution?

Thanks in Advance

An inequality about a unit vector orthogonal to $(1,1,\dots,1)$

Does there exist a constant $\alpha>0$ such that the following holds?
$$\liminf_{n\to\infty}\inf_{x\in\mathbb{R}^n,\ \sum_{i=1}^n x_i^2=1,\ \sum_{i=1}^n x_i=0}\frac{\sum_{i<j,\ |i-j|\leq\frac{n}{4}}(x_i-x_j)^2}{n}\geq\alpha$$

This is related to the Laplacian of a special graph with adjacency matrix $A_{ij}=1$ when $|i-j|\leq n/4$ and $0$ otherwise. This conjecture essentially says that the second smallest eigenvalue of the unnormalized Laplacian will grow linearly in $n$. This is definitely true if the graph is fully connected.
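
For context, writing $L_n$ for the unnormalized Laplacian of this graph on $n$ vertices, the numerator is exactly the Laplacian quadratic form, and by the Courant-Fischer theorem the constrained infimum is the second smallest eigenvalue:
$$\sum_{i<j,\ |i-j|\leq\frac{n}{4}}(x_i-x_j)^2 = x^{\top}L_n x,\qquad \lambda_2(L_n)=\min_{\|x\|_2=1,\ \mathbf{1}^{\top}x=0} x^{\top}L_n x,$$
so the conjecture is equivalent to asking whether $\liminf_{n\to\infty}\lambda_2(L_n)/n\geq\alpha$ for some $\alpha>0$.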
This is related to the Laplacian of a special graph with adjacency matrix $A_{ij}=1$ when $|i-j|leq n/4$ and 0 otherwise. This conjecture essentially says that the second smallest eigenvalue of the unnormalized Laplacian will grow linearly in $n$. This is definitely true if the graph is fully connected.