dnd 5e – Can the Battle Master fighter’s Precision Attack maneuver be used on a melee spell attack?

No, it can’t be used on a spell attack

Attacks are broken down into weapon attacks and spell attacks.

Each of these can be either a melee attack or a ranged attack. So any attack must be one of the following:

  • Melee Weapon Attack
  • Ranged Weapon Attack
  • Melee Spell Attack
  • Ranged Spell Attack

Except for a few rare cases, if you’re casting a spell that gives you an attack (as in the case of thorn whip), you’ll be making one of the latter two. Some non-spell abilities grant spell attacks, too.

In every case, the spell or ability will tell you which of the options above you’re making. In the case of thorn whip, the relevant part of the spell description says:

Make a melee spell attack against the target.

So this means you’re making a spell attack with thorn whip, not a weapon attack. This is because the caster is not physically whipping a vine at an enemy. Rather, the caster is causing vines to magically spring from the ground at the target.

The Battle Master fighter’s Precision Attack maneuver states (PHB, p. 74):

When you make a weapon attack roll against a creature, (…)

This means that it only works with attacks of the first two kinds, where the character is making a conventional attack with some sort of weapon held in hand. Notably, it can be a melee or ranged attack (Precision Attack doesn’t care which); it only matters that it’s a weapon attack, and not a spell attack.

Because thorn whip involves the character making a spell attack, that character can’t use Precision Attack on that attack.

Would Precision Attack apply for a melee spell attack?

I’ve searched here and in various reference books but can’t find an answer to this specific question. The description of the Battle Master’s Precision Attack maneuver states it can be used for any weapon attack. Thorn whip, for example, uses magic to create a weapon… that is not itself magical. So would that "weapon" attack count as usable with Precision Attack, even though the attack comes from a spell?

Plotting a small gaussian | Small values and dealing with Machine Precision

I’ve defined the following:

k := 1.38*10^-16
kev := 6.242*10^8
q := 4.8*10^-10
g := 1.66*10^-24
h := 6.63*10^-27

and

b = ((2^(3/2)) (Pi^2)*1*6*(q^2)*(((1*g*12*g)/(1*g + 12*g))^(1/2)))/h

T6 := 20
T := T6*10^6
e0 := ((b k T6 *10^6)/2)^(2/3)

\[CapitalDelta] := 4/Sqrt[3] (e0 k T6*10^6)^(1/2)

\[CapitalDelta]kev = \[CapitalDelta]*kev
e0kev = e0*kev
bkev = b*kev^(1/2)

Then, I want to plot these functions:

fexp1[x_] := E^(-bkev*(x*kev)^(-1/2))
fexp2[x_] := E^(-x/(k*T))
fexp3[x_] := fexp1[x]*fexp2[x]

and check that this Taylor expansion works:

fgauss[x_] :=
 Exp[-3 (bkev^2/(4 k T*kev))^(1/3)]*
  Exp[-((x*kev - e0kev)^2/(\[CapitalDelta]kev/2)^2)]

which should reproduce the expected curve:

Figure 10.1

This plot comes from the “Stellar Astrophysics notes” of Edward Brown (it is also a well-known approximation).

I used this command to plot:

Plot[{fexp1[x], fexp2[x], fexp3[x], fgauss[x]}, {x, 0, 50},
 PlotStyle -> {{Blue, Dashed}, {Dashed, Green}, {Thick, Red}, {Thick,
    Black, Dashed}}, PlotRange -> Automatic, PlotTheme -> "Detailed",
 GridLines -> {{{-1, Blue}, 0, 1}, {-1, 0, 1}},
 AxesLabel -> {Automatic}, Frame -> True,
 FrameLabel -> {Style["Energy E", FontSize -> 25, Black],
   Style["f(E)", FontSize -> 25, Black]}, ImageSize -> Large,
 PlotLegends ->
  Placed[LineLegend[{"", "", "", ""}, Background -> Directive[White, Opacity[.9]],
    LabelStyle -> {15}, LegendLayout -> {"Column", 1}], {0.35, 0.75}]]

but it seems that Mathematica doesn’t like huge negative exponentials. I know I could compute this in Python, but it’s surprising to think that Mathematica can’t deal with the problem somehow. Could you help me?
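For reference, the failure mode here is machine underflow: at machine precision, E^-x flushes to zero once the exponent is a few hundred, so products like fexp1[x]*fexp2[x] lose all structure (in Mathematica the usual cure is exact or arbitrary-precision input together with a WorkingPrecision setting). A minimal Python sketch of the same underflow, using only the stdlib decimal module for arbitrary precision:

```python
import math
from decimal import Decimal, getcontext

# Machine doubles underflow: exp(-1000) ~ 5e-435 is far below the smallest
# positive double (~5e-324), so it is flushed to exactly 0.0.
machine = math.exp(-1000.0)
print(machine)  # 0.0

# Arbitrary-precision arithmetic keeps the tiny value instead of flushing it.
getcontext().prec = 50
tiny = Decimal(-1000).exp()
print(tiny)  # a nonzero value around 1e-435
```

The same idea carries over to Mathematica: keep the inputs exact (rationals, Pi) and let the plotting or evaluation step work at elevated precision rather than at MachinePrecision.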

App Windows – Siemens Star CCM+ 2020.3.0 (15.06.007 single precision) | NulledTeam UnderGround

Siemens Star CCM+ 2020.3.0 (15.06.007 single precision) | 3.4 Gb
The Siemens Digital Industries Software development team is pleased to announce the availability of Simcenter STAR-CCM+ 2020.3.0. This release provides increased simulation realism, enabling you to accurately model the complexity of today’s products and deliver real innovation.

What’s New in Simcenter STAR-CCM+ 2020.3
Find optimal designs using topology optimization
Adjoint-based flow topology optimization is an exciting new way to quickly find ideal designs. With an inbuilt constrained optimization method and an end-to-end workflow, you can now leverage this cutting-edge technology. This allows you to generate highly efficient, and often non-intuitive solutions to your design challenges.
Improve accuracy of process simulations coupling CFD and gPROMS
Improving chemical processes like spray-drying or mixing requires accurate modeling of complex physics and chemistry. gPROMS is a powerful modeling environment to simulate such processes. You can now easily use CFD simulations from Simcenter STAR-CCM+ 2020.3 to calibrate a multizonal gPROMS model. This improves modeling accuracy and eliminates uncertainty.
Reduce computational cost for hybrid multiphase modeling
With version 2020.3 we’ve introduced an exciting new hybrid multiphase capability. This enables you to more efficiently simulate liquid flows in applications where flow breakup is present, such as vehicle water management or spray cooling. The VOF to Lagrangian transition model resolves the formation of droplets using a Volume of Fluid approach. To track these droplets, the model automatically switches to a Lagrangian method, significantly reducing computational expense and simulation turnaround time.
Design better engine combustion systems with liquid film modeling
High fuel loading in Internal Combustion Engines may cause undesired liquid films to form on surfaces. With version 2020.3 you can now model this phenomenon with Simcenter STAR-CCM+ In-Cylinder Solution. This means better understanding of the performance impacts of high fuel loading, including emissions and pool fires, and leads to better combustion system design.
Speed up simulations of free surface flows with droplets or bubbles
Simcenter STAR-CCM+ 2020.3 also introduces the mixture multiphase large-scale interface model: a new approach to simulate free-surface flows with droplets or bubbles for applications like gear lubrication. This approach removes the need to resolve all relevant scales in the mesh, bringing faster and more accurate answers.
Improve accuracy of unsteady RANS simulations at no additional cost
Finally, the new scale-resolving hybrid turbulence model helps capture more small-scale turbulent structures in an unsteady RANS simulation. This means, for example, increased fidelity of your transient aerodynamics simulations without additional computational cost.

Simcenter STAR-CCM+ is a complete multiphysics solution for the simulation of products and designs operating under real-world conditions. Uniquely, Simcenter STAR-CCM+ brings automated design exploration and optimization to the simulation toolkit of every engineer, allowing you to efficiently explore the entire design space instead of focusing on single point design scenarios.
The additional insight gained by using Simcenter STAR-CCM+ to guide your design process ultimately leads to more innovative products that exceed customer expectations.
Simcenter STAR-CCM+ 2020.3 makes it even easier to solve your complex design challenges and engineer real innovation into your products.

Siemens PLM Software, a business unit of the Siemens Digital Factory Division, is a leading global provider of software solutions to drive the digital transformation of industry, creating new opportunities for manufacturers to realize innovation. With headquarters in Plano, Texas, and over 140,000 customers worldwide, Siemens PLM Software works with companies of all sizes to transform the way ideas come to life, the way products are realized, and the way products and assets in operation are used and understood.

Product: Siemens Simcenter Star CCM+
Version: 2020.3.0 Build 15.06.007
Supported Architectures: x64
Website Home Page : http://mdx.plm.automation.siemens.com/
Language: multilanguage
System Requirements: PC *
Size: 3.4 Gb

Certified Platforms for Simcenter STAR-CCM+ 2020.3 Windows
Windows 10 October 2018 Update
Windows 10 May 2019 Update
Windows 10, Version 1909 November 2019 Update
Windows Server 2012 R2 Standard
Windows Server 2012 R2 HPC Pack
Windows Server 2016
Windows Server 2019
These are the recommended hardware requirements to run Simcenter STAR-CCM+ and the CAD Clients. You can improve performance by using better specifications than those listed.
– Processor: 2.4 GHz CPU with at least 4 cores per CPU (to allow client and server to work in separate spaces and to run in parallel)
– Memory: 4 GB of memory per core.
– Disk Space: 9 GB of free disk space (more space is required to save simulations).
– Graphics Card: Dedicated graphics hardware that has 3D capability, z-buffer and translucency. Minimum screen resolution of 1024×768 pixels is recommended.


numerics – Wish to compute ln(x) with millions of digits of precision as fast as possible

Computing $\ln(10)$ to 6 million digits of precision on my 2.5 GHz machine running Mathematica 12.1 takes about 23 seconds using the methods below. I wish to compute $\ln(x)$ with much higher precision. Is this the fastest I can compute $\ln(10)$ with 6 million digits on my machine?

  1. Use the built-in Log[x] function,
  2. Invert the exponential expression $e^y=x$, which results in a Newton iteration of the form:
    $$
    y_{n+1}=y_n+2\frac{x-e^{y_n}}{x+e^{y_n}}
    $$
  3. Use the arithmetic-geometric mean expression:
    $$
    \log(x)\approx \frac{\pi}{2M(1,4/s)}-m\log(2),\quad s=x\,2^m>2^{p/2}
    $$

    for $p$ bits of precision.
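As a sanity check on method 3 (not the million-digit computation itself), the AGM formula can be sketched at machine precision in Python; `agm` and `agm_log` are my own illustrative names, and the constants π and log 2 are assumed known (taken from the math module):

```python
import math

def agm(a, b, iters=40):
    """Arithmetic-geometric mean M(a, b); convergence is quadratic,
    so a fixed iteration count is plenty at double precision."""
    for _ in range(iters):
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return (a + b) / 2.0

def agm_log(x, p=53):
    """log(x) via log(x) ~ pi/(2 M(1, 4/s)) - m log 2, with s = x*2^m > 2^(p/2)."""
    m = 0
    s = x
    while s <= 2.0 ** (p / 2 + 1):  # push s safely past 2^(p/2)
        m += 1
        s = x * 2.0 ** m
    return math.pi / (2.0 * agm(1.0, 4.0 / s)) - m * math.log(2)

print(agm_log(10.0))  # close to 2.302585...
```

The same structure is what the Mathematica code below implements with SetPrecision and ArithmeticGeometricMean at 6 million digits.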

Unfortunately, these all take about the same amount of time. The code below is for 6 million digits and the best time is about 23 seconds:

totalD = 6000000;
(*
 set up A-G mean parameters
*)
myPi = SetPrecision[Pi, totalD];
myLog2 = SetPrecision[Log[2], totalD];
pFun[x_] := Ceiling[x Log[10]/Log[2]];
mFun[p_] := Ceiling[1/Log[2] (p/2 Log[2] - Log[10])];
sFun[m_] := 10 2^m;
(*
  check built-in Log:
*)
AbsoluteTiming[
 actVal = SetPrecision[Log[10], totalD];
 ]
(*
 check arithmetic-geometric mean approach
*)
AbsoluteTiming[
 pVal = pFun[totalD];
 mVal = mFun[pVal];
 sVal = sFun[mVal];
 denom = SetPrecision[ArithmeticGeometricMean[1, 4/sVal], totalD];
 myVal = SetPrecision[myPi/(2 denom) - mVal myLog2, totalD];
 ]
(*
  check just the quotient expression of the inverted exp expression
*)
y0 = 23/10;
AbsoluteTiming[
 SetPrecision[(10 - Exp[y0])/(10 + Exp[y0]), totalD];
 ]

{23.911, Null}

{22.7023, Null}

{50.6573, Null}

machine precision – Avoid MachinePrecision numbers in Message

Can I change the output of Message to not show MachinePrecision numbers? E.g.

Bleh::test = "test `1`";

This is tolerable:

Message[Bleh::test, 0.01]
(* Bleh::test -- test 0.01` *)

This is not:

xx = 0.01;
Do[xx += 0.04, {5}]
Message[Bleh::test, xx]
(* Bleh::test -- test 0.21000000000000002` *)

Is there a way for Message to display this rounded off as:

xx
(* 0.21 *)
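The trailing digits are an artifact of binary floating point rather than of Message itself: 0.04 is not exactly representable as a double, so repeated addition drifts by an ulp. The same effect, and a rounding-for-display workaround, sketched in Python (the 2-significant-digit format is just an illustration):

```python
xx = 0.01
for _ in range(5):
    xx += 0.04

# the full repr exposes the accumulated binary rounding error
print(repr(xx))     # e.g. 0.21000000000000002

# round for display only; the stored value is unchanged
print(f"{xx:.2g}")  # 0.21
```

The Mathematica analogue would be formatting the argument before passing it to Message, since the stored machine number genuinely is not 0.21.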

numerical integration – Improvement of code precision using NDSolve for Differential-Algebraic equation

I’m trying to solve a system of 24 non-linear differential-algebraic equations (DAEs) with NDSolve in Mathematica, but the error is too large. To improve the precision, I have been trying different methods in the NDSolve command, but Mathematica is unable to solve the system. I’m getting the error:

NDSolve::nodae: The method NDSolve`FixedStep is not currently implemented to solve differential-algebraic equations. Use Method -> Automatic instead.

I want to use the implicit Runge-Kutta method or the projection method to improve my results.

If I use these methods on a system of ODEs with NDSolve, Mathematica is able to give output.

Just as an example to test the code, I’m posting here some short example:

NDSolve[{x'[t] == -y[t], y'[t] == x[t], x[0] == 0.1, y[0] == 0}, {x,
  y}, {t, 0, 100},
 Method -> {"FixedStep",
   Method -> {"ImplicitRungeKutta", "DifferenceOrder" -> 10,
     "ImplicitSolver" -> {"Newton", AccuracyGoal -> MachinePrecision,
       PrecisionGoal -> MachinePrecision,
       "IterationSafetyFactor" -> 1}}}, StartingStepSize -> 1/10]

I’m able to obtain output for the system above using the implicit Runge-Kutta method, but with a DAE system I’m not, for example:

NDSolve[{x'[t] - y[t] == Sin[t], x[t] + y[t] == 1, x[0] == 0}, {x,
  y}, {t, 0, 10},
 Method -> {"FixedStep",
   Method -> {"ImplicitRungeKutta", "DifferenceOrder" -> 10,
     "ImplicitSolver" -> {"Newton", AccuracyGoal -> 15,
       PrecisionGoal -> 50, "IterationSafetyFactor" -> 1}}},
 StartingStepSize -> 1/10]

Can anyone help me, please: how can I solve such a DAE system with the NDSolve command using an implicit method, like implicit Runge-Kutta?

Should I convert this DAE system into ODEs? If yes, how can I convert such a system into a system of ordinary differential equations?
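For the small example above, one conversion is direct substitution: the algebraic constraint x + y = 1 gives y = 1 − x, so the DAE reduces to the single ODE x' = sin(t) + 1 − x (this is the simplest case of index reduction; for a 24-equation system you would solve or differentiate the algebraic relations to eliminate the algebraic variables). A self-contained Python sketch, assuming a classical fixed-step RK4 integrator just for illustration, checked against the closed-form solution x(t) = 1 + (sin t − cos t)/2 − e^(−t)/2:

```python
import math

def f(t, x):
    # reduced ODE after substituting y = 1 - x into x'(t) - y(t) == Sin(t)
    return math.sin(t) + 1.0 - x

def rk4(f, x0, t0, t1, n):
    """Classical 4th-order Runge-Kutta with n fixed steps."""
    h = (t1 - t0) / n
    t, x = t0, x0
    for _ in range(n):
        k1 = f(t, x)
        k2 = f(t + h / 2, x + h * k1 / 2)
        k3 = f(t + h / 2, x + h * k2 / 2)
        k4 = f(t + h, x + h * k3)
        x += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return x

x10 = rk4(f, 0.0, 0.0, 10.0, 1000)
y10 = 1.0 - x10  # recover the algebraic variable from the constraint

# closed-form solution of the reduced ODE with x(0) = 0
exact = 1.0 + (math.sin(10.0) - math.cos(10.0)) / 2.0 - math.exp(-10.0) / 2.0
print(x10, exact)
```

When the algebraic equations cannot be solved explicitly like this, NDSolve's default (Automatic) DAE machinery, rather than "FixedStep", is the supported route.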

numerical value – Export high precision data to HDF5

Recently, I have wanted to use HDF5 instead of CSV because of how long Import takes.
In particular, I have to manage large data sets with high precision (about 50 digits), and exporting such data to CSV takes a very long time. Therefore, HDF5 seems attractive to me.

However, when I Import the HDF5 file, I only get data with MachinePrecision.
On the other hand, CSV returns data with the precision I exported.

Here is a simple sample:

In[1]:= data = N[Pi, 50];
In[2]:= data // Precision
Out[2]= 50.

Then,

In[3]:= Export["testing_precision_HDF5.h5", {data}];
In[4]:= Export["testing_precision_csv.csv", data];

Finally I do Import and evaluate their Precision;

In[5]:= Import["testing_precision_HDF5.h5", {"Datasets",
    "/Dataset1"}][[1]] // Precision
Out[5]= MachinePrecision

On the other hand,

In[6]:= Import["testing_precision_csv.csv"] // Precision
Out[6]= 49.4971

So, where does this difference come from? Is the reason HDF5 is quick to Import that it cannot handle high-precision data?

Please tell me how to manage high precision data with HDF5 & Mathematica.

Update: I don’t think it is a problem with Import, because I checked the .h5 file with HDFView and saw the same result.
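The behavior is consistent with the dataset being written as an IEEE 754 double (HDF5's standard numeric type), which carries only about 15-17 significant decimal digits; the 50-digit value is truncated at export time, which is also why HDFView shows the same thing. A stdlib-only Python sketch of that round-trip loss (no h5py needed to make the point):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50
pi50 = Decimal("3.1415926535897932384626433832795028841971693993751")

# Force the value through a 64-bit double, as an HDF5 float dataset would:
as_double = float(pi50)
roundtrip = Decimal(as_double)

print(as_double)          # only ~16 significant digits survive
print(roundtrip - pi50)   # nonzero: the remaining digits are gone
```

A common workaround is to store the digits as strings (or as pairs of doubles) in the HDF5 file and reconstruct the high-precision numbers after Import.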

Bootcamp precision trackpad drivers, Windows thinks I have a precision touchpad, gesture programs don’t, what’s different?

While installing and setting up Boot Camp (running nicely on an ancient MacBook Air), I installed these drivers from GitHub: https://github.com/imbushuo/mac-precision-touchpad. The developer did a good job; the cursor is a lot smoother than with the clunkier and pricey Trackpad++.

They emulate a Windows Precision Touchpad using the (in my opinion superior) MacBook trackpad, and they seem to fool Windows: the entire Precision control panel is there and Windows thinks it’s installed.

When I tried to use it with some gesture programs I use on my Windows notebook (MultiSwipe and GestureSign), they wouldn’t pick up any gestures. Either my 2012 MacBook Air is the problem, or the drivers generally don’t work with gesture programs; but Windows thinks Precision drivers are installed, so I don’t know what the difference would be.

I messaged the developer, but this is just a side project of his and he seems busy, so I don’t know if he’ll get back to me. I really hate my heavy, large Windows notebook; maybe a Magic Trackpad, or getting someone to help me fork his source code, could work?

Most people probably don’t care, but if you do almost everything through the trackpad with a Mac gesture program, the Windows gesture programs come close to being as good, and are even better in some ways, like letting you draw whatever you want on the trackpad, which BetterTouchTool can’t do.