php – Adjusting a shortcode to use a custom field’s data

On my WordPress website I use a shortcode in my functions.php that displays someone’s age based on their date of birth. The shortcode was taken from this blog post. This is the code:

function beliefmedia_determine_age($atts, $content = null) {
  extract( shortcode_atts( array(
    'dob' => '', /* See post for date formats */
    'date' => 0,
    'dateformat' => 'jS F Y' /* http://php.net/manual/en/function.date.php */
  ), $atts ) );

  if ($dob == '') $dob = $content;
  $age = ($content == null) ? floor((time() - strtotime($dob)) / 31556926) : floor((time() - strtotime($content)) / 31556926);
  return ($date) ? date($dateformat, strtotime($dob)) . ' (age: ' . $age . ')' : $age;
}
add_shortcode('age', 'beliefmedia_determine_age');

It works fine, but I’m trying to edit it so that the “$dob” value is taken from an ACF “date picker” custom field named “date_of_birth”.

From my research, I believe I need to use “get_field” somehow, but my very limited PHP knowledge has me stuck on how to incorporate it into the shortcode above.
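
For illustration, here is roughly the direction I have in mind: an untested sketch, assuming the date picker’s return format is something strtotime() can parse (e.g. Ymd) and that the shortcode runs inside the post that holds the field:

function beliefmedia_determine_age($atts, $content = null) {
  $atts = shortcode_atts( array(
    'dob'        => '',
    'date'       => 0,
    'dateformat' => 'jS F Y'
  ), $atts );

  /* Prefer the attribute, then the enclosed content... */
  $dob = ($atts['dob'] !== '') ? $atts['dob'] : $content;

  /* ...and fall back to the ACF field on the current post. */
  if (empty($dob) && function_exists('get_field')) {
    $dob = get_field('date_of_birth');
  }
  if (empty($dob)) return '';

  $age = floor((time() - strtotime($dob)) / 31556926); /* approx. seconds per year */
  return $atts['date'] ? date($atts['dateformat'], strtotime($dob)) . ' (age: ' . $age . ')' : $age;
}
add_shortcode('age', 'beliefmedia_determine_age');

The idea is to keep calling it simply as [age], with the attribute/content only as an override; I’m just not sure this is the right way to wire in get_field.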

Any help would be massively appreciated.

Project Online REST API date fields

That’s probably the raw milliseconds.

Dates are actually stored as numbers that represent ticks of a certain size from a certain starting point.

From the MDN documentation on JavaScript dates:

A JavaScript date is fundamentally specified as the number of milliseconds that have elapsed since midnight on January 1, 1970, UTC.

And it then goes on to point out the difference between JavaScript dates and UNIX time stamps, and that in UNIX the tick is a full second (not millisecond):

This date and time are not the same as the UNIX epoch (the number of seconds that have elapsed since midnight on January 1, 1970, UTC), which is the predominant base value for computer-recorded date and time values.

And, in the C# DateTime struct, the ticks are 100 nanoseconds, and are counted since 12:00 midnight on Jan 1, 0001 AD.

So there are some differences. Why would I assume the ones coming from the Project Online REST API are in milliseconds?

Well, for one, Date(<something>) looks a heck of a lot like the JavaScript Date constructor. And, in addition to using an ISO string ("2021-02-10T08:00:00.000Z") or a short date format string ("2/10/2021"), you can use a number representing milliseconds since Jan 1 1970 in the Date constructor.

Second, I am also assuming that you are sending an Accept: application/json header along with your REST request. That tells Project Online you want the response in JavaScript Object Notation, which is a pretty big clue that you are using JavaScript, so it might format the dates in the JSON response in a JavaScript-friendly way.

And third, it’s Project Online, and a heck of a lot of modern development, especially around cloud services/APIs, is very client-side oriented, meaning done in JavaScript. So they could be leaning heavily on that expectation as well.

That’s all speculation though. I really am just guessing that it’s milliseconds (based on those hunches).

A quick test to see if that’s right? Open up a browser console, and just try it out:

// doing this
new Date(1612944000000)

// outputs
Wed Feb 10 2021 03:00:00 GMT-0500 (Eastern Standard Time)

At least, that’s what I get in my time zone. Does that date and time make sense for what you are expecting?

As far as why there is a difference between how REST API date results are formatted for the Project Online API vs. the SharePoint API, I guess that’s a question for the Microsoft development teams…

(As a side note – I haven’t worked with Project Online. I have worked with Project Server 2016 on-prem, and results from the /ProjectData/ REST API in that version do come back as ISO strings, same as from SharePoint, at least if you are looking at things like TaskFinishDate, TaskBaselineFinishDate, etc.)

forms – Should closing the dialog clear its fields?

As the most preferable strategy for users not to lose their data, I identified implementing a confirmation/discard dialog when exiting the original one.


I expect a few more form dialogs to emerge in the app though, for example for editing the profile.


Nielsen Norman Group’s research supports the use of confirmation dialogs, but they also point out not to overuse them, so that they do not lose significance (see https://www.nngroup.com/articles/confirmation-dialog/). I feel the same; I don’t want to add a confirmation/discard dialog everywhere. I believe if ‘post’ is going to be the most significant part of the application, it makes sense to put a confirmation/discard dialog there, but I feel less positive about also putting it on profile editing.

My question therefore is, assuming that we do not want to use confirmation/discard dialogs everywhere in the app, what’s the preferred strategy for the rest of the forms/dialogs?

This is what I can’t decide about:

  1. A user closes the dialog accidentally, opens it again, and realizes they have lost their edits and data – getting annoyed.
  2. A user closes the dialog as an act of discarding, but opens it again a moment later expecting the original data, and instead sees the uncommitted changes from the previous editing – getting confused.

What could the ratio between these two scenarios be in practice?

I noticed that most of the big web platforms do not preserve changes in their secondary (in terms of importance) dialogs. I wonder whether that is a deliberate UX decision or whether it simply makes more sense in their scaled implementations.

For more context: Esc, a click on the overlay/backdrop, and the Close button (which I have yet to add everywhere) all close the dialogs.

sql server – SP2007 (All)UserData nvarchar field content garbled for choice fields

In our SharePoint 2007 installation, one of our list fields is configured as follows:

<Field Type="Choice" DisplayName="Standard" Required="FALSE" Format="RadioButtons"
       FillInChoice="FALSE" Group="gc_xyz" ID="{e5d39160-a777-4d70-b372-a7ca76305adc}"
       SourceID="{21f217b9-cbc5-44b8-96b7-2c665aecc37f}" StaticName="Standard"
       Name="Standard" ColName="nvarchar20" RowOrdinal="0">
  <CHOICES>
    <CHOICE>Yes</CHOICE>
    <CHOICE>No</CHOICE>
  </CHOICES>
</Field>

But when I look in the AllUserData table (or its view), the data for this field is like this:
| nvarchar20 |
|------------|
| 챂䗅⎑啄獌崿 |
| ំ싖줚䎭권䞢⋫ |
| 嫎⚔潣俎즤떴ಇ긅 |
| စ꼨噡䊔ꆫ䐂㪗⋉ |
| ᶷ刊ᯉ䯥梋蓊㯕Ꙃ |
| 㩝䪿撾럌阶紻 |
| ពု帵䙏熦༱䏇䶌 |
| 왜汵䅩粁ʹ猅 |

All values are different, as if hashed. How do I read those values to translate them to Yes/No?

Mandatory Fields Error on Manual order pay page – woocommerce

I’m facing issues with required (mandatory) fields on the manually generated order-pay page (payment link sent through email).

WooCommerce scenario: an order is created by the admin, and the order’s payment link is then sent by email to the customer to pay.

Link like this:

https://asad.app/checkout/order-pay/3566/?pay_for_order=true&key=wc_order_PIBMN20X3Jank

The problem is that when I try to proceed with the payment, it says:

"Sorry, there was an error: The field Address is mandatory., The field City Name is mandatory."

My question is: there are no user-entry fields on this page, so why is it reporting mandatory fields as missing? FYI, this works fine in Chrome on desktop, but when the same link is accessed from Chrome on mobile or from Safari, the error shows up. The link given above is a working link; kindly assist me here.

I’m not using any kind of plugin to manage checkout fields.
This feels strange, as there are no fields on the page, but it still shows the required-fields error.
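
For what it’s worth, below is the kind of untested workaround I’ve been considering, on the guess that some gateway or checkout code re-validates the standard billing fields on the order-pay endpoint:

/* Untested guess: relax the billing address/city requirement only on the
 * order-pay endpoint, in case something re-validates those fields there. */
add_filter( 'woocommerce_billing_fields', function ( $fields ) {
  if ( function_exists( 'is_wc_endpoint_url' ) && is_wc_endpoint_url( 'order-pay' ) ) {
    foreach ( array( 'billing_address_1', 'billing_city' ) as $key ) {
      if ( isset( $fields[ $key ] ) ) {
        $fields[ $key ]['required'] = false;
      }
    }
  }
  return $fields;
} );

I haven’t confirmed this explains the mobile-only behaviour, so pointers to the actual cause would be preferred over the workaround.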

forms – Validation when all fields become required because an optional field has a value

I have a form with an optional username/password pair, but as soon as either the username or the password has a value, both become required. I’m not sure how I should approach this.
So far the validation message is a little wordy.

I considered a single combined validation message instead, but it gives an either/or impression.

sql server – Does the length of the fields play a role in index access in MS-SQL?

Last week I started two threads dealing with slow SQL selects that probably did not use the index.

Today I noticed the following: The customer and the material are involved in almost all index accesses.

Customer and material are nvarchar fields everywhere, and the customer and material numbers are the same in all tables.

Only the length of the nvarchar fields differs between tables.
The customer column is sometimes nvarchar(10), sometimes nvarchar(20), and sometimes nvarchar(30).
This is because these tables were created by external consultants who each used a different length for the customer fields.

However, the customer values are only seven characters long in all tables.

Could that be a reason that the index access is not working?

Does the nvarchar length play a role when joining different tables on fields for which an index exists?

8 – How to migrate content (fields and paragraphs) into another content type?

My problem is the following:

Initial situation

On my Drupal 8 site I have a node content type (let’s call it content type A). Content type A has normal (core) fields and also holds a field with Paragraphs items.

Problem/proposed solution

Now I have to change my data model. Since it is not advised to change a node content type’s machine name in a Drupal 8 site, I should go another way:

  1. Clone the content type (I would use the Entity type clone module for this step). Let’s call it content type B.
  2. Clone all existing content of content type A into new nodes of content type B (see the sketch below).
  3. Modify each content/content type as needed by the requirements of the new data model.

Question

How can I perform step 2, especially with the existing paragraphs items?
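
For reference, this is the direction I have sketched so far for step 2: an untested draft (meant for a one-off update hook or drush script) in which type_a, type_b and field_paragraphs stand in for my real machine names:

use Drupal\node\Entity\Node;

/* Untested: copy every node of type_a into a new type_b node,
 * duplicating the paragraph entities so the clones don't share them. */
$storage = \Drupal::entityTypeManager()->getStorage('node');
$nids = $storage->getQuery()
  ->condition('type', 'type_a')
  ->accessCheck(FALSE)
  ->execute();

foreach ($storage->loadMultiple($nids) as $old) {
  $new = Node::create([
    'type'  => 'type_b',
    'title' => $old->getTitle(),
    'body'  => $old->get('body')->getValue(),  // repeat for the other plain fields
  ]);

  // Paragraph items have to be duplicated, not merely re-referenced.
  $items = [];
  foreach ($old->get('field_paragraphs') as $item) {
    $dup = $item->entity->createDuplicate();
    $dup->save();
    $items[] = [
      'target_id' => $dup->id(),
      'target_revision_id' => $dup->getRevisionId(),
    ];
  }
  $new->set('field_paragraphs', $items);
  $new->save();
}

I am especially unsure whether this is the right way to handle the paragraph items.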

Thanks in advance for help and/or alternative approaches.

ag.algebraic geometry – Algebraization of vector bundles over non-algebraically closed fields

I’ve asked this question here but never got an answer; a simplified version of the question is the following:

Given an ample divisor on a smooth projective variety over a finite field, is the category of vector bundles defined on a neighborhood of the divisor equivalent to the category of vector bundles on the formal neighborhood of the ample divisor? (This is true if we work over an algebraically closed field)

Tits, Reductive groups over local fields, 1.15/3.11. Problem with affine root subgroups of $SU_3$, ramified case, residue characteristic $p=2$

Let $L/K$ be a ramified quadratic extension of local fields, and let the characteristic of the residue field of $K$ be $2$. Let $\mathbb{G}=SU_3$ and $G=\mathbb{G}(K)$. Let $\text{val}$ be a valuation on $K$ such that $\text{val}(K^\times) = \mathbb{Z}$ (and $\text{val}(L^\times) = \frac{1}{2}\mathbb{Z}$).

Following Tits 1.15 and 3.11, I have been trying to work out the parahoric subgroups of $G$ attached to the special vertices $\nu_0$ and $\nu_1$ in the building of $G$.

Firstly, I’ll start with a description of the root subgroups of $G$. I’m using a slightly different notation from Tits’. Let $$u_+(c,d) = \begin{pmatrix} 1 & -\bar{c} & d \\ 0 & 1 & c \\ 0 & 0 & 1 \end{pmatrix},$$
with $\bar{c}c+d+\bar{d}=0$.
Similarly, $$u_-(c,d) = \begin{pmatrix} 1 & 0 & 0 \\ c & 1 & 0 \\ d & -\bar{c} & 1 \end{pmatrix},$$
with $\bar{c}c+d+\bar{d}=0$.

We have the root subgroups $U_{\pm a}(K) = \{ u_\pm(c,d) : c,d \in L \}$ and $U_{\pm 2a} = \{ u_\pm(0,d) : d \in L \}$.

Tits later defines $\delta = \sup\{\text{val}(d) : d \in L,\ \bar{d}+d+1=0\}$. We have $\delta=0$ in the unramified case and in the ramified case with residue characteristic $p \neq 2$. However, when $L/K$ is ramified with residue characteristic $2$, $\delta$ is strictly negative.

From here, Tits finds the set of affine roots of $G$ as $$\Big\{\pm a + \frac{1}{2}\mathbb{Z} + \frac{\delta}{2}\Big\} \cup \Big\{\pm 2a + \mathbb{Z} + \frac{1}{2} + \delta\Big\}.$$

Affine root subgroups are given by $$U_{\pm a + \gamma/2} = \{ u_\pm(c,d) : \text{val}(d) \geq \gamma \},$$
$$U_{\pm 2a + \gamma} = \{ u_\pm(0,d) : \text{val}(d) \geq \gamma \}.$$

The special points $\nu_0$ and $\nu_1$ in the standard apartment are defined by $$a(\nu_1)=\frac{\delta}{2}, \qquad a(\nu_0) = \frac{\delta}{2} + \frac{1}{4}.$$

From here, one can find that $$G_{\nu_1} = \langle T_0, U_{a-\frac{\delta}{2}}, U_{-a+\frac{\delta}{2}}, U_{2a+\frac{1}{2}-\delta}, U_{-2a+\frac{1}{2}+\delta} \rangle,$$
$$G_{\nu_0} = \langle T_0, U_{a-\frac{\delta}{2}}, U_{-a+\frac{1}{2}+\frac{\delta}{2}}, U_{2a-\frac{1}{2}-\delta}, U_{-2a+\frac{1}{2}+\delta} \rangle.$$

In 3.11, Tits takes a $\lambda \in L$ with $\text{val}(\lambda) = \delta$, satisfying $\lambda+\bar{\lambda}+1=0$, in such a way that $\lambda \varpi_L + \overline{(\lambda \varpi_L)}=0$ for some uniformizer $\varpi_L$ of the ring of integers $\mathcal{O}_L$ of $L$.

In 3.11, Tits defines the lattices $$\Lambda_{\nu_1} = \mathcal{O}_L \oplus \mathcal{O}_L \oplus \lambda\mathcal{O}_L,$$
$$\Lambda_{\nu_0} = \varpi_L^{-1}\mathcal{O}_L \oplus \mathcal{O}_L \oplus \lambda\mathcal{O}_L.$$ Let $P_{\nu_1}$ and $P_{\nu_0}$ be their respective stabilizers.
Tits then states that $G_{\nu_i} = P_{\nu_i} \cap G$ for $i=0,1$.

Here’s where my problem comes in.

Consider $G_{\nu_1} = \langle T_0, U_{a-\frac{\delta}{2}}, U_{-a+\frac{\delta}{2}}, U_{2a+\frac{1}{2}-\delta}, U_{-2a+\frac{1}{2}+\delta} \rangle$. The stabilizer of the lattice $\Lambda_{\nu_1}$ in $GL_3(L)$ has the form
$$\begin{pmatrix} \mathcal{O}_L & \mathcal{O}_L & \mathfrak{p}_L^{-2\delta} \\ \mathcal{O}_L & \mathcal{O}_L & \mathfrak{p}_L^{-2\delta} \\ \mathfrak{p}_L^{2\delta} & \mathfrak{p}_L^{2\delta} & \mathcal{O}_L \end{pmatrix}.$$
Since $\delta < 0$, intersecting this stabilizer with $G$ would give us matrices roughly of the form
$$\begin{pmatrix} \mathcal{O}_L & \mathfrak{p}_L^{-2\delta} & \mathfrak{p}_L^{-2\delta} \\ \mathcal{O}_L & \mathcal{O}_L & \mathfrak{p}_L^{-2\delta} \\ \mathfrak{p}_L^{2\delta} & \mathcal{O}_L & \mathcal{O}_L \end{pmatrix}.$$

Presumably, this would tell us that $$U_{a-\frac{\delta}{2}} = \{ u_+(c,d) : c,d \in L,\ \text{val}(d) \geq -\delta \textbf{ and } \text{val}(c) \geq -\delta \},$$
$$U_{-a+\frac{\delta}{2}} = \{ u_-(c,d) : c,d \in L,\ \text{val}(d) \geq \delta \textbf{ and } \text{val}(c) \geq 0 \}.$$
Normally, one would expect that if $\text{val}(d) = \gamma$, then $\text{val}(c) = \frac{\gamma}{2}$ or $\frac{\gamma}{2}+\frac{1}{4}$, according to whether $\gamma \in \mathbb{Z}$ or only $\gamma \in \frac{1}{2}\mathbb{Z}$.

I cannot work out algebraically why we have these improved bounds on the valuation of $c$ for these affine root subgroups. I assume it involves some manipulation with $\lambda$, but I am not making any progress.

Thank you