sql server – Maintenance Plan folder missing in SQL Managed Instance (via VM)

Accessing our new SQL Managed Instance from an Azure VM via SSMS, the Maintenance Plan folder is missing from the Object Explorer pane. Reading other posts (which all relate to other versions of SQL Server), I’ve checked that I have valid permissions:

(screenshot of permissions omitted)

Is there a way to enable maintenance plans in SQLMI?

If not (assuming that SQLMI doesn’t use maintenance plans), what is the modern-day Azure equivalent for performing automated backups, cleanup, and index maintenance?

How to get an actual execution plan from the linked server?

There is a “main” SQL Server to which I have full access and I can connect to it via SSMS. Its version is:

Microsoft SQL Server 2016 (SP1-CU15-GDR) (KB4505221) - 13.0.4604.0 (X64) 
    Jun 15 2019 07:56:34 
    Copyright (c) Microsoft Corporation
    Enterprise Edition: Core-based Licensing (64-bit) on Windows Server 2016 Datacenter 10.0 <X64> (Build 14393: ) (Hypervisor)

There is a linked server defined on the main server. The linked server is:

Microsoft SQL Azure (RTM) - 12.0.2000.8 
    Feb 20 2021 17:51:58 
    Copyright (C) 2019 Microsoft Corporation

I can’t connect directly to that remote server via SSMS, but I can run all sorts of queries against it using OpenQuery. For example, I got its version by running this:

select * from OpenQuery(LinkedServerName, 'select @@version');

There are a number of complex legacy queries that are run on the remote linked server via OpenQuery.

I wanted to analyze them and see if there are any obvious/easy ways to improve their performance. To do that, I wanted to get their execution plans. If I try to get an execution plan on the local server, all I get is a single Remote Scan operator that takes 35 minutes, without any details.

I know that there is a SET STATISTICS XML ON statement that returns an execution plan of a query.
Unfortunately, when I tried to put it into the OpenQuery, it didn’t work.

Let me explain.

I can run the actual query:

select * from OpenQuery(LinkedServerName, '
    --SET STATISTICS XML ON;
    select TOP(10) * from bms.digitalbookinglinezone;
    --SET STATISTICS XML OFF;
');

This returns me 10 rows as expected. When I uncomment the SET STATISTICS XML lines I get the following error message:

Msg 11527, Level 16, State 1, Procedure
sys.sp_describe_first_result_set, Line 1 (Batch Start Line 0) The
metadata could not be determined because statement ‘SET STATISTICS XML
ON;’ does not support metadata discovery.

At the same time I can run the following query just fine:

select * from OpenQuery(LinkedServerName, '
    SELECT * FROM sys.dm_exec_describe_first_result_set
    (N''select TOP(10) * from bms.digitalbookinglinezone'', null, 1)
');

And I’m getting valuable information about all columns in the remote table.

Is there any other “T-SQL” way of getting the execution plans?
Something that I could use via the OpenQuery?
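One avenue I have been considering, in case it helps frame answers: reading the remote plan cache through the same OpenQuery mechanism. This is only a sketch (untested from my side, and it would need VIEW DATABASE STATE on the remote Azure SQL database); the LIKE filter is just an example of finding the legacy queries:

```sql
-- Sketch (untested): pull cached plans for the legacy queries from the
-- remote plan cache via the same linked server. The XML plan is cast to
-- nvarchar(max) so OPENQUERY can describe the result set's metadata.
select * from OpenQuery(LinkedServerName, '
    SELECT TOP (10)
        st.text AS query_text,
        qs.execution_count,
        qs.total_elapsed_time / qs.execution_count AS avg_elapsed_us,
        CAST(qp.query_plan AS nvarchar(max))       AS plan_xml
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle)   AS st
    CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
    WHERE st.text LIKE ''%digitalbookinglinezone%''
    ORDER BY qs.total_elapsed_time DESC');
```

These would be cached (estimated-plus-last-run) plans rather than a true actual plan, but they do include the remote operators and row estimates.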

cuda – cuFFT plan fails for a large size

I am running a cuFFT on a large signal of size 2^29. However, cufftPlan1d fails to create a plan and returns:

CUFFT_INTERNAL_ERROR = 5, // Driver or internal cuFFT library error

This is all I could find about this error in the cuFFT docs. How do I make a plan for this size? Is there another function I should use instead of this call?

cufftPlan1d(&plan, SIZE, CUFFT_Z2Z, 1);

I have also checked that I am not running out of memory; usage is well within range. The size is also a power of 2.
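One alternative to cufftPlan1d is the handle-based API (cufftCreate plus cufftMakePlanMany64), whose 64-bit size arguments are the documented route for very large transforms. A sketch of what I mean (untested here, minimal error handling, requires a CUDA-capable build environment):

```c
#include <stdio.h>
#include <cufft.h>

/* Sketch: plan a 1-D double-complex FFT of 2^29 points with the
 * handle-based 64-bit API instead of cufftPlan1d. Note the footprint:
 * 2^29 complex doubles is 8 GiB per buffer, plus the work area that
 * cufftMakePlanMany64 reports, so "enough memory" has to cover all of
 * these at once. */
int main(void)
{
    long long n[1] = { 1LL << 29 };
    size_t workSize = 0;
    cufftHandle plan;

    cufftResult r = cufftCreate(&plan);
    if (r != CUFFT_SUCCESS) {
        fprintf(stderr, "cufftCreate failed: %d\n", r);
        return 1;
    }

    /* NULL inembed/onembed means a simple contiguous layout; batch = 1. */
    r = cufftMakePlanMany64(plan, 1, n,
                            NULL, 1, n[0],   /* input layout  */
                            NULL, 1, n[0],   /* output layout */
                            CUFFT_Z2Z, 1, &workSize);
    if (r != CUFFT_SUCCESS) {
        fprintf(stderr, "cufftMakePlanMany64 failed: %d\n", r);
        return 1;
    }

    printf("plan created, work area: %zu bytes\n", workSize);
    cufftDestroy(plan);
    return 0;
}
```

If this fails too, the reported work-area size for smaller transforms can be extrapolated to check whether the card genuinely has room for the data plus the scratch space.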

index – Indexed columns in SQL Server do not appear to work for basic queries according to execution plan

Disclaimer: I’m not a DBA. I have picked up a few things from this board in the past that I’m building from.

I have a table of Google Analytics session start times. I have an index on each column. I want to filter for all sessions that were started between two dates. The screenshot below shows the query and the index.

Query text and index properties

The query runs quickly but I do not believe it’s using the index based on the Execution plan which both says that there’s a missing index and shows a table scan rather than an index scan:

(execution plan screenshot omitted)

Is it because of something about the way I’m searching through the datetime? If, instead of looking between dates, I set it equal to a date, the execution plan shows it using the index:

Using index

But it’s not just this table or datetime. Here’s a different table with an index on a varchar column:

metadata index

And a simple query on this one also tells me I’m missing the index:

missing md index

I’m stumped.
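For concreteness, here is a minimal repro of the pattern as I understand it (table and column names are made up; the real ones are in the screenshots above):

```sql
-- Hypothetical names, standing in for the screenshots. With a
-- single-column index on the datetime and other columns in the select
-- list, a wide range predicate makes the optimizer weigh a seek plus
-- many key lookups against a single scan; it may choose the scan (and
-- suggest a "missing" covering index) even though an index exists.
SELECT SessionId, SessionStartTime
FROM   dbo.GaSessions
WHERE  SessionStartTime >= '2021-01-01'
  AND  SessionStartTime <  '2021-02-01';

-- An equality probe touches few rows, so the same index gets used:
SELECT SessionId, SessionStartTime
FROM   dbo.GaSessions
WHERE  SessionStartTime = '2021-01-15';
```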

sql server – Adding an INNER JOIN ruins query performance due to different execution plan despite updated STATISTICS and RECOMPILE, why?

In the home page of my multi-tenant web application, LINQ to Entities (EF6) is used to get a paged list of Calendar Events. This data is JOINed with some other tables to produce the query below.

The tables all have composite primary keys that combine their TenantId with their own IDENTITY column. I put the IDENTITY column first in the clustered index because the STATISTICS histogram only covers the first column of an index, and I understand that having the most selective column first improves the performance of single-row lookups. (If I’m wrong about this, please let me know!)
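For illustration, the two key orders I weighed look like this (a sketch with hypothetical _A/_B names; my real table definition follows below):

```sql
-- What I chose: IDENTITY first, tenant second.
CREATE TABLE dbo.Events_A (
    EventId  int NOT NULL IDENTITY(1,1),
    TenantId int NOT NULL,
    CONSTRAINT PK_Events_A PRIMARY KEY CLUSTERED ( EventId, TenantId )
);

-- The alternative: tenant first, which clusters each tenant's rows
-- together so a WHERE TenantId = @t ... ORDER BY EventId DESC page
-- could be read as one contiguous range.
CREATE TABLE dbo.Events_B (
    EventId  int NOT NULL IDENTITY(1,1),
    TenantId int NOT NULL,
    CONSTRAINT PK_Events_B PRIMARY KEY CLUSTERED ( TenantId, EventId )
);
```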

The schema is this:

CREATE TABLE dbo.Events (
    EventId     int NOT NULL IDENTITY(1,1),
    TenantId    int NOT NULL,
    CustomerId  int     NULL,
    EventTypeId int NOT NULL,
    LocationId  int NOT NULL,

    CONSTRAINT PK_Events PRIMARY KEY ( EventId, TenantId ),
    CONSTRAINT FK_Events_Tenants    FOREIGN KEY ( TenantId )              REFERENCES dbo.Tenants    ( TenantId ),
    CONSTRAINT FK_Events_Customers  FOREIGN KEY ( CustomerId, TenantId )  REFERENCES dbo.Customers  ( CustomerId, TenantId ),
    CONSTRAINT FK_Events_EventTypes FOREIGN KEY ( EventTypeId, TenantId ) REFERENCES dbo.EventTypes ( EventTypeId, TenantId ),
    CONSTRAINT FK_Events_Locations  FOREIGN KEY ( LocationId, TenantId )  REFERENCES dbo.Locations  ( LocationId, TenantId )
);

The query that Linq generates is this:

(reformatted for readability: indentation, renaming Linq’s Extent1 aliases to shorter aliases based on the table names, removing redundant column aliases, and removing Linq’s idiosyncratic Project1 wrapper; these changes did not affect the execution plan)

SELECT
    e.*, c.*, l.*, t.*  -- (column list abbreviated)
FROM
    dbo.Events AS e
    LEFT OUTER JOIN dbo.Customers AS c ON
        c.TenantId = e.TenantId
        AND c.CustomerId = e.CustomerId
    INNER JOIN dbo.Locations AS l ON
        l.TenantId = e.TenantId
        AND l.LocationId = e.LocationId
    INNER JOIN dbo.EventTypes AS t ON
        t.TenantId = e.TenantId
        AND t.EventTypeId = e.EventTypeId
WHERE
    e.TenantId = @tenantId
ORDER BY
    ROW_NUMBER() OVER ( ORDER BY e.EventId DESC, e.TenantId DESC )
OFFSET @offset ROWS FETCH NEXT @pageSize ROWS ONLY;
  • The @tenantId parameter is 123.
  • The @offset parameter is 0.
  • The @pageSize parameter is 50.
  • I ran the query before and after a rebuild of all indexes, as well as updating all STATISTICS with sp_updatestats @resample = 'resample'. This had no effect on the execution plan.
  • I also ran them with OPTION(RECOMPILE). This also had no effect on the execution plan.
  • This query is being run on Azure SQL, btw.
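For reference, the maintenance steps from the list above were run roughly like this (sketch of the actual commands):

```sql
-- Rebuild every index on the table and refresh all statistics, as
-- described in the list above (this had no effect on the plan).
ALTER INDEX ALL ON dbo.Events REBUILD;
EXEC sp_updatestats @resample = 'resample';
```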

Recently (starting a few weeks ago) this query has been running very slowly, so I got the execution plan for it from SSMS:

(execution plan screenshot omitted)

  • I have circled all execution plan nodes that pass 41,109 rows around.
  • 41,109 is the number of rows in my dbo.Events table that correspond to the TenantId account I was looking at.
    • So every time I saw 41,109 in the plan I know that it was reading all dbo.Events rows – or index nodes – for that Tenant.
    • But the query has paging via OFFSET ROWS FETCH ROWS with an ORDER BY that uses the Clustered Index key columns – so it should not be reading more than 50 rows from the dbo.Events table!

I saw that the plan was reading from the dbo.EventTypes table first, and using the output of that read to filter rows from dbo.Events – but that’s not my intention. My intention is for dbo.Events to be the “primary” table, and for it to get rows from the other tables (dbo.Customers, dbo.EventTypes, and dbo.Locations) based on the rows it read from dbo.Events.

When I removed the INNER JOIN dbo.EventTypes AS t ON... part of the query (and the corresponding columns) and re-ran it, the execution plan was what I intended and the query ran very fast (except that it lacks the dbo.EventTypes data I still need…):

SELECT
    e.*, c.*, l.*  -- (column list abbreviated)
FROM
    dbo.Events AS e
    LEFT OUTER JOIN dbo.Customers AS c ON
        c.TenantId = e.TenantId
        AND c.CustomerId = e.CustomerId
    INNER JOIN dbo.Locations AS l ON
        l.TenantId = e.TenantId
        AND l.LocationId = e.LocationId
WHERE
    e.TenantId = @tenantId
ORDER BY
    ROW_NUMBER() OVER ( ORDER BY e.EventId DESC, e.TenantId DESC )
OFFSET @offset ROWS FETCH NEXT @pageSize ROWS ONLY;

The execution plan now shows it reading exactly 50 rows from the dbo.Events table. (Curiously, it also shows it reading from the dbo.Locations table and indexes twice – I don’t understand why).

(execution plan screenshot omitted)

The Execution Plan window does not indicate that any indexes are missing for either query. When dbo.EventTypes is included in the query, the execution plan shows that the Index Seek node on dbo.Events uses TenantId as its only seek predicate, ignoring the EventId ordering that the ORDER BY clause should let it exploit.

Unfortunately, none of the online resources I’ve found that advise improving execution plans through judicious use of indexes helped me in this case because, as far as I know, the dbo.Events table has plenty of index coverage (there are more than 15 indexes on the table), and none of them discussed how indexes are used by ORDER BY, TOP, and OFFSET. Additionally, they all advised that incorrect estimated row counts (like how I see 2616 estimated but 41109 actual, or 77192 estimated but 41109 actual) would be rectified by updating STATISTICS, but I did update all statistics and that didn’t help at all.

planning – Should I plan ahead, or figure out programs as I’m writing them?

I’ll chime in with my two cents as well.

Writing out a full program on paper before you start is a “CS 101” exercise that works for Hello World.
But spewing out code without taking the time to think about what you want to achieve is a waste of time as well.

Between these two extremes, you have the full range of software methodologies, from hard Waterfall (an extensive set of specifications, pretty much one output at the end) to Agile (“light” specs with very fast turnaround between releases and constant “customer” dialogue).

In any case, you need to know what you are building before you can start, whatever the methodology. This will help frame the program’s architecture. Waterfall puts an emphasis on having the whole system architected before you start coding, so that you know what you will end up with.
Agile puts an emphasis on “spec the minimum functionality to be achieved for the next iteration”, and will admonish you to be prepared to re-architect constantly (by creating suitable test suites and the like so that you can actually carry out such work).

Note that NO methodology will tell you to just start coding without a thought, and indeed, you should have a pretty good idea of what you are trying to do (be it the whole thing with Waterfall, or a smaller area with Agile). You do improvise somewhat in “HOW” you do something, not in “WHAT” you are trying to do.

I can draw a parallel with what you learn when taking acting lessons. One big part is “improv”. When starting an improvisation, you need to come in with as little set in stone as you can, and be prepared to abandon your idea if someone takes the scene in another direction. You need great listening skills, and you need to accept the others’ propositions. The main thing is to come on stage with a character, a situation, maybe a feeling you want to convey, and potentially an ending towards which you want to take the scene. But the further away in the scene that idea lies, the more likely it is that you will end up somewhere else. In the end, it’s a very “Agile” way of thinking, which might be what you are struggling with here.

Now, if you are talking at the “let’s write code” level (specs, stories or whatever are available), then it’s up to you. Some people write pseudocode, some write comments before the actual code (in a “fill in the blank” fashion), and some start by writing tests (see Test Driven Development). In any case, the goal is to frame your idea so that you achieve what you set out to do in your particular method.
At that level, most people do some thinking before they write the actual code. But going so far as to write down the code beforehand on paper seems like overkill to me…