SQL Server UPDATE via SELECT
The following execution plan illustrates the previous query; this run completed in 68 seconds. Before executing the update, we had added a non-clustered index on the Persons table whose index key consists of the PersonCityName and PersonPostCode columns.

The next execution plan demonstrates the same query run without the added index; that run completed within seconds, unlike the first one. We see this obvious performance difference between the two runs of the same query because of index usage on the updated columns: every row the query updates must also be updated in the index.

As a result, if the updated columns are used by indexes, as in this example, query performance can be affected negatively. We should consider this problem in particular when updating a large number of rows.

To overcome this issue, we can disable or remove the index before executing the update query. Separately, a warning sign appears on the Sort operator, indicating that something did not go well for this operator. When we hover the mouse over the operator, we can see the warning details.
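As a sketch of this workaround (the index name IX_Persons_CityPostCode and the literal values here are assumptions for illustration), the index can be disabled before the bulk update and rebuilt afterwards:

```sql
-- Hypothetical index name; substitute your own.
-- Disabling the index stops SQL Server from maintaining it during the update.
ALTER INDEX IX_Persons_CityPostCode ON Persons DISABLE;
GO

UPDATE Persons
SET PersonCityName = 'Berlin',
    PersonPostCode = '10115'
WHERE PersonCityName = 'Munich';
GO

-- A disabled index must be rebuilt before it can be used again.
ALTER INDEX IX_Persons_CityPostCode ON Persons REBUILD;
GO
```

Note that this technique applies to non-clustered indexes; disabling a clustered index makes the table itself inaccessible until the index is rebuilt.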

During query execution, the query optimizer calculates the memory the query requires based on the estimated row count and row size. However, this estimate can be wrong for a variety of reasons, and if the query requires more memory than was estimated, it uses tempdb.

This mechanism is called a tempdb spill, and it causes performance loss: memory is always faster than the tempdb database, because tempdb uses disk resources.
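As a hedged way to spot queries that have spilled, recent versions of SQL Server (2016 SP2 and later) expose spill counters in the cached query statistics; a diagnostic query along these lines can list the worst offenders:

```sql
-- Top queries by total tempdb spill pages (requires SQL Server 2016 SP2+).
SELECT TOP (10)
       qs.total_spills,                      -- pages spilled across all executions
       qs.execution_count,
       SUBSTRING(st.text, 1, 200) AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE qs.total_spills > 0
ORDER BY qs.total_spills DESC;
```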

Returning to our topic, the MERGE statement can be used as an alternative method for updating data in one table with data from another. In this method, the reference table is the source table and the target table is the table to be updated.

The following query is an example of this usage. We typed the Persons table right after the MERGE keyword because it is the target table we want to update, and we gave it the alias Per so we can refer to it in the rest of the query. This syntax lets us define the join condition between the target and source tables. In the last line of the query, we choose how the matched rows are manipulated. Finally, we added the semicolon (;), because MERGE statements must be terminated with a semicolon.
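A minimal sketch of that syntax might look like this (the source table AddressList and the key and column names are assumptions; the Persons target and its Per alias come from the text above):

```sql
-- Persons is the target (aliased Per); AddressList is an assumed source table.
MERGE Persons AS Per
USING AddressList AS Addr
    ON Per.PersonId = Addr.PersonId          -- join condition: target vs. source
WHEN MATCHED THEN
    UPDATE SET Per.PersonCityName = Addr.CityName,
               Per.PersonPostCode = Addr.PostCode;  -- MERGE must end with ;
```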

The major characteristic of a subquery is that it can only be executed together with its outer query. Should you decide to remove the sample objects, the following block of code removes the tables and schema from your AdventureWorks database.
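The cleanup can be sketched as follows (assuming, as in this tutorial's examples, that the Test schema holds a Person table whose foreign key references a second table, here assumed to be Test.Address):

```sql
-- Drop order matters: the table holding the foreign key must go first,
-- otherwise the constraint blocks the drop of the referenced table.
DROP TABLE Test.Person;
DROP TABLE Test.Address;   -- assumed name of the referenced table
DROP SCHEMA Test;
GO
```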

This is due to the primary key/foreign key constraint between the two tables in the Test schema. While any of the three options listed above is an excellent way of updating multiple rows at a time with distinct data, you, the user, will find yourself leaning more toward one than another. So, go with what is comfortable for you. However, you may also want to consider the performance cost of the option you choose from this SQL tutorial.

In this simple example, the performance cost is minimal regardless of which option you select. On a larger database, however, it could be a resource hog.
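As one hedged illustration of the UPDATE ... FROM pattern discussed above (the Test.Person working copy and the AdventureWorks-style Person.Address table, along with the joined column names, are assumptions for this sketch):

```sql
-- Update a working copy (Test.Person) from a reference table via a join.
-- Table and column names are assumed for illustration.
UPDATE tp
SET    tp.City       = a.City,
       tp.PostalCode = a.PostalCode
FROM   Test.Person AS tp
JOIN   Person.Address AS a
       ON tp.AddressID = a.AddressID;
GO
```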
