SQL Server Database Development

Data Table Fundamentals

Create table and insert data
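
A minimal sketch of the pattern (table and column names here are hypothetical, not from the course):

    -- Create a simple table
    CREATE TABLE dbo.Employees (
        EmployeeID int IDENTITY(1,1) PRIMARY KEY,
        FirstName  varchar(50) NOT NULL,
        LastName   varchar(50) NOT NULL,
        HireDate   date NULL
    );

    -- Insert a couple of rows
    INSERT INTO dbo.Employees (FirstName, LastName, HireDate)
    VALUES ('Ada', 'Lovelace', '2017-03-01'),
           ('Alan', 'Turing', '2016-07-15');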

 

Add column to existing table
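
A sketch reusing the hypothetical dbo.Employees table from above; adding the column as NULL (or with a DEFAULT) keeps existing rows valid:

    ALTER TABLE dbo.Employees
        ADD MiddleName varchar(50) NULL;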

 

Create schemas and transfer tables between schemas
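
A sketch of the two statements involved (schema and table names are hypothetical):

    -- Create the schema, then move an existing table into it
    CREATE SCHEMA HumanResources;
    GO
    ALTER SCHEMA HumanResources TRANSFER dbo.PayrollData;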

 

System tables and System schemas
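
For example, the catalog views in the sys schema expose metadata about the objects in a database; a quick way to explore them:

    -- List user tables with their schema and creation date
    SELECT s.name AS SchemaName, t.name AS TableName, t.create_date
    FROM sys.tables AS t
        JOIN sys.schemas AS s ON t.schema_id = s.schema_id
    ORDER BY s.name, t.name;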

 

Tables with calculated columns

 

Add calculated column to table

Use calculated column as primary key
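
A sketch covering both ideas, reusing the hypothetical dbo.Employees table; a calculated (computed) column can serve as a primary key only when its expression is deterministic, and PERSISTED stores the computed value on disk:

    -- Add a computed column to an existing table
    ALTER TABLE dbo.Employees
        ADD FullName AS (FirstName + ' ' + LastName) PERSISTED;

    -- A computed column as a primary key (hypothetical table)
    CREATE TABLE dbo.Badges (
        Prefix  char(2) NOT NULL,
        Serial  int     NOT NULL,
        BadgeID AS (Prefix + CAST(Serial AS char(10))) PERSISTED PRIMARY KEY
    );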

 

Implement Advanced Table Features

Partition large data tables

Partition table: 

Tables that have been split horizontally and have records spread across multiple filegroups.

Benefits of a partition table:

  • By spreading a single table across several filegroups that are themselves stored on separate hard drives, you can increase read and write performance, since those operations can be performed simultaneously across partitions.
  • Partitioning also speeds up maintenance tasks, since a single partition can be modified without locking the entire table.

Four steps to creating a partitioned table:

  1. Create Filegroup: The physical location on disk where the records will be stored
  2. Create Partition Function: Evaluates the data in a table and segments the data based on rules specified in the function
  3. Create Partition Scheme: Maps the segments from the function to the physical filegroups for storage
  4. Create the Table: Create as normal, but specify the partition scheme as the storage location

1. Create Filegroups: walkthrough
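
The walkthrough's statements look roughly like this (the database name and file path are hypothetical):

    -- Add a filegroup, then attach a physical data file to it
    ALTER DATABASE SampleDB ADD FILEGROUP FG_2015;

    ALTER DATABASE SampleDB
        ADD FILE (NAME = 'FG_2015_Data',
                  FILENAME = 'C:\SQLData\FG_2015_Data.ndf')
        TO FILEGROUP FG_2015;

    -- Repeat for FG_2016, FG_2017, and FG_2018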

2. Create Partition Function

The values 2015, 2016, and 2017 are the boundaries where the table will be split.

This will create four partitions:

  1. 2015 and earlier
  2. 2016
  3. 2017 
  4. 2018 and beyond

Rows are placed based on the year value, which is stored in a smallint data type.

You must always have one more filegroup than the number of boundary values when creating a partition.

We created four filegroups (FG_2015, FG_2016, FG_2017, FG_2018) but only three boundary values (2015, 2016, 2017).
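
A sketch of the function described above (the function name is hypothetical); RANGE LEFT means each boundary value falls into the partition to its left:

    -- Four partitions: (<= 2015), (2016), (2017), (>= 2018)
    CREATE PARTITION FUNCTION pf_OrderYear (smallint)
        AS RANGE LEFT FOR VALUES (2015, 2016, 2017);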

3. Create Partition Scheme
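
A sketch mapping the four partitions to the four filegroups, in order:

    CREATE PARTITION SCHEME ps_OrderYear
        AS PARTITION pf_OrderYear
        TO (FG_2015, FG_2016, FG_2017, FG_2018);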

4. Create table within partition scheme
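
A sketch with a hypothetical orders table; the ON clause names the partition scheme and the partitioning column:

    CREATE TABLE dbo.Orders (
        OrderID   int IDENTITY(1,1) NOT NULL,
        OrderYear smallint NOT NULL,
        Amount    money NOT NULL
    ) ON ps_OrderYear (OrderYear);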

Test inserting data into partition table
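
One way to verify the rows land where expected, using the $PARTITION function against the partition function defined earlier:

    INSERT INTO dbo.Orders (OrderYear, Amount)
    VALUES (2015, 10), (2016, 20), (2018, 30);

    -- Shows the partition number each row was routed to
    SELECT OrderYear,
           $PARTITION.pf_OrderYear(OrderYear) AS PartitionNumber
    FROM dbo.Orders;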

 

Create temporary tables

Temporary tables are tables that exist temporarily on the SQL Server instance. They are useful for storing intermediate result sets that are accessed multiple times.

Local temporary tables have a single hash symbol (#) as the first character of their name.

  • These can be used by the current user during the current session only.
  • They’re automatically deleted when the user logs off from the server instance.

Global temporary tables have two hash symbols, or a double hash (##), as the first characters of their name.

  • They’re actually visible to any user after they’re created, not just the current user, but they also get deleted when all users referencing the table disconnect from their instances of SQL Server.

Use table variables

When you only need a table for a very brief moment, consider using a table variable.
Table variables are created, used, and destroyed all in a single batch operation; it's not possible to use them across multiple calls to the database.

Table variables are scoped to the batch, which means that every step must be performed at once and as soon as the batch of tasks is complete, the table variable is removed from the system.
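
A sketch of the declare-use-discard pattern within one batch (names are hypothetical, reusing the dbo.Orders table from the partitioning example):

    DECLARE @Totals TABLE (OrderYear smallint, Total money);

    INSERT INTO @Totals (OrderYear, Total)
    SELECT OrderYear, SUM(Amount)
    FROM dbo.Orders
    GROUP BY OrderYear;

    SELECT * FROM @Totals;
    -- @Totals disappears as soon as the batch completes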

 

Understand column collation

Collation controls the sort order of items in a database, as well as the case sensitivity of the system. 

If the collation is case sensitive, then the uppercase letters are treated differently than the lowercase letters. 

Objects can also be set to accent sensitive, in which case letters with accent characters will sort in a specific order. 

Collation can be set at any level: the server, the database, the table, or the column.

The collation name SQL_Latin1_General_CP1_CI_AS breaks down into meaningful parts:

  1. Latin1_General: the server treats strings using the Latin-1 character set, a superset of ASCII
  2. CP1: Code Page 1252
  3. CI: case-insensitive comparisons, so ‘ABC’ equals ‘abc’
  4. AS: accent-sensitive, so ‘ü’ does not equal ‘u’

Sample queries
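
A couple of hedged examples (table and column names are hypothetical):

    -- Set an explicit collation on a single column
    CREATE TABLE dbo.Products (
        ProductName varchar(50) COLLATE SQL_Latin1_General_CP1_CS_AS
    );

    -- Override collation for one comparison or sort
    SELECT ProductName
    FROM dbo.Products
    ORDER BY ProductName COLLATE SQL_Latin1_General_CP1_CI_AS;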

 

Data compression fundamentals

Data compression trades CPU time for storage, reducing the size of a data table so that it takes less space on disk and moves across the network faster.

  • Only available in the Enterprise or Developer editions before SQL Server 2016 SP1; all editions support it since
  • More beneficial on tables with large numbers of transactions
  • Some data structures and data types compress more easily

Benefits of Compression:

  • Reduce memory footprint of the system
  • Database backups are smaller on disk
  • Database restores take less time
  • Scanning tables is often faster when performing queries

Two types of data compression: Row Level and Page Level

Row-Level Compression:

Converts fixed-length strings and numbers into variable-length data types.

Page-Level Compression:

Applies row-level compression, then adds prefix and dictionary compression techniques.

Prefix Compression:

  • Identifies instances where data values share the same initial characters or numbers.
  • Stores a model string within the header of each data page
  • Compares values in the table with the string and notes how many characters they have in common.


Dictionary Compression

Replaces common sequences of characters across the table with placeholder tokens. Applied to all of the records in a single page on disk.

Data compression is seamless and transparent to the end-user.

Compress data tables
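
A sketch against the hypothetical dbo.Orders table; estimating the savings first is a common precaution:

    -- Estimate how much space page compression would save
    EXEC sp_estimate_data_compression_savings
         @schema_name = 'dbo',
         @object_name = 'Orders',
         @index_id = NULL,
         @partition_number = NULL,
         @data_compression = 'PAGE';

    -- Rebuild the table with page-level compression
    ALTER TABLE dbo.Orders REBUILD
        WITH (DATA_COMPRESSION = PAGE);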

 

Go back in time with temporal tables

Temporal Tables seamlessly copy and time-stamp records into a history table any time there’s an insert, update or delete.
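
A minimal sketch of a system-versioned temporal table (names are hypothetical):

    CREATE TABLE dbo.Prices (
        ProductID int PRIMARY KEY,
        Price     money NOT NULL,
        ValidFrom datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
        ValidTo   datetime2 GENERATED ALWAYS AS ROW END NOT NULL,
        PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
    )
    WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.PricesHistory));

    -- Query the table as it looked at a point in time
    SELECT * FROM dbo.Prices
    FOR SYSTEM_TIME AS OF '2018-01-01';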

 

Work with Database Views

What are views?

Views are virtual tables: they don't typically store data, just instructions, via a saved SELECT statement, for how to process tables and columns to return a set of records that matches the user's requirements.

Considerations when creating views

  • Use field aliases by adding the AS keyword to rename columns
  • Avoid the WHERE clause. Let the end-user supply criteria.
  • Leverage schemas to aid organization and security (create view inside a schema)
  • Never use SELECT * in a view definition.

Most views don't store data, but indexed views do.

Create a view of the data
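
A sketch that follows the considerations above: column aliases via AS, no WHERE clause, no SELECT * (names are hypothetical):

    CREATE VIEW dbo.vEmployeeNames
    AS
    SELECT EmployeeID AS ID,
           FirstName  AS [First Name],
           LastName   AS [Last Name]
    FROM dbo.Employees;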

 

Enhance view performance

Views typically act as virtual tables in that they don’t store any values, only the instructions for how to gather and assemble the requested information from the underlying source data tables. Every time you query a view, SQL Server goes back to the data tables and reprocesses the information in order to display the results.
While this process is efficient from a data storage perspective, searching through all of those underlying tables every time can be slow.

Indexed views, by contrast, store (persist) their own copy of the data to disk. Every time a change is made to one of the underlying tables, SQL Server updates the indexed view's copy so that it's always up to date.

By doing so, the view is essentially pre-baked and ready to serve at a moment’s notice.

The query optimizer automatically looks for opportunities to use indexed views to speed up queries, but you can add an index to almost any view yourself. To create an indexed view, the view must be defined with schemabinding. Schemabinding locks the view to the underlying data tables and won't allow the tables to be modified in a way that breaks the view, such as deleting or renaming a column.
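
A sketch of an indexed view over the hypothetical dbo.Orders table; aggregate indexed views must include COUNT_BIG(*):

    CREATE VIEW dbo.vOrderTotals
    WITH SCHEMABINDING
    AS
    SELECT OrderYear,
           SUM(Amount)  AS TotalAmount,
           COUNT_BIG(*) AS OrderCount
    FROM dbo.Orders
    GROUP BY OrderYear;
    GO

    -- The unique clustered index is what persists the view's data
    CREATE UNIQUE CLUSTERED INDEX IX_vOrderTotals
        ON dbo.vOrderTotals (OrderYear);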

 

Ensure Data Integrity through Constraints

Understand data integrity

Enforcing data integrity will require many different techniques depending on the business rules that need to be supported.

Entity Integrity

  • Each item in the database is represented by a single record
  • Controlled using UNIQUE constraints on a field

Referential Integrity

  • Controls and maintains the relationships between data tables
  • Defines what happens to related records when primary keys are modified or deleted

Implement domain integrity

Domain integrity is concerned with the validity of data at the column level of a table. Domain integrity checks values against the business rules and other criteria that have been established for a field; a combined sketch follows the constraint lists below.

Integrity through Data Types

  • Specifies what will and will not be allowed in a column
  • Highly targeted data types limit the possibilities for data entry mistakes

Integrity through Check Constraints

  • Limit the range of allowable values within the data type
  • Performs a validity check during table inserts

Integrity through Default Constraints

  • Provides a default value when one is not supplied during an insert to the table
  • Default values can be constants or the result of a function

Integrity through Foreign Key

  • Compares values to a list in a related primary key field
  • Acts as a lookup validation rule

Integrity through NULL values

  • Column definitions indicate NULL or NOT NULL
  • NOT NULL columns will force entry of a value that meets all other domain integrity checks and validation
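
A combined sketch showing targeted data types, a check constraint, a default, and NOT NULL working together (names and rules are hypothetical):

    CREATE TABLE dbo.TimeCards (
        TimeCardID  int IDENTITY(1,1) PRIMARY KEY,
        HoursWorked decimal(4,1) NOT NULL
            CONSTRAINT CK_TimeCards_Hours
            CHECK (HoursWorked BETWEEN 0 AND 24),
        EntryDate   date NOT NULL
            CONSTRAINT DF_TimeCards_EntryDate DEFAULT (GETDATE())
    );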

Enforce referential integrity

Referential Integrity checks for a related record in a parent table when storing values in a child table.

Referential Integrity controls changes to data as well.

Enforcing referential integrity protects the validity of the connections between tables in the database. While the check between a foreign key and a primary key is enforced automatically on a relationship, you still need to consider how best to propagate changes across related tables with cascade updates or cascade deletes.
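
A sketch of a foreign key with a cascade rule; it assumes a hypothetical EmployeeID column on dbo.Orders:

    ALTER TABLE dbo.Orders
        ADD CONSTRAINT FK_Orders_Employees
        FOREIGN KEY (EmployeeID)
        REFERENCES dbo.Employees (EmployeeID)
        ON DELETE CASCADE;  -- deleting an employee removes their orders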

Use DML triggers to verify data

Data Manipulation Language triggers, or DML triggers for short, are yet another tool in the data integrity toolkit.

With DML triggers, you can fire off a programmatic response to an INSERT, UPDATE, MERGE, or DELETE statement when working with data in a table.

The nice thing about them is that they can reach out to other objects in the database and process more complex logical routines than the other data integrity techniques we've seen.

Also, instead of simply allowing or rejecting a proposed value, they can modify data on their own to keep everything in sync.
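
A minimal sketch of an AFTER UPDATE trigger (the audit table and names are hypothetical); the inserted pseudo-table holds the new row values:

    CREATE TRIGGER trg_Employees_Audit
    ON dbo.Employees
    AFTER UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;
        -- Record which rows changed and when
        INSERT INTO dbo.EmployeeAudit (EmployeeID, ChangedAt)
        SELECT EmployeeID, SYSDATETIME()
        FROM inserted;
    END;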

 

Create a Secured User Account

Define user roles and permissions

Collectively, all the databases, tables, and views that can contain proprietary information are called securables.

A database administrator, such as the dbo user account, has full access to everything and everyone in the database.

Managing Permissions

  • Permissions are assigned to individual users or roles
  • Issued by GRANT, REVOKE, or DENY statements (see the sketch after this list)
  • Users may gain permissions through multiple security paths
  • GRANT gives access to the securable
  • REVOKE removes a previously granted permission
  • Users with both a GRANT and a REVOKE through different paths will still have access to the securable
  • Users with a DENY permission will not have access to the securable, even if a GRANT is issued through another path
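
A sketch of the three statements against a hypothetical user:

    GRANT SELECT, INSERT ON dbo.Employees TO DataEntryClerk;

    -- Take back a permission that was previously granted
    REVOKE INSERT ON dbo.Employees FROM DataEntryClerk;

    -- DENY wins even if a GRANT arrives through another path
    DENY DELETE ON dbo.Employees TO DataEntryClerk;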

Table Permissions

  • SELECT: view records and perform queries
  • INSERT: add new records to the table
  • UPDATE: modify existing records
  • DELETE: remove data from the table

Database Permissions

  • CREATE TABLE
  • CREATE VIEW
  • BACKUP DATABASE

User Roles

  • Job descriptions for a group of people
  • Individual users can be members of multiple roles
  • All users belong to the Public role
  • Roles can be assigned permissions, and those permissions propagate down to the members

Add a user to a role with T-SQL
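
A sketch with hypothetical login and user names:

    -- Create a database user for an existing server login
    CREATE USER WebAppUser FOR LOGIN WebAppLogin;

    -- Add the user to a built-in database role
    ALTER ROLE db_datareader ADD MEMBER WebAppUser;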

 

 

Create loginless and contained users
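
A hedged sketch of both kinds (names are hypothetical; the contained user requires the database to have partial containment enabled):

    -- Loginless user: no server login; typically used with EXECUTE AS
    CREATE USER ReportProxy WITHOUT LOGIN;

    -- Contained user: authenticates at the database, not the server
    CREATE USER MobileAppUser WITH PASSWORD = 'StrongP@ssw0rd!';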

Implement schema security
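
A sketch of granting permissions at the schema level so they apply to every object inside it (names are hypothetical):

    GRANT SELECT ON SCHEMA::HumanResources TO ReportingRole;
    DENY ALTER ON SCHEMA::HumanResources TO ReportingRole;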

 

Implement Data Types

Store partial data

Add GUID data to a table

Implement the XML data type

Store BLOB data in a table

Implement FILESTREAM access

Implicit and explicit data conversions

 

Create Stored Procedures

Introduction to stored procedures

Write a stored procedure with T-SQL

Add parameters to a stored procedure

Recompile a stored procedure

 

Create In-Memory Tables

Why use in-memory tables?

Create a memory-optimized table

Write a native compiled stored procedure

 

Indexing

Table index fundamentals

Clustered indexes

Non-clustered indexes

Single column and composite indexes

Design spatial and XML indexes

 

Index Optimization

Improve queries with covering indexes

Obtain index statistics

Filtered indexes

Create columnstore indexes

 

Work with Sparse Columns

Introduction to sparse columns

Implement sparse columns

 

Optimize Query Performance

Defragment indexes

Analyze execution plans and tune queries

Leverage the SQL Tuning Advisor

 

Design and Implement User-Defined Functions (UDF)

SQL Server function overview

Return values with scalar functions

Create a table-valued function

 

Monitor Database Performance

Use Dynamic Management Views

Record extended event sessions

Find bottlenecks with Activity Monitor

 
