Visual Studio and SQL Data Developer Launch Deck


Jul 30, 2012




Talk Track

Slide 1

Title Slide:

Today we are here to talk about the possibilities that exist when building applications on SQL Server with Visual Studio. Visual Studio and SQL Server provide an unparalleled experience for developing applications of all sizes, for every type of organization, spanning on-premises and the cloud. With Visual Studio and SQL Server, developers will be able to create compelling applications that deliver business value sooner while maintaining the quality and integrity of your IT assets.

Slide 2

Presentation Objective

The intent of this deck is to provide a 200-level lap around the data developer story for VS + SQL, giving better insight into the capabilities that now exist. The proof points mentioned in this deck go beyond simply new capabilities in the tool. In other words, if value will accrue to Visual Studio by being able to build more interesting applications (e.g., StreamInsight), then it is included in this deck.

Since this is a 200-level deck, you will not find demos or code examples. Deep-dive training kits from DPE have been created, and I encourage you to explore those resources.

Not everything in this deck will be new to VS 2010 and SQL 2008 R2. The goal is to provide an update on the better-together story for developers, incorporating the latest features and offerings while refreshing concepts that have existed for some time.


Visual Studio + Data Development

It is important to consider the significant changes that have taken place in the .NET Framework and Visual Studio over the past couple of years with respect to data development. We have made it easier to build data applications spanning from the data tier to the middle tier all the way to the presentation layer. Whatever your needs may be, you can build applications at all layers and build on those investments over time. In addition, there are three unifying themes that uniquely differentiate our offerings: we are integrating the data development experience across these three tiers through our ALM offerings, open data initiatives, and the cloud with Windows Azure, SQL Azure, and AppFabric, which we will touch on later in the presentation. Let's first drill into the three tiers of the architecture:



Bringing the tooling experience for data developers closer to that of managed code developers is an area in which we have made significant investments. We have made numerous investments in Visual Studio that have made data development a core, integrated part of the Visual Studio experience.

In addition, you need the tooling and data platform capabilities that span departmental and enterprise applications. Finally, building applications that process large volumes of data in near real time is a critical success factor in a number of industries and businesses today. With SQL Server StreamInsight, we are providing the data platform capabilities that enable you to use the tools you know, Visual Studio and .NET, to build impactful applications that will lead to better decision-making capabilities.


We have made numerous improvements in the Entity Framework (EF) to enhance the data development experience when writing data access layer code. We have also made the customization of code generation a lot simpler with the Text Template Transformation Toolkit. Supporting rapid data-centric application development is imperative, and we have made investments in data binding to enable this paradigm for developers.



The presentation layer is where your application meets the customer. From websites to reporting, we have enhanced Visual Studio to allow you to more quickly build out the presentation layer. We have provided the ability to rapidly build websites with Dynamic Data, which includes flexible templates and scaffolding that provide rich data access and table support. In addition, there have been improvements to the ReportViewer control to provide a smoother end-user experience.


ALM: Quality and Agility

Now let's dive into the details about data development with Visual Studio on SQL Server by first looking at the database side. Visual Studio unlocks the power of the .NET Framework and SQL Server in an integrated experience spanning the presentation layer, the data access layer, and business logic.





Rich T-SQL Editor

Bringing the tooling experience for stored procedure developers closer to that of managed code developers is an area in which we have made significant investments. Principally, we use the following guiding principles:

First class during development

Based on understanding of deployment intent

We will have more on deployment when we talk about the platform capabilities.

Let's take a quick look at how we are following those guiding principles for the stored procedure developer.

T-SQL IntelliSense in Visual Studio 2010

The first thing to point out is that this functionality is only available in the Visual Studio 2010 Professional, Premium, and Ultimate SKUs. Also, there are actually two implementations of T-SQL IntelliSense in Visual Studio: “Connected” and “Offline.”

Connected IntelliSense

Connected IntelliSense is active whenever you are in a T-SQL editor and are connected to a SQL Server 2008 instance. Note that SQL Server 2005 is not supported. The identifiers that are listed in the completion list are enumerated by querying the database that is currently selected in the editor's toolbar. All SQL Server 2008 types are supported by Connected IntelliSense.


Offline IntelliSense

Offline IntelliSense is active whenever you are in a T-SQL editor and are editing a .sql file that is part of a SQL Database, SQL Server, or Data-tier Application Component project. The identifiers that are listed in the completion list are enumerated by querying the in-memory model representing the contents of your project. This in-memory model contains all the objects in your project that are defined and saved to disk. A subset of the SQL Server types is supported by Offline IntelliSense:

Inline table-valued function

Multi-statement table-valued function

Scalar-valued function

Stored procedure

User-defined data type

User-defined table type

Unlike C# or Visual Basic IntelliSense, T-SQL Offline IntelliSense does not support syntax auto-completion. For example, after typing CREATE TABLE Foo, IntelliSense will not auto-complete the remainder of the syntax for defining a table.

Improved Debugging in Visual Studio: you get ShowPlan, data tips, the call stack, and watch variables, and you can set breakpoints within Visual Studio when debugging stored procedures.

When writing stored procedures you now have the ability to view the execution plan and client statistics within Visual Studio rather than having to go to SQL Server Management Studio.

The execution plan shows how the query gets executed on the server; for example, at what point the join happens, so that you can see where it might be more optimal to do the join based on performance considerations. You can then optimize the query based on the execution plan. The client statistics show how the query actually ran, for example, how long it took.

Column 1

Managing Database Quality & Change

For larger-scale enterprise applications you often want to take full advantage of SQL Server. Visual Studio Premium/Ultimate gives you the full SQL Server capabilities: if SQL Server supports it, you can do it in Premium and Ultimate. This tier has support for the deeper needs of the enterprise, such as data warehousing. For example, the database developer gets to decide how items are physically represented on disk in addition to the design of the database.

Data developers can perform schema compares, data compares, data generation for unit testing, unit testing, and T-SQL code analysis that will help find bugs for you.

For those new to Visual Studio Database Projects, Database Projects provide three main pillars of functionality:

1.) Managing Change

Versioning of your database schema: full-fidelity offline schema development of your database within the project system.

Incremental Deployment: deploy your database schema against multiple versions of a database schema.

Schema & Data Compare: tooling to compare schemas between databases, database projects, and .dbschema files (the primary output artifact of building a database project). In addition, you will be told what it would take to make one schema look like the other, and you can be given a script to make the target look like the source and apply updates from within the tool. Data compare is also very important: with data compare we compare the data in two different databases and make the target look like the source.

Schema Refactoring: database tooling to support common iterative changes to your database schema within the development cycle (refactoring operations include Expand Wildcards, Rename, Move Schema, and Fully Expand Names).

2.) Managing Quality

Dependency validation and visualization: validation during design and build, ensuring consistency and integrity of your database schema.

Static Code Analysis: analogous to FxCop, but for SQL. The product ships with 14 rules; the inbox rules cover naming, design, and performance issue categories.

Unit Testing and Data Generation: a unit testing framework similar to what is available to the application developer, but with T-SQL, plus data generation capabilities to test your instantiated schema with data. In order for the unit test to be deterministic, you have to have the same data every time. Finally, with database unit testing you can actually write unit tests using the managed unit test framework and execute them against the database, or write the unit tests in SQL.
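The determinism point can be illustrated with a small sketch (Python, hypothetical names, not the actual data generation feature): seeding the generator guarantees the unit test sees identical data on every run.

```python
import random

def generate_rows(n, seed=42):
    """Seeded data generation: the same seed always yields the same rows."""
    rng = random.Random(seed)
    return [(i, f"name-{rng.randint(0, 9999)}") for i in range(n)]

# Two runs with the same seed produce identical test data,
# so assertions in a database unit test are repeatable.
assert generate_rows(5) == generate_rows(5)
```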

Integrating database development into ALM: stored procedures and other database code, databases, and application code are managed by Team Foundation Server (TFS).

3.) DLC Integration into ALM

Team Developer brings the database development lifecycle into the Visual Studio developer experience in a consistent and familiar manner. Many of the same idioms and metaphors now exist for database development that C#/VB developers are accustomed to. Database projects bring the data-tier developer into the team-based development lifecycle (Define, Design, Develop, Test, Deploy, Manage). Lather, rinse, repeat.

Build integration with MSBuild and Team Build.

What we did in Visual Studio for database development is to follow the user model that application developers have had for many years. We define our database schema as how we would like it to exist in the actual database.

Slide 8: Column 2

Enhanced Tooling & Entity Framework


We have been making enhancements to the tooling experience when writing data access layer code. For those of you who may not be familiar with the Entity Framework (EF), you can think of it as your ORM (object-relational mapper).

Data Access Layer Tooling

With Visual Studio and the runtime capabilities in .NET, developers are enabled to more effectively build applications running on SQL Server. Through the Entity Framework (EF), Visual Studio provides the ability to more easily interact with your database through objects. Developers can now focus on doing what they do best: solving problems through code.

Writing data access layer code in Visual Studio is now easier thanks to the added support for lazy loading and pluralization, better n-tier support, and generated SQL improvements.

Let's walk through these improvements.

Lazy Loading

Lazy loading is an object-relational pattern used to defer the initialization of an object until it is needed. The object will not contain all of the data, but it knows how to get the rest when it is needed. Entity Framework now includes support for lazy loading. When you create a new model in VS 2010, entities that offer lazy loading are generated for you. Lazy loading, which is enabled by default, doesn't load each object returned by a query until you actually use it. Lazy loading means that each post isn't loaded until it's actually used, for example to print out the post's PostName property. The interpretation that we on the Entity Framework team had about lazy loading was that for a given query we would not "eagerly" load an entire graph (i.e., load a customer, their orders, order lines, and products...) but instead would, by default, retrieve a shallow version of the queried instances.


Pluralization

In .NET 3.5 SP1, when you create a model from the database, you often end up with EntityType, EntitySet, and NavigationProperty names that are not quite right. Perhaps the most common example is pluralized EntityType names. No one wants to create an instance of an entity whose name is plural, but in .NET 3.5 SP1, if the underlying database table has a plural name, that is exactly what you get. Instead, you generally want the entity name to be singularized. In .NET 4.0 these sorts of issues have been addressed: we've added simple pluralization services. The most obvious place this shows up is when you are building a model from a database, on the Choose Your Database Objects screen. There is now a checkbox which allows you to control whether pluralization/singularization should be attempted.
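As a rough illustration of what a singularization service does, here is a naive Python sketch; the real pluralization services handle far more rules and irregular forms than this:

```python
def singularize(name):
    """Naive English singularization, illustrative only."""
    if name.endswith("ies"):
        return name[:-3] + "y"   # e.g. Categories -> Category
    if name.endswith("s"):
        return name[:-1]         # e.g. Customers -> Customer
    return name                  # already singular

print(singularize("Customers"))  # Customer
```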

Better N-Tier Support

An n-tier design allows you to separate data, business logic, and interaction layers to ensure data integrity and promote maintainability at each tier. The Entity Framework team received a number of requests for improvements to n-tier support. They've taken this feedback and made improvements to the API to allow for n-tier design, as well as a code generation template that generates objects with built-in n-tier features, such as change tracking. The template generates your entities as a set of CLR classes with Windows Communication Foundation (WCF) serialization attributes in place so they can easily be used in conjunction with WCF services.

Generated SQL Improvements

We're constantly trying to improve the readability and performance of the SQL we generate. Numerous simplifications and improvements to the SQL generated when querying with the Entity Framework have been made in EF4. One such improvement is the removal of some extraneous joins. Another is the use of database wildcards for WHERE clause string parameters. For instance, the following LINQ query will translate into a query that uses a WHERE clause with a LIKE statement and the '%' wildcard to search for all Blogs whose BlogName property begins with “Visual Studio”:

var query = from b in ctx.Blogs
            where b.BlogName.StartsWith("Visual Studio")
            select b;

While these changes may seem small, improved SQL generation can result in queries that execute more quickly and put less load on your SQL Servers and network.

Enhanced Stored Procedure Support

Many databases contain stored procedures that perform custom SQL processing. Entity Framework allows you to create a function in your entity model that calls a stored procedure through the Function Import feature. The Function Import feature can now detect the columns returned from a stored procedure and create a corresponding complex type. In addition, existing complex types can be updated when a stored procedure definition changes. The Entity Framework Designer stored procedure wizard will step you through the process of importing your stored procedures as functions.


EF4 now includes Plain Old CLR Object (POCO) support for entities. This offers better test-driven development and domain-driven design support by allowing you to have no EF dependencies in your entities. EF will still track changes for POCO entities, allow lazy loading, and will automatically fix up changes to navigation properties and foreign keys.

You can find out more about POCO support in the walkthroughs posted on the ADO.NET blog.

Visual Studio has a robust extensibility model, and the Text Template Transformation Toolkit is a great example of that.

Text Template Transformation Toolkit Code Generation

In the first version of the Entity Framework, code generation didn't allow for deep customization and wasn't integrated into Visual Studio. The Entity Framework now leverages the Text Template Transformation Toolkit, or T4, which makes customizing code generation easy, flexible, and powerful.

The experience is fully integrated into Visual Studio. Built-in code generation strategies can be chosen by right-clicking on the Entity Framework Designer surface and selecting ‘Add Code Generation Item…’:

You aren't limited to using the code generation strategies that ship in VS 2010; you can now write your own T4 templates or modify the default templates to provide your own code generation experience. Code generation from within the entity designer has been enhanced to support T4, which simplifies the code customization process.

Slide 8: Column 3

Data Binding

Slide 12: Data Binding

Data Binding Overview

Goal of Data Binding: being able to drag and drop data sources, making it super easy to create new applications against your database.

The challenge: it is hard to integrate data in your business logic with the presentation layer while maintaining loose coupling.

What Is Data Binding?

The data provides certain interfaces that the UI knows how to listen for. Data binding is the process that establishes a connection between the application UI and business logic. If the binding has the correct settings and the data provides the proper notifications, then, when the data changes its value, the elements that are bound to the data reflect the changes automatically. Data binding can also mean that if an outer representation of the data in an element changes, then the underlying data can be automatically updated to reflect the change. For example, if the user edits the value in a TextBox element, the underlying data value is automatically updated to reflect that change.
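Conceptually, the notification plumbing looks like this sketch (Python, with invented names; WPF actually builds this on interfaces such as INotifyPropertyChanged and on dependency properties):

```python
class BindableValue:
    """Minimal sketch of data binding: a value that notifies its listeners."""
    def __init__(self, value):
        self._value = value
        self._listeners = []

    def bind(self, listener):
        self._listeners.append(listener)

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        self._value = new
        for notify in self._listeners:
            notify(new)           # bound UI elements refresh automatically

# The "UI element" mirrors the data source without manual refresh code.
label_text = []
model = BindableValue("draft")
model.bind(label_text.append)
model.value = "published"
assert label_text == ["published"]
```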

Silverlight bindings and Web Forms data bindings are supported. WPF and Silverlight are the real beneficiaries of data binding with data services in this release: we have enabled really simple two-way data binding for RIAs. Data binding is now simple for RIAs, and you can easily bind to SharePoint data now as well.

More info as needed…


Windows Presentation Foundation (WPF) data binding provides a simple and consistent way for applications to present and interact with data. Elements can be bound to data from a variety of data sources in the form of common language runtime (CLR) objects and XML. Content controls and items controls have built-in functionality to enable flexible styling of single data items or collections of data items. Sort, filter, and group views can be generated on top of the data.

The data binding functionality in WPF has several advantages over traditional models, including a broad range of properties that inherently support data binding, flexible UI representation of data, and clean separation of business logic from UI.

This topic first discusses concepts fundamental to WPF data binding and then goes into the usage of the Binding class and other features of data binding.

A typical use of data binding is to place server or local configuration data into forms or other UI controls. In WPF, this concept is expanded to include the binding of a broad range of properties to a variety of data sources. In WPF, dependency properties of elements can be bound to CLR objects (including ADO.NET objects or objects associated with Web Services and Web properties) and XML data.

Drag-and-drop data binding was available in WinForms in VS 2005 and is now available for WPF in VS 2010. Prior to this, in WPF it was manual in XAML.

You have a single client making multiple concurrent requests to the server. The default behavior is that these requests will be serialized; with the session-less controller they execute in parallel.

For example, the client might make a single long-running request, but the developer wants other shorter requests (such as Ajax requests) from the client not to be blocked. Normally, these later requests would be blocked from running until the first request completes, due to a lock being taken on the user's session. The goal of the APIs being introduced here is to allow the developer to specify, at a per-controller level, what type of session state will be required for the actions within that controller.

Slide 9: Title Slide

Web & Cloud: Simple, Powerful


Column 1

Data-Tier Applications (DAC)

Let's first take a look at departmental applications in the context of data-tier applications (DAC). It is important to note that the DAC capabilities are for both on-premises and the cloud.

There are a number of pain points that existed around T-SQL development for departmental applications:

Apps are growing faster than DBAs

SQL Server instances are growing too

Need to have an enterprise-wide view of SQL health (server instance & application)

It is difficult to do resource planning

Difficult to do deployment

Need to package apps with databases

Need to record deployment intent

Handing off T-SQL scripts is error prone

Upgrades are challenging

DAC Overview:

The DAC is a great way to resolve these issues. A data-tier application (DAC) is an entity that contains all of the database and instance objects used by an application. A DAC provides a single unit for authoring, deploying, and managing the data-tier objects instead of having to manage them separately. A DAC enables tighter integration of data-tier development with the development of the associated application code. It also gives administrators an application-level view of resource usage in their systems.

With the new Data-tier Application project in Visual Studio, data developers are able to package the full set of data-tier application components (DAC) into a single file that can be deployed through Visual Studio or by a DBA.

Developers can focus on writing apps rather than being constrained or throttled by physical layer considerations.

You can reverse engineer the T-SQL, incorporate it into Visual Studio, and repackage your app into a dacpac to "bring it under management."

Developers can now specify what the deployment should be within Visual Studio based on characteristics of the application that they are building. They can deploy within Visual Studio, or they can provide a dacpac, which has all of the required deployment information, to the DBA, and then the DBA can deploy it.

Declarative model for developers: they don't have to get into the details of altering or upgrading the database, because of the abstraction from physical layer considerations like partitioning of drives and deciding which operating system SQL Server can run on.

Another important item to look at is the upgrade process, where we have made significant improvements. What happens when you want to move to a new version of your database application? It is typically very difficult to migrate to the new schema because there are new table structures to take care of, and typically a DBA has to handle that. But now we are handling that in the DAC: upgrade your dacpac and it will upgrade the table structures for you. The dacpac is a single unit of deployment for on-premises and the cloud, and SQL Server handles the upgrade of the scripts for you.

From On-Premises to the Cloud: change the connection string for your DAC to use Visual Studio to develop cloud applications. You can work with data-tier application projects for SQL Server or SQL Azure from Visual Studio, providing your developers with a flexible development model.

Data-tier application elements:

Application properties (application name, version)

Database object definitions

Instance-level object definitions (logins, collations, compatibility level)

Server selection policy

Files & scripts (pre/post-deployment scripts)

A .dacpac is a .zip file:

Contains multiple XML files

Does not contain any user data

For initial deployment:

Supply the name of the instance for deployment

The server selection policy is checked

The database and objects are created:

One data file (3MB initial size, 1MB growth increment, unlimited growth)

One log file (1MB initial size, 10% growth, limited to 2TB)

Recovery model taken from instance defaults

It is important to note that this is for departmental apps, as you can't specify the size of these items; it is not optimized for those larger enterprise applications where you need to be concerned about such sizing details.

Slide 11: Column 2


Customers have asked us for more options to use existing code investments for data access layer code,
more ways to integrate data into their applications and more ways to share those data with other
applications in a secure and scalable fashion.

g Your Data
: OData

: T
he way that data interfaces are structure
d is very application specific, which

makes it

difficult to share data across applications
, while m

security of the data and at the same time

loose coupling betw
een the applications.

By creating Open Data Protocol (OData) enabled
data services, developers can build easily queryable and updateable services that can run through
business logic to maintain the same level of security and control that was there previou

Let’s take a look at exactly what OData is and how it can unlock your data so that you can capture more
value and create compelling applications.

Before delving into OData, let's set a little historical context. OData represents the continuation of a trend that has been in play for over 10 years.

First, there are obviously a number of ways to share data. Two that come to mind are:

Files. But files, most of the time, are impractical for the integration of online applications.

Libraries. Client libraries are great so long as you have the library for your environment, e.g., Windows or Linux, managed or native.

With the emergence of the web there was a need to look at new ways to share and aggregate data. RSS (Really Simple Syndication) was created to ease the syndication and aggregation of content on the web. Then Atom came along to address problems including confusion over XML element definitions. Of course, within Atom you then had AtomPub, which enabled you to easily publish these data feeds over HTTP. Atom and AtomPub gave us a foundation from which to build additional capabilities to ease the exchange of data over the web.

OData adds the following to AtomPub:

A convention for representing structured data

A resource addressing scheme and URL syntax

A set of common query options (filter, sort, etc.)

Schema describing resource structure, links, and metadata

Payload formats and semantics for batch and “unit of work” requests

Alternate representations of resource content (JSON)

With these extensions to AtomPub, developers can now quickly service-enable their data and write applications that consume data from a variety of OData-enabled services.

Let's dig into the background on what OData is.

OData is an HTTP-centric protocol, just like AtomPub, that is designed to exchange data over the web. HTTP-based protocols are great because they are very simple, stateless, and text based. The goal was to not invent new items when they weren't needed, because there is a lot of existing infrastructure in place that works very well with HTTP, including proxies, firewalls, and security. A key goal of OData is to have a low barrier of entry; in other words, “do the minimum possible to get to interoperability.”

A core tenet in the development of OData was that it had to be RESTful. In other words, OData needs to be scalable, decoupled, and layered. In fact, the web itself is a canonical example of REST. REST is not a protocol, is not a standard, and is not a format. REST is an architectural style, a way of thinking about distributed systems. In a RESTful system the concept of resources is very important. Data is organized in terms of resources; resources represent the state of each one of the entities in the system that you are modeling; every resource in the system has an address; and each resource has a uniform interface constraining actions to a certain set of pre-agreed items. It is very important to note that a data service is not a gateway to the database: you will still be interacting through business logic.

Making it easy to inspect your data was critical. Opaque URLs are great in HTTP because they help to reduce coupling. As a result, we have made it very easy to type an address and then get your requested data, because we are largely based on AtomPub. In addition, we have also included JSON support for JavaScript purposes, to enable AJAX web programming, due to the increasing need of applications to send data to the client without refreshing the client browser.

Let's now take a look at an example of what it would take to OData-enable your data service.

Just like in AtomPub, there are certain items that OData is looking for. At the very minimum, you need to have the concept of rows. You can use OData just as a way of representing a bunch of rows, typically available on an HTTP endpoint and retrievable using HTTP GET, although a file on disk could work as well so long as content type negotiation was handled. A set of rows in OData is represented using an Atom feed, where each entry has the usual metadata plus structured data encoded as name/value pairs inside content. Beyond rows you can add additional behavior depending on the capabilities of your end system, what clients you want to enable, and the requirements of your particular scenario.

These additional OData capabilities include:

Query options: you can choose to support simple query options such as filtering ($filter), sorting ($orderby), paging ($top, $skip), projections ($select), and inline expansion ($expand).

What do you get? Developers using the Data Services client for .NET would be able to use LINQ against your service, at least for the operators that map to the query options you implemented.
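To show the flavor of these query options, here is a small sketch of composing an OData URL (Python; the service URL and entity set are made up for illustration, and a real client would also percent-encode the values):

```python
def odata_url(base, entity_set, **options):
    """Compose OData query options ($filter, $orderby, ...) onto a URL."""
    query = "&".join(f"${k}={v}" for k, v in sorted(options.items()))
    return f"{base}/{entity_set}?{query}"

url = odata_url("https://example.org/service.svc", "Blogs",
                filter="startswith(BlogName,'Visual Studio')",
                orderby="BlogName", top=10)
print(url)
```

Typing such a URL into a browser against a real OData service returns the matching entries as an Atom feed (or JSON, if requested).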

Service documents: you can choose to add a service document to collect links to the multiple collections that make up a data service.

What do you get? With this, clients can discover what collections of data your service exposes. Some clients, such as PowerPivot, will use this to let the user browse data sets.

Metadata: you can choose to add rich metadata describing the collections, the shapes of the resources in each collection, and their associations, through the Entity Data Model.

What do you get? If you have metadata then Visual Studio can automatically generate .NET classes for your service types, and a service entry point with properties for each of your top-level sets. The OData visualizer also allows developers to view and understand the structure of your data service.


Updates: you can choose to support updates through the HTTP POST/PUT/DELETE verbs. Most services that allow updates don't allow anonymous access, so I can't easily share an example, but it would be POSTing or PUTing the same payload you GET. For DELETE, only the URL is needed. For all update operations you can use ETags for optimistic concurrency control if that's required for your scenarios.

What do you get? Clients that support updates, such as the .NET, AJAX, Silverlight, PHP, and Java ones, would be able to submit changes to the server.
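The ETag mechanics can be sketched like this (Python; the in-memory store, the key format, and the hash-based ETag are all invented for illustration, not how a real OData server computes them):

```python
import hashlib

store = {"Blogs(1)": {"BlogName": "Data Dev"}}

def etag_of(resource: dict) -> str:
    """Derive a version tag from the resource's current contents."""
    return hashlib.md5(repr(sorted(resource.items())).encode()).hexdigest()

def put(key: str, new_value: dict, if_match: str) -> int:
    """Apply the update only if the caller saw the current version."""
    if if_match != etag_of(store[key]):
        return 412                # Precondition Failed: someone else wrote first
    store[key] = new_value
    return 204                    # No Content: update applied

tag = etag_of(store["Blogs(1)"])
assert put("Blogs(1)", {"BlogName": "Data Dev v2"}, tag) == 204
assert put("Blogs(1)", {"BlogName": "stale write"}, tag) == 412
```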

Batching: more sophisticated features in OData include batching and changesets for round-trip optimization and unit-of-work support.

What do you get? Clients that support batching (most of them do) would get not only optimizations for multiple queries, but also the ability to save a whole unit of work (a set of changes that are applied atomically).

Let's now take a look at what it would take to create a server-side OData-enabled service in Visual Studio and .NET.

Building an OData server:

WCF Data Services is the framework for creating OData endpoints and services in .NET.



build a servi
ce in .NET

Add new project item

Data Service (WCF Data service). T
he template sets up
a service
. Y
have to tell it where to get th

data from
. You
provide it an entity
generated class and then it will be able to pull t
he data from there

Optionally, you can insert a query interceptor. This allows you to interact through the standard GET, PUT and POST verbs. The interceptor lets you catch things as they happen and then customize the behavior, controlling the state as the data comes into or out of the system.

You can write code within the query interceptor to determine whether an item should be included in a returned data set. For example, you can specify to only show those items that have been flagged as public.
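As an illustrative sketch of that "public items only" interceptor (the service, model, and property names are assumptions, not from the deck), a WCF Data Services class might look like:

```csharp
using System;
using System.Linq.Expressions;
using System.Data.Services;

// Hypothetical Entity Framework context with a Products set carrying an IsPublic flag.
public class ProductsService : DataService<CatalogEntities>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        // Expose the Products set for reading; everything else stays hidden by default.
        config.SetEntitySetAccessRule("Products", EntitySetRights.AllRead);
    }

    // Runs for every query against Products; only public rows are returned.
    [QueryInterceptor("Products")]
    public Expression<Func<Product, bool>> OnQueryProducts()
    {
        return p => p.IsPublic;
    }
}
```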

Following these steps will give you an OData service.

Let’s take a quick look at SharePoint as well.


SharePoint, on-premises and online: Every SharePoint 2010 site is an open data server out of the box. You just need to install the ADO.NET Data Services (WCF Data Services) runtime on it. You can then return SharePoint lists through OData with a trivial client; for example, with a JSON-capable client you can access data from SharePoint. Previously all that data was locked down in SharePoint; now you can get it and make the most of it.

You are inserting data into the SharePoint system, not directly into the database. When you insert data, for example, SharePoint will know the user that entered it, and if there are user constraints or other security mechanisms in place, then you may not be allowed to enter data into the system.

Other OData servers include:

SQL Reporting Services

Azure table storage

SQL Azure

MediaRoom

Open Government Data Initiative


All you need is an HTTP stack, meaning you could query the data service using the OData query capabilities from within a browser simply by typing URLs. The only open question is how much complexity you want to add with your implementation; in other words, how much code do you want to write, and how much do you want us to write? Higher up in the stack you write less code, and vice versa.


have provided a number of client libraries to facilitate the OData enablement of your data services.
Let’s take a l
ook at the capabilities within Visual Studio this time from the client perspective.

Visual Studio

Visual Studio knows OData. For example, you can have a WPF client within VS (it could be a WinForms or Silverlight client as well). You then click Add Service Reference and give an OData URL for any open data server that exposes metadata, or you can just use the client without code. For example, put in a SharePoint server URL and Visual Studio discovers metadata about the SharePoint server. It will then go fetch the metadata and generate code, and you have a client with types for each one of the resource types. You have a strongly typed experience which is very familiar to a Visual Studio developer.

You can then write code to access the SharePoint lists in your client, for example.

You also have full support for LINQ queries.
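For instance, here is a hedged sketch of what that client code can look like (the context and entity names are whatever Add Service Reference generated; these are made up):

```csharp
using System;

// Generated data service context pointed at a hypothetical OData endpoint.
var ctx = new CatalogEntities(new Uri("http://example.com/MyService.svc/"));

// This LINQ query is translated into an OData URL, roughly:
//   /Products?$filter=UnitPrice gt 20&$orderby=Name
var query = from p in ctx.Products
            where p.UnitPrice > 20
            orderby p.Name
            select p;

foreach (var product in query)
    Console.WriteLine(product.Name);
```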

You can view a service reference and choose View in Diagram, which will show a view of your metadata in a diagram. This exposes the schema of the system visually, using the metadata. You can see the entity types and resource types and their relationships.

OData does a simple translation of URLs; there isn’t a lot of magic going on under the covers. This enables Visual Studio to work with OData servers of any kind.

Other out-of-the-box OData clients include:

Excel + PowerPivot

Client libraries and toolkits (several of them open source) for platforms such as .NET, AJAX, Silverlight, PHP and Java


Third-party adoption of OData:

WebSphere eXtreme Scale

Telerik OpenAccess ORM



Supporting unstructured data

SharePoint has the need for this: you don’t want to create a separate interface for document libraries. AtomPub has first-class support for this through “media link entries”. If you are using our runtime, there are interfaces that you can implement to expose your blobs.
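In WCF Data Services, the hook for this is the IDataServiceStreamProvider interface; here is a rough sketch (the entity shape and the on-disk storage scheme are invented for illustration):

```csharp
using System.Data.Services;
using System.Data.Services.Common;
using System.Data.Services.Providers;
using System.IO;

// Marking an entity type with [HasStream] turns it into a media link entry;
// the runtime then calls your stream provider for the binary content.
[HasStream]
public class Document
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class DocumentStreamProvider : IDataServiceStreamProvider
{
    public Stream GetReadStream(object entity, string etag,
        bool? checkETagForEquality, DataServiceOperationContext operationContext)
    {
        var doc = (Document)entity;
        // Hypothetical blob store on disk, keyed by entity id.
        return File.OpenRead(Path.Combine(@"C:\Blobs", doc.Id + ".bin"));
    }

    // Remaining interface members (GetWriteStream, GetStreamContentType,
    // DeleteStream, and so on) omitted for brevity.
}
```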

Query Options

Projections: you get to vertically slice the result set, which allows you to get more or less data depending on your needs. You just write LINQ on the client, and the LINQ translator in OData will handle it for you.

Row count: get the number of rows that match a query, so you can do it all at once and not have to make a separate request.

Inline count: return the total count inline, alongside a page of results.

Server-driven paging: the service can say how many rows you get, and once you hit that limit you get a series of continuation URLs that allow you to dig into the subsequent results. This is part of the Atom standard.
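Sketching these options as URLs against a hypothetical service:

```
GET /MyService.svc/Products?$select=Name,UnitPrice          -- vertical slice (projection)
GET /MyService.svc/Products/$count                          -- row count only
GET /MyService.svc/Products?$inlinecount=allpages&$top=10   -- a page plus the total count
GET /MyService.svc/Products?$skip=20&$top=10                -- client-requested paging
```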

Excel PowerPivot

PowerPivot lets an information worker do BI visualization of data from all sources.

PowerPivot can load from feeds, which go through the business logic to ensure that you are returning meaningful information.

It can also correlate data from SharePoint, which can be used with your application data to do a “data mashup” within Excel.

SQL Reporting Services

All computed data in reports is now exposed as an OData feed and can be exported to be consumed by Excel, etc. You can use Reporting Services as a way to expose data.

Slide 11: Column 3

Dynamic Data

The problem: it is hard to quickly build data-driven web apps with flexible templates. It isn’t necessarily easy to build data-driven websites quickly with templates that give you the speed and the level of control and customization that you need. Dynamic Data solves a lot of the problems associated with building data-centric websites.

ASP.NET Dynamic Data is a framework that lets you create extensible data-driven Web applications easily. It does this by automatically discovering database schema information at run time and deriving UI behavior from it. A scaffolding framework provides a functional Web site for viewing and editing data.

You can easily customize this framework by changing elements or creating new ones to override the default behavior. Existing applications can easily integrate scaffolding elements by using ASP.NET pages.

You can use ASP.NET Dynamic Data to create new data-driven Web sites that require minimal code and that take full advantage of the capabilities of the Dynamic Data framework. You can also select specific Dynamic Data capabilities to add to existing Web sites.
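To make the "minimal code" point concrete, registering a model at startup is roughly all it takes; this is a sketch (the data context name is hypothetical):

```csharp
using System.Web.DynamicData;
using System.Web.Routing;

// Global.asax.cs of an ASP.NET Dynamic Data project.
public static void RegisterRoutes(RouteCollection routes)
{
    var model = new MetaModel();

    // ScaffoldAllTables = true generates list/edit/details pages for every table.
    model.RegisterContext(typeof(NorthwindDataContext),
        new ContextConfiguration { ScaffoldAllTables = true });

    routes.Add(new DynamicDataRoute("{table}/{action}.aspx")
    {
        Constraints = new RouteValueDictionary(
            new { action = "List|Details|Edit|Insert" }),
        Model = model
    });
}
```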

Dynamic Data offers the following:

Web scaffolding that can run a Web application based on reading the underlying database schema. Dynamic Data scaffolding can generate a standard UI from the data model.

Full data access operations (create, update, remove, display), relational operations, and data validation.

Automatic support for foreign-key relationships. Dynamic Data detects relationships between tables and creates UI that makes it easy for users to view data from related tables. For more information, see Walkthrough: Creating a New ASP.NET Dynamic Data Web Site Using Scaffolding.

The ability to customize the UI that is rendered to display and edit specific data fields. For more information, see How to: Customize ASP.NET Dynamic Data Default Field Templates and How to: Customize Data Field Appearance and Behavior in the Data Model.

The ability to customize the UI that is rendered to display and edit data fields for a specific table. For more information, see How to: Customize the Layout of an Individual Table By Using a Custom Page Template.

The ability to customize data field validation. This lets you keep business logic in the data layer without involving the presentation layer. For more information, see How to: Customize Data Field Validation in the Data Model.

Support for table relationships. Dynamic Data detects relationships between tables and creates a UI that enables the user to view data from related tables. Dynamic Data supports displaying and editing data in a many-to-many relationship. For more information, see Walkthrough: Displaying and Editing Data with a Many-to-Many Relationship. Also, Dynamic Data detects the inherited fields in the data model and provides scaffolding for the inherited tables. For more information, see Walkthrough: Mapping Table-per-Hierarchy Inheritance.

Slide 12: Title Slide

Better Insight

Column 1

SQL Server StreamInsight

More and more data are being captured through a variety of mechanisms, but it is still difficult to write applications that take advantage of that data. With Visual Studio and SQL Server StreamInsight you can build applications that query large data streams in near real time to create meaningful insights for your business. Make better informed decisions when you need to.


You need to be able to respond to events as they occur:

“I wish I could trade as soon as the event occurred.”

“I wish I could provide the right content and ads based on what the user is looking at now on my web page.”

“I wish I knew exactly when the breakeven happened.”

“I wish I knew as soon as it crossed the critical limit.”

Find that one event: there are so many events coming in. How do I identify the events that I care about?

Combine events from different sources: correlate events from different event sources to derive a more holistic view.

StreamInsight provides the ability to process large volumes of events across multiple data streams, supports Visual Studio, .NET, and LINQ, and can integrate with other platforms using adapters.

StreamInsight specializes in low latency and triggered response actions.

There is a LINQ provider for StreamInsight. You define your questions using LINQ, the data comes to LINQ, and then you get answers out. You get the answers through a pluggable and extensible adapter framework. In a typical LINQ scenario the data sits in one place; you ask a question and the answer comes back, and the data are not changing that much. StreamInsight is much more temporal.

We are capturing data from a variety of mechanisms: sensors, operational data (IT ops), and app analytics, in real time and in aggregate (web analytics). For example, in the web analytics space you can find out who is watching what, and for how long. You can answer those questions with traditional relational tools, but you can’t do it fast enough. StreamInsight enables developers to build apps that will expedite the decision process.

An example question: “In the last 2 minutes, how many times was this pattern (an HTTP status code that wasn’t 200) observed?” If I see more than 5 of those in a minute, that is a business-relevant event (an error, in this example), and you can create error thresholds that trigger an action and send it to someone who cares.

Stringing together of the events takes place in C# or any .NET language, or through an API call to the StreamInsight engine. You could use PowerShell, but most likely it will be in VS.
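A hedged sketch of that error-threshold query in StreamInsight’s LINQ dialect (the event type, the input stream, and the window operators are assumptions, and exact operator names may differ by release):

```csharp
using System;
using Microsoft.ComplexEventProcessing.Linq;

// webLog is a CepStream<LogEvent> fed by an input adapter; LogEvent is assumed
// to carry an HTTP StatusCode field.
var errorsPerMinute = from e in webLog
                      where e.StatusCode != 200
                      group e by e.StatusCode into g
                      from win in g.TumblingWindow(TimeSpan.FromMinutes(1),
                                                   HoppingWindowOutputPolicy.ClipToWindowEnd)
                      select new { StatusCode = g.Key, Count = win.Count() };

// Threshold: surface only windows that cross the business-relevant limit;
// an output adapter downstream would forward these as alerts.
var alerts = from c in errorsPerMinute
             where c.Count > 5
             select c;
```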

Here are some of the highlights:

Performance and data throughput

Highly parallel execution platform with in-memory caches

Incremental result computation

All processing triggered by incoming events, which avoids polling overhead

Can handle out-of-order events and can present the data in a time-ordered sequence in an automated way

Can incorporate static reference or historical data

Deployment and Management

Multiple deployment scenarios: fully integrated via an embedded DLL, or a standalone server (multiple apps and users)

Monitoring and management built in: a management interface, diagnostic views, and a manageability framework that allows for remote monitoring

Standalone event flow debugger: you can take an event and see what it affected downstream, and you can also do a root-cause analysis starting downstream and working back up

Slide 13: Column 2

Reporting Services

You can embed reports directly into your applications and customize the look and feel of those reports to meet your business needs, which can greatly contribute to your application’s value. You can build better experiences for reports embedded in applications using the Visual Studio ReportViewer control for WinForms and Web Forms.

First, here is a quick primer on reporting modes.

Local processing mode refers to report processing that is performed by the ReportViewer control in the client application. All report processing is performed as a local process using data that your application provides. To create the reports used in local processing mode, you use the Report project template in Visual Studio. For more information, see Configuring ReportViewer for Local Processing.

Remote processing mode refers to report processing that is performed by a SQL Server 2008 Reporting Services report server. In remote processing mode, the ReportViewer control is used as a viewer to display a predefined report that is already published on a Reporting Services report server. All processing, from data retrieval to report rendering, is performed on the report server. To use remote processing mode, you must have a licensed copy of SQL Server 2008. For more information, see Configuring ReportViewer for Remote Processing.



We deliver both local and server modes. Local mode usually runs one version behind server mode in terms of what kinds of reports you can build. For example, in R2 (KJ), local mode won’t have maps.
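As a sketch, switching the ReportViewer control between the two modes looks roughly like this (the report paths, server URL, and data source names are illustrative, not from the deck):

```csharp
using System;
using Microsoft.Reporting.WinForms;

// Local processing: the control renders an .rdlc embedded in the application,
// using data the application supplies.
reportViewer.ProcessingMode = ProcessingMode.Local;
reportViewer.LocalReport.ReportPath = @"Reports\Sales.rdlc";
reportViewer.LocalReport.DataSources.Add(
    new ReportDataSource("SalesDataSet", salesTable));

// Remote processing: a Reporting Services report server does all the work,
// from data retrieval to rendering.
reportViewer.ProcessingMode = ProcessingMode.Remote;
reportViewer.ServerReport.ReportServerUrl =
    new Uri("http://reports.example.com/reportserver");
reportViewer.ServerReport.ReportPath = "/Sales/QuarterlySummary";

reportViewer.RefreshReport();
```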

The ReportViewer control in Visual Studio provides hooks to customize the look and feel of the report, to parameterize items, and to turn features off and on. AJAX support represents a great improvement because the report experience is much more fluid. For example, you can have drill-down, as in a master-detail report: you can have a list of master records and just click the + sign (toggle) to see more data. In the past, every time you did a toggle it would refresh the whole report; now it just refreshes the area that needs to get updated, which makes the experience much more fluid and snappy.

You can also see that we now have broader browser support, enabling your application to be more easily consumed.

Slide 14

PowerPivot for Excel

PowerPivot gives users the power to create compelling self-service BI solutions, facilitates sharing and collaboration on user-generated BI solutions in a Microsoft SharePoint Server 2010 environment, and enables IT organizations to increase operational efficiencies through Microsoft SQL Server 2008 R2-based management tools.

Using a Reporting Services report as a data feed

In SQL Server 2008 R2, every single Reporting Services report can be used as a data feed for your Gemini models. This means that users can get refreshable data feeds sourced from all the places where SSRS can get data, without any further work required to make data available for Gemini.

Once you have found a useful report, you can add it to your Gemini model by clicking the new orange “Export to Data Feed” button on the toolbar. This will launch Excel and take you straight into the import wizard. If you already have Excel workbooks open, you will be prompted either to pick an open model to add the feed to, or to create a brand new model.

Using a data feed from the Internet

Reporting Services is just one of the many places where you can get data feeds. You can also use data feeds from the Internet or other sources, as long as they follow the REST and Atom model (see the next section for technical details on developing a custom data feed). One really cool place for getting data is the Open Government Data Initiative (OGDI), which is built on the Azure Services Platform.

Developing a custom data feed

If you are a developer, you are probably already wondering how you can enable Gemini users to get data feeds from your applications. The good news is that there are lots of easy ways to support this. The basic technology is REST and Atom, so if you already have web services support you are in good shape to get started. One of the simplest ways to build a data feed is to use the great support for syndication in WCF. You can add your custom columns to the feed using the extensibility mechanisms that they supply; see one of the OGDI feeds mentioned above for an example (you may need to turn off feed reader view in your browser to be able to see the raw XML).
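One minimal way to shape such a feed with WCF’s syndication classes looks like this (the titles, item content, and file name are invented for illustration):

```csharp
using System.Collections.Generic;
using System.ServiceModel.Syndication;
using System.Xml;

// A tiny Atom feed that a feed-aware client (such as PowerPivot) can import.
var items = new List<SyndicationItem>
{
    new SyndicationItem("Contoso", "Region: West; Sales: 1200", null),
    new SyndicationItem("Fabrikam", "Region: East; Sales: 900", null)
};
var feed = new SyndicationFeed("Customers", "Sample customer data feed", null, items);

using (var writer = XmlWriter.Create("customers.atom"))
{
    feed.GetAtom10Formatter().WriteTo(writer);  // serialize as Atom 1.0
}
```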

Slide 15: Tying it all Together

You need a platform that will enable you to build a variety of applications, from departmental to enterprise scale, that will run on-premises and in the cloud. You also need a way to integrate your data development efforts into your overall software development lifecycle (SDLC) or application lifecycle management (ALM) process. Let’s take a look at how we are enabling this in Visual Studio and SQL Server.

Slide 16: Visual Studio & Data Development

To recap, you have seen how we are making data development easier across the three tiers with tooling and increased flexibility.

Data tier: Bringing the tooling experience for data developers closer to that of managed code developers is an area in which we have made significant investments. We have made numerous investments in Visual Studio that have made T-SQL development a core, integrated part of the IDE. In addition, you need the tooling and data platform capabilities that span departmental and enterprise applications. Finally, building applications that process large volumes of data in near real time is a critical success factor in a number of industries and businesses today. With SQL Server StreamInsight, we are providing the data platform capabilities that enable you to use the tools you know, Visual Studio and .NET, to build impactful applications that will lead to better decision-making capabilities.


Middle tier: We have made numerous improvements in the Entity Framework (EF) to enhance the data development experience when writing data access layer code. We have also made the customization of code generation a lot simpler with the Text Template Transformation Toolkit. Supporting rapid data-centric application development is imperative, and we have made investments in data binding to enable this paradigm for developers.



Presentation tier: The presentation layer is where your application meets the customer. From websites to reporting, we have enhanced Visual Studio to allow you to build out the presentation layer more quickly. We have provided the ability to rapidly build websites with Dynamic Data, containing flexible templates and scaffolding that provide rich data access and table support. In addition, there have been improvements in the ReportViewer control to provide a smoother end user experience.