A primary goal of PaaS is to remove the need to manage the underlying virtual machine.
This allows customers to focus on the real value of the application, which is the
functionality that it provides, not the underlying operating system or virtual machine.
Limits
Manage
Deploy
Understand tradeoffs and decision points on the following:
• Upgrade Domains
• Deployment Slots
• Web Deploy
• Continuous Integration
Understand the methods of monitoring PaaS workloads:
• IIS Logs
• Azure Diagnostics
• IIS Failed Request Logs
• Windows Event Logs
• Performance Counters
• Crash Dumps
• Custom Error Logs
• .NET EventSource
• Manifest-based ETW
• Application Insights
• Auto Scaling: The application environment does not automatically increase or decrease role instances in response to changes in load
• Load Balancing: Application instances are not load balanced by default
• Density: The Service Management limit on total cloud services per subscription is 20
• This cloud service model follows the traditional client application deployment model, such as the appx format used by modern Windows applications.
• The core idea is to build, version, and deploy the service package unit (cloud service).
• This makes it easier for the DevOps or release management team to deploy updates to the cloud service application, as well as to roll back unforeseen side effects from an application update.
• Scaling cloud services in the PaaS model is trivial, as the application and service definition are both wrapped in a package.
• Deploying more instances is simply a matter of telling the Azure fabric how many instances you want.
Mandatory
• Place all assets in cloud services, including code and other installations required to run the application. Everything must be included in the cloud service package and installation scripts.
Recommended
• Consider the deployment models that will be used when updating the application. A few options exist, so it's important to understand the pros and cons.
Optional
• Consider running multiple deployments of your cloud service. It's possible to have multiple deployments in the form of production, test, and staging.
Application Type / Description
Web Role
- Used primarily to host and support applications that target IIS and ASP.NET. The role is provisioned out of the box with IIS installed and can be used to host front-end web-based applications.
Worker Role
- Used primarily to host and support service-type applications that target backend processing workloads. These can be long-running processes and can be thought of as Windows Services in the cloud.
Web Roles
• Tailored for IIS-based applications.
• Configure the scale unit for the instance and ensure that multiple instances (at least 2) are used for production workloads. This is done by setting the instance count in the service configuration.
Worker Roles
• Tailored for service-type applications (non-web based).
• As such, the error handling that would be required for a "lights out" service-type application should be employed.
• If exceptions are not handled in the service inner loop, the role instance will be restarted, causing downtime for processing.
Mandatory
• Decide on the number of instances for web and worker roles. Web and worker roles require at least two instances to provide fault tolerance for the automatic maintenance nature of PaaS.
Recommended
• Consider web and/or worker roles if the application requires installation of binaries on the web or application server.
Optional
• Understand that Virtual Networking is commonly used to allow the communication needed for databases, management services, and other services, but it's not a hard requirement for deploying web or worker role applications.
• Azure Batch is comprised of two primary components: the Azure Batch APIs and Azure Batch Apps.
• The Azure Batch APIs focus on the ability to create pools of virtual machines and define work artifacts (work items) that will be scheduled for processing.
Mandatory
• Define the pool of VMs that will perform the underlying work for Azure Batch jobs.
Recommended
• Analyze the workload to determine which model is the better fit: Azure Batch or Azure Batch Apps.
Optional
• Leverage the REST API to output monitoring and telemetry to existing systems.
Azure Batch: Create pools of VMs and define work artifacts (work items) that will be scheduled for processing. Azure Batch brings with it an API to support the infrastructure; there is no need to manually build servers and software libraries to handle job scheduling.
Azure Batch Apps: Azure Batch Apps takes Azure Batch a step further. The goal is to publish an "app" that is essentially a service: data can be fed to it, and it will run as needed. A new ability to monitor the jobs through the portal and REST APIs is provided.
Mandatory
• You must self-host your application/services. For a Web API, this typically means self-hosting in something like OWIN.
Recommended
• When using partitions, spend extra time understanding the most appropriate way to evenly disperse the data across the partitions, then choose an appropriate partitioning key.
Optional
• Try it out for free using Party Clusters.
Mandatory
• Determine a strategy for on-premises, cloud, and hybrid clustering for HPC workloads.
Recommended
• Scale the application resources dynamically, to take advantage of extreme-size virtual machines only when it makes sense.
Optional
• Make changes to the applications to allow disconnecting the tiers to take advantage of features such as queuing, allowing independent compute clusters to scale.
Previously, another type of PaaS application was Azure Websites. These have now been integrated into a new model called Azure App Service. App Service comprises the following subcomponents:
• Web Apps
• Mobile Apps
• API Apps
• Logic Apps
• New service definition for what was previously known as Azure Websites.
• The websites model is based on further decoupling from the underlying infrastructure of traditional PaaS applications.
• This model removes the customer from any connection with the underlying VM that is hosting the application. No RDP.
• Hence, installation of components and software is done only through the Azure Portal (Marketplace), which essentially packages the software to deploy to the website.
• Websites allow for the following scenarios and deployment models:
Scenarios: provide a platform to host web applications and web services; run backend-type processes via a service offering in WebJobs.
Deployment models:
1. Manual: file copy, FTP, and WebMatrix
2. Local Git: via the Kudu environment
3. Continuous Integration: Git or TFVC
• App Service is one of the latest models to be employed on Azure.
• It is focused on the idea of simplifying the management and cost of running a variety of services in PaaS.
• A service performance level can be set at the app service level, and the various services can then be deployed inside this app service.
• Hence, a web app could be deployed that uses an API app or Logic app, and the cost and performance levels are set at the app service level.
• This model simplifies deployment because each app doesn't need to be configured and billed separately.
Mandatory
• Understand that these lighter PaaS services do not allow direct access to the underlying virtual machines. No installation of components on the underlying web server (outside of the application folder).
Recommended
• Match the service offering with the type of workload. API apps differ from Web apps in that one needs more focus on the backend, while the other focuses on the front end.
Optional
• Plan for capacity needs. Although some thought should be given to how many instances or what sizes should be used, these can easily be changed later. The focus here is on rapid deployment.
Azure Mobile Engagement is a software-as-a-service (SaaS) user-engagement platform that provides data-driven insights into app usage and real-time user segmentation, and enables contextually aware push notifications and in-app messaging. It enables you to create and manage a targeted engagement strategy for your mobile applications.
1. Contextually aware push notifications and in-app messaging: This is the core focus of the product - performing targeted and personalized push notifications. For this to happen, we collect rich behavioral analytics data.
2. Data-driven insights into app usage: We provide cross-platform SDKs to collect behavioral analytics about the app users. Note the term behavioral analytics (as opposed to performance analytics), because we focus on how the app users are using the app. We do collect basic performance analytics data about errors, crashes, etc., but that is not the core focus of the product.
3. Real-time user segmentation: Once you have collected app users' behavioral analytics data, we allow you to segment your audience based on various parameters and collected data, to enable you to run targeted push campaigns.
4. Software-as-a-service (SaaS): We have a portal, separate from the Azure management portal, which is optimized for interacting with and viewing rich behavioral analytics about the app users and for running marketing push campaigns. The product is geared to get you going in no time!
Mandatory
• Start with a well-designed engagement plan to help you identify the granular data you will need to be able to segment your user base.
Recommended
• Use an iterative, step-by-step approach to defining your engagement plan.
Optional
• Use the BET (Business, Engagement, Technical) model to define your key performance indicators.
Use Azure's API Management to centralize the frameworks for your company's deployed web services. The developer is then tasked with concentrating on business logic rather than "infrastructure"-type code. Centralizing your frameworks in Azure's API Management service allows the underlying web services to be deployed on different servers and even different technologies, ultimately consolidating services from multiple back ends into a single entry point for service consumers.
Mandatory
• Configure policies for services and profiles for existing web services to use API Management.
Recommended
• Protect web services with API rate limits and quota policies.
Optional
• Customize the developer portal to allow for developer registration and subscription models.
Azure App Service is a fully managed PaaS (Platform as a Service) for developers that makes it easier to build web,
mobile, and integration apps. Logic Apps are a part of this suite and allow any technical user or developer to
automate business process execution and workflow using an easy-to-use visual designer.
Best of all, Logic Apps can be combined with built-in Managed APIs to help solve tricky integration scenarios with
ease.
• Easy-to-use design tools - Logic Apps can be designed end-to-end in the browser. Start with a trigger - from a simple schedule to whenever a tweet appears about your company. Then orchestrate any number of actions using the rich gallery of connectors.
• Compose SaaS easily - Even composition tasks that are easy to describe are difficult to implement in code. Logic Apps make it a cinch to connect disparate systems. Want to create a task in your CRM software based on the activity from your Facebook or Twitter accounts? Want to connect your cloud marketing solution to your on-premises billing system? Logic Apps are the fastest, most reliable way to deliver solutions to these problems.
• Get started quickly from templates - To help you get started, we've provided a gallery of templates that allow you to rapidly create some common solutions. From advanced BizTalk solutions to simple SaaS connectivity, and even a few that are just 'for fun', the gallery is the fastest way to understand the power of Logic Apps.
• Extensibility baked in - Don't see the API you need? Logic Apps is designed to work with API apps; you can easily create your own API app to use as a custom API. Build a new app just for you, or share and monetize it in the marketplace.
• Real integration horsepower - Start easy and grow as you need. Logic Apps can easily leverage the power of BizTalk, Microsoft's industry-leading integration solution, to enable integration professionals to build the solutions they need. Find out more about the BizTalk capabilities provided with Logic Apps.
Mandatory
• Create conditional logic (a trigger) and an associated action (e.g. email me).
Recommended
• Add conditional logic to your logic app (e.g. email me after I'm followed 10 times on Twitter).
Optional
• Use the code view to edit your logic app.
At a high level, all platform notification systems follow the same pattern:
1. The client app contacts the PNS to retrieve its handle. The handle type depends on the system. For WNS, it is a URI or "notification channel." For APNS, it is a token.
2. The client app stores this handle in the app back-end for later usage. For WNS, the back-end is typically a cloud service. For Apple, the system is called a provider.
3. To send a push notification, the app back-end contacts the PNS using the handle to target a specific client app instance.
4. The PNS forwards the notification to the device specified by the handle.
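Azure Notification Hubs implements this pattern on behalf of the back end. As a minimal sketch, the following sends a WNS toast through a hub using the Microsoft.Azure.NotificationHubs client; the connection string, hub name, payload, and tag are placeholder assumptions.

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.NotificationHubs;   // NuGet: Microsoft.Azure.NotificationHubs

class PushSender
{
    static void Main()
    {
        SendAsync().GetAwaiter().GetResult();
    }

    static async Task SendAsync()
    {
        // Hypothetical connection string (DefaultFullSharedAccessSignature) and hub name.
        var hub = NotificationHubClient.CreateClientFromConnectionString(
            "<connection string>", "myhub");

        // WNS toast payload; the hub looks up the registered handles (steps 1-2)
        // and contacts each PNS for us (steps 3-4).
        string toast =
            "<toast><visual><binding template=\"ToastText01\">" +
            "<text id=\"1\">Hello from the back end</text>" +
            "</binding></visual></toast>";

        // The tag expression limits delivery to matching registrations (e.g. one user).
        await hub.SendWindowsNativeNotificationAsync(toast, "user:12345");
    }
}
```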
Mandatory
• Separate production and development environments into different hubs. The free tier lends itself to development purposes. If you require the ability to export telemetry/registration information or have multi-tenancy, use the standard tier.
Recommended
• For sending sensitive information, follow a secure push pattern, where a notification with a message identifier but without sensitive data is sent to the device, and the device then retrieves the sensitive data.
Optional
• Use tags to target specific registrations (i.e. devices, users, etc.).
Azure SQL Database
Fully managed SQL database service:
• Built for SaaS and enterprise applications
• Predictable performance and pricing
• Elastic database pool for unpredictable SaaS workloads
• 99.99% availability built in
• Geo-replication and restore services for data protection
• Secure and compliant for your sensitive data
• Fully compatible with SQL Server 2014 databases
Database Service Tiers
• Basic, Standard, and Premium tiers (B, S0-S3, P1-P3) provide increasing performance levels for reads, writes, compute, and memory
• Scale individual databases up/down via the portal, PowerShell, APIs, or T-SQL to reflect actual or anticipated demand
• Database remains online while scaling
• Hourly billing provides cost efficiency
Geo-Restore
- Geo-redundant backups
- RPO < 1 hour
- Recovery time: minutes to hours
Geo-Replication
- Asynchronous replication
- RPO < 5 seconds
- Recovery time: < 30 seconds
Point-in-Time Restore
- Continuous backup; restore to any point
- Recovery time: minutes to hours
Accidental Database Deletion
- Tail-end backup; restore to the point of deletion
- Recovery time: minutes to hours
Security
• Azure SQL has a public-facing IP accessible by anyone. Secure the connection to Azure SQL using the SQL Server firewall and the SQL Database firewall.
• When using ExpressRoute with public peering, the NAT address interface on the Azure end has to be specified in the Azure SQL firewall rules.
• Data can be encrypted by the application and stored in an Azure SQL database. However, TDE is not supported by Azure SQL; consider SQL on IaaS for TDE scenarios.
Features
• Built-in backup services afford the benefit of not having to maintain local backups.
• Advanced features such as disk-level or OS-level access do not work with Azure SQL.
• Azure SQL does not allow CLR integration, backup sets, or FILESTREAM tables.
Performance
• Azure SQL performance can appear slower, as each write is committed to three databases in the local datacenter (synchronously), with another write (asynchronous) if geo-replication is enabled.
• Consider adding a caching layer for high-TPS loads.
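As a minimal sketch of connecting securely over the public endpoint, the following uses ADO.NET with an encrypted connection. The server, database, and credentials are placeholders, and the connection only succeeds from an IP permitted by the firewall rules described above.

```csharp
using System;
using System.Data.SqlClient;

class AzureSqlProbe
{
    static void Main()
    {
        // Hypothetical server/database names. Encrypt=True protects traffic over
        // the public endpoint; TrustServerCertificate=False ensures the server
        // certificate is actually validated.
        var builder = new SqlConnectionStringBuilder
        {
            DataSource = "tcp:myserver.database.windows.net,1433",
            InitialCatalog = "mydb",
            UserID = "appuser",
            Password = "<password>",
            Encrypt = true,
            TrustServerCertificate = false,
            ConnectTimeout = 30
        };

        using (var conn = new SqlConnection(builder.ConnectionString))
        using (var cmd = new SqlCommand("SELECT @@VERSION;", conn))
        {
            conn.Open(); // fails unless the client IP is allowed by the firewall rules
            Console.WriteLine(cmd.ExecuteScalar());
        }
    }
}
```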
FILESTREAM
• FILESTREAM is not available with Azure SQL databases.
• Store the file object in Azure Blob storage and store the indexes in SQL.
SQL Backups
• Traditional backup sets are not supported in Azure SQL databases.
• Use geo-replication and point-in-time backups for Azure SQL workloads.
Mandatory
• Understand the performance and management differences between a traditional SQL database and Azure SQL Database. Decide which option suits your needs best.
Recommended
• Analyze databases to be migrated to Azure SQL Database for incompatibilities that might be present (e.g. FILESTREAM).
Optional
• Leverage built-in tooling for BACPAC and DACPAC to move databases to Azure SQL databases.
DocumentDB is a true schema-free NoSQL document
database service designed for modern mobile and web
applications. DocumentDB delivers consistently fast
reads and writes, schema flexibility, and the ability to
easily scale a database up and down on demand. It
does not assume or require any schema for the JSON
documents it indexes. By default, it automatically
indexes all the documents in the database and does not
expect or require any schema or creation of secondary
indices. DocumentDB enables complex ad hoc queries
using a SQL language, supports well defined
consistency levels, and offers JavaScript language
integrated, multi-document transaction processing
using the familiar programming model of stored
procedures, triggers, and UDFs.
As a JSON database, DocumentDB natively supports
JSON documents enabling easy iteration of application
schema. It embraces the ubiquity of JSON and
JavaScript, eliminating mismatch between application
defined objects and database schema. Deep integration
of JavaScript also allows developers to execute
application logic efficiently and directly - within the
database engine in a database transaction.
All resources within DocumentDB are modeled and stored as JSON documents. Resources are managed as items, which are JSON documents containing metadata, and as feeds, which are collections of items. Sets of items are contained within their respective feeds.
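A minimal sketch of working with these resources from .NET, assuming the Microsoft.Azure.DocumentDB client SDK and placeholder account, database, and collection names:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Documents.Client;   // NuGet: Microsoft.Azure.DocumentDB

class DocDbSample
{
    static async Task Main()
    {
        // Hypothetical endpoint and auth key.
        var client = new DocumentClient(
            new Uri("https://myaccount.documents.azure.com:443/"), "<authKey>");

        var collectionUri = UriFactory.CreateDocumentCollectionUri("storedb", "orders");

        // No schema required: any JSON-serializable object can be stored,
        // and it is automatically indexed.
        await client.CreateDocumentAsync(collectionUri,
            new { id = "order-1001", customer = "Contoso", total = 42.50 });

        // Ad hoc SQL query over the JSON documents.
        var query = client.CreateDocumentQuery<dynamic>(collectionUri,
            "SELECT * FROM orders o WHERE o.customer = 'Contoso'");

        foreach (var doc in query)
            Console.WriteLine(doc);
    }
}
```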
Mandatory
• Resources within DocumentDB are stored as JSON documents, typical of a document-based NoSQL database. Make sure this type of NoSQL database best fits your application's data requirements.
Recommended
• Understand the four different consistency levels of DocumentDB and how each would impact your application's consistency, availability, and performance.
Optional
• Use data partitioning to scale out DocumentDB.
• Low latency, high throughput key-value store
• Atomic operations on data types
• Transactions
• Publisher-subscriber pattern
• Lua scripting
• Pipelining
• Client libraries in multiple languages
• Highly customizable replication support
• Persistence support
• Clustering
• From an application perspective, Redis Cache can be accessed via its clients, which are available for most of the popular platforms today (Java, Node, .NET, etc.).
• Redis goes beyond a simple key/value store to a cache that can contain entire data structures such as collections, sets, etc. Use Redis to support non-blocking first synchronization and auto-reconnection, and to support replication of the cache to increase uptime.
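A minimal sketch using the StackExchange.Redis client against an Azure Redis Cache; the cache name, access key, keys, and values are placeholders:

```csharp
using System;
using StackExchange.Redis;   // NuGet: StackExchange.Redis

class RedisSample
{
    static void Main()
    {
        // Hypothetical cache endpoint; ssl=true uses the secure port.
        var mux = ConnectionMultiplexer.Connect(
            "mycache.redis.cache.windows.net,ssl=true,password=<accessKey>");

        IDatabase db = mux.GetDatabase();

        // Simple key/value usage with an expiry...
        db.StringSet("greeting", "hello", expiry: TimeSpan.FromMinutes(5));
        Console.WriteLine(db.StringGet("greeting"));

        // ...and one of the richer data structures: a sorted set as a leaderboard.
        db.SortedSetAdd("scores", "alice", 420);
        db.SortedSetAdd("scores", "bob", 17);
        foreach (var entry in db.SortedSetRangeByRankWithScores("scores", order: Order.Descending))
            Console.WriteLine($"{entry.Element}: {entry.Score}");
    }
}
```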
Mandatory
• Understand which tier of service will be required and implement the Redis client in the application.
Recommended
• Use the advanced structure caching options with Redis to simplify the application caching code.
Optional
• Set up policies for cache eviction, lifetime, etc.
Azure Search
• While Azure Search does not offer a crawler to index the application data sources, it does provide the infrastructure to ingest the index and interfaces for the actual search functions.
• The service is targeted at developers; it is not a service that is directly customer facing.
• At a high level, modeling the Azure Search service comes down to defining the index and the attributes of each field: Name, Type, Searchable, Suggestions, Sortable, Retrievable, Filterable, and Facetable.
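As a hedged sketch, an index with such field attributes can be created through the Azure Search REST API; the service name, admin key, field schema, and api-version below are illustrative assumptions.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class SearchIndexSetup
{
    static async Task Main()
    {
        var http = new HttpClient();
        http.DefaultRequestHeaders.Add("api-key", "<adminKey>"); // hypothetical key

        // Hypothetical index schema using the attributes listed above.
        string indexJson = @"{
          ""name"": ""products"",
          ""fields"": [
            { ""name"": ""id"",    ""type"": ""Edm.String"", ""key"": true },
            { ""name"": ""title"", ""type"": ""Edm.String"", ""searchable"": true, ""retrievable"": true },
            { ""name"": ""price"", ""type"": ""Edm.Double"", ""sortable"": true, ""filterable"": true, ""facetable"": true }
          ]
        }";

        var response = await http.PostAsync(
            "https://mysearch.search.windows.net/indexes?api-version=2015-02-28",
            new StringContent(indexJson, Encoding.UTF8, "application/json"));

        Console.WriteLine(response.StatusCode); // expect 201 Created
    }
}
```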
Mandatory
• Construct and update indexes for Azure Search consumption via backend services. PaaS-based worker roles work well for these types of jobs.
Recommended
• Add additional attributes to the index to support advanced features such as auto-suggestions.
Optional
• Build monitoring data integration into existing monitors to ensure storage or indexes don't exceed the limits for the service.
Microsoft's cloud Hadoop offering:
• 100% open source Apache Hadoop
• Built on the latest releases across Hadoop (2.6)
• Up and running in minutes with no hardware to deploy
• Harness existing .NET and Java skills
• Utilize familiar BI tools for analysis, including Microsoft Excel
HDInsight provides an easy-to-use graphical query interface for Hive.
• HiveQL is a SQL-like language (a subset of SQL)
• Hive structures include well-understood database concepts such as tables, rows, columns, and partitions
• Queries are compiled into MapReduce jobs that are executed on Hadoop (see the sample below)
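For illustration, a small HiveQL statement of the kind described above, held in a C# constant as it might be handed to HDInsight's job-submission APIs; the table, columns, and storage location are hypothetical.

```csharp
// The C# wrapper is only illustrative packaging; the HiveQL itself shows
// tables, columns, and partitions that compile down to MapReduce jobs.
class HiveSamples
{
    // Hypothetical weblog table partitioned by date.
    public const string TopPagesQuery = @"
        CREATE EXTERNAL TABLE IF NOT EXISTS weblogs (
            ip STRING, url STRING, status INT)
        PARTITIONED BY (log_date STRING)
        ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
        STORED AS TEXTFILE LOCATION '/data/weblogs';

        SELECT url, COUNT(*) AS hits
        FROM weblogs
        WHERE log_date = '2015-06-01'
        GROUP BY url
        ORDER BY hits DESC
        LIMIT 10;";
}
```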
Stinger is a Microsoft, Hortonworks, and OSS-driven initiative to bring interactive queries to Hive:
• Brings query execution engine technology from Microsoft SQL Server to Hive
• Performance gains of up to 100x
• A Microsoft contribution to the Apache code base
[Chart: sample query speedups of 32x, 40x, and 100x across successive releases - Hive 10 on Hadoop 2.0, HDP 1.3 / Hive 11, HDP 2.0, and HDP 2.1]
HBase: columnar, NoSQL database
• Runs on top of Azure Blob storage in HDInsight
• Provides flexibility in that new columns can be added to column families at any time
[Diagram: HBase cluster layout - an HMaster handling coordination alongside the Name Node and Job Tracker, with Region Servers co-located with Data Nodes and Task Trackers]
• Consumes millions of real-time events from a scalable event broker (e.g. Apache Kafka, Azure Event Hubs)
• Performs time-sensitive computation
• Outputs to persistent stores, dashboards, or devices
• Customizable with Java and .NET
• Deeply integrated with Visual Studio
[Diagram: event sources such as RabbitMQ/ActiveMQ feed stream processing, whose output flows to web/thick-client dashboards, search and query, data analytics (Excel), and devices that take action]
Managed & supported by Microsoft
Re-use common tools, documentation, samples from Hadoop/Linux ecosystem
Add Hadoop projects that were authored on Linux to HDInsight
Easier transition from on-premises to cloud
• Debug Hive jobs through YARN logs or troubleshoot Storm topologies
• Visualize Hadoop clusters, tables, and storage
• Submit Hive queries and Storm topologies (C# or Java spouts/bolts)
• IntelliSense for authoring Hive jobs and Storm business logic
Mandatory
• Create storage accounts for data repositories for HDInsight. Also, deprovision your clusters when not in use, as the costs of running thousands of cores can quickly add up.
Recommended
• Get a good sense of your data and what is most important to your workloads. This will help guide which components in HDInsight can be leveraged to take best advantage of the platform services.
Optional
• Spend time checking out what has already been built by the open source communities and can be used with much less effort than writing from scratch.
Azure Data Lake Store is an enterprise-wide, hyper-scale repository for big data analytic workloads. Azure Data Lake enables you to capture data of any size, type, and ingestion speed in one single place for operational and exploratory analytics.
Mandatory
• Use appropriate mechanisms to secure access to Azure Data Lake Store: Azure AD authentication, the ARM RBAC feature, and/or POSIX-style permissions exposed by the WebHDFS protocol.
Recommended
• When using the Import/Export service, the file sizes on the disks that you ship to the Azure data center should not be greater than 200 GB.
Optional
• Integrate Azure Data Lake Store with other services - e.g. provision an Azure HDInsight cluster that uses Data Lake Store as the HDFS-compliant storage, use Azure Data Lake Store as an Azure Data Factory data source, or access it from OSS applications such as Apache Sqoop, Apache Storm, or Apache Hive.
Azure Data Lake Analytics is a new service, built to make big data analytics easy. This service lets you focus on writing, running, and managing jobs, rather than operating distributed infrastructure. Instead of deploying, configuring, and tuning hardware, you write queries to transform your data and extract valuable insights. The analytics service can handle jobs of any scale instantly: simply set the dial for how much power you need. You only pay for your job while it is running, making it cost-effective.
Mandatory
• None
Recommended
• Use Azure Data Lake Analytics as a preferred option to derive insights from massive amounts of data.
Optional
• Integrate Azure Data Lake Analytics with other services - use Azure Blob storage, Azure SQL Database, and Azure Data Lake Store as data sources for Azure Data Lake Analytics queries.
• An experiment consists of dragging components to a canvas, and connecting them in
order to create a model, train the model, and score and test the model.
• The experiment uses predictive modeling techniques in the form of Machine Learning
Studio modules that ingest data, train a model against it, and apply the model to new
data.
• You can also add modules to preprocess data and select features, split data into
training and test sets, and evaluate or cross-validate the quality of your model.
Mandatory
• Bring your R and Python code libraries and understand how to leverage ML Studio to provide a streamlined development experience.
Recommended
• Partition your logic to create consumable services by using the platform services of ML.
Optional
• Explore what the data science community has already created and look to extend or enhance these solutions instead of creating them from scratch, to reduce development time.
Azure Data Factory is a cloud-based data integration service that orchestrates and automates the movement and
transformation of data. It orchestrates existing services that collect raw data and transform it into ready-to-use
information. ADF is used to collect data from many different on-premises data sources, ingest and prepare it,
organize and analyze it with a range of transformations, then publish ready-to-use data for consumption. Beyond
orchestrating the flow of a data pipeline, ADF offers a single unified view to easily pinpoint issues and setup
monitoring alerts.
Mandatory
• Configure a dataset, activity, linked service, and pipeline.
Recommended
• Set up monitoring alerts to be notified when extraneous events occur.
Optional
• Use a data management gateway (if you're using ADF to access data on premises).
Stream Analytics is a real-time stream processing service used to uncover insights from devices, sensors, infrastructure, and applications. Ingesting millions of events per second, Stream Analytics can be used to perform real-time analytics for IoT solutions. Developers describe their desired event stream processing and transformations using a SQL-like query language, and the system abstracts away the complexities of parallelization, distributed computing, and error handling.
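For a feel of that query language, here is a small SQL-like Stream Analytics query held in a C# constant for illustration; in practice the query is defined on the job itself, and the input/output aliases and payload fields are hypothetical.

```csharp
// Computes a 30-second average temperature per device over a streaming input.
class StreamAnalyticsSamples
{
    public const string AverageTemperaturePerDevice = @"
        SELECT deviceId,
               AVG(temperature) AS avgTemperature,
               System.Timestamp AS windowEnd
        INTO [blob-out]
        FROM [eventhub-in] TIMESTAMP BY eventTime
        GROUP BY deviceId, TumblingWindow(second, 30)";
}
```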
Mandatory
• Configure a data stream input and an output.
Recommended
• Create a SQL-like query to process data on the fly. Configure the data ingestion service (e.g. Event Hub) and Stream Analytics in multiple environments to prevent any downtime of the application.
Optional
• Use a reference data input as a lookup. Use Stream Analytics to feed data directly into Power BI to create a dashboard view of your data.
Azure IoT Hub is a fully managed service that enables reliable and secure bidirectional communications between millions of IoT devices and a solution back end. Azure IoT Hub:
• Provides reliable device-to-cloud and cloud-to-device messaging at scale.
• Enables secure communications using per-device security credentials and access control.
• Provides extensive monitoring for device connectivity and device identity management events.
• Includes device libraries for the most popular languages and platforms.
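A minimal device-side sketch using the Microsoft.Azure.Devices.Client library; the hub name, device ID, key, and payload are placeholder assumptions.

```csharp
using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;   // NuGet: Microsoft.Azure.Devices.Client

class TelemetrySender
{
    static async Task Main()
    {
        // Hypothetical per-device connection string (one credential per device
        // is what enables per-device access control and revocation).
        var device = DeviceClient.CreateFromConnectionString(
            "HostName=myhub.azure-devices.net;DeviceId=sensor-01;SharedAccessKey=<key>",
            TransportType.Amqp);

        // Device-to-cloud message.
        var payload = Encoding.UTF8.GetBytes("{\"temperature\": 21.5}");
        await device.SendEventAsync(new Message(payload));

        // Cloud-to-device message (if one is pending); complete it so it is
        // removed from the device's queue.
        Message command = await device.ReceiveAsync(TimeSpan.FromSeconds(5));
        if (command != null)
        {
            Console.WriteLine(Encoding.UTF8.GetString(command.GetBytes()));
            await device.CompleteAsync(command);
        }
    }
}
```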
Mandatory
• Anticipate the number of messages you expect to receive per day. The current message limit is 6 million messages/unit/day, and the maximum unit count for an S2 IoT Hub is 200. If more units are required, you must contact Microsoft Support.
Recommended
• If protocol translation is necessary, utilize the Azure Protocol Gateway as a starting point for your solution. The currently supported protocols are AMQP and MQTT 3.1.1.
Optional
• Understand the differences between IoT Hub and Event Hubs to identify which service works best for your solution.
Azure Event Hubs is an event processing service that provides event and telemetry ingress to the cloud at massive scale, with low latency and high reliability. This service, used with other downstream services, is particularly useful in application instrumentation, user experience or workflow processing, and Internet of Things (IoT) scenarios. Event Hubs provides a message stream handling capability and, though an Event Hub is an entity similar to queues and topics, it has characteristics that are very different from traditional enterprise messaging.
Enterprise messaging scenarios commonly require a number of sophisticated capabilities such as sequencing, dead-lettering, transaction support, and strong delivery assurances, while the dominant concern for event intake is high throughput and processing flexibility for event streams. Therefore, Azure Event Hubs capabilities differ from Service Bus topics in that they are strongly biased towards high-throughput and event processing scenarios.
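A minimal send-side sketch using the WindowsAzure.ServiceBus client library; the namespace, key, hub path, payload, and partition key are placeholders.

```csharp
using System.Text;
using Microsoft.ServiceBus.Messaging;   // NuGet: WindowsAzure.ServiceBus

class EventPublisher
{
    static void Main()
    {
        // Hypothetical namespace connection string and hub path.
        var client = EventHubClient.CreateFromConnectionString(
            "Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=send;SharedAccessKey=<key>",
            "telemetry");

        var data = new EventData(Encoding.UTF8.GetBytes("{\"deviceId\":\"sensor-01\",\"reading\":42}"))
        {
            // Events sharing a partition key land in the same partition,
            // preserving per-device ordering.
            PartitionKey = "sensor-01"
        };

        client.Send(data);
    }
}
```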
Mandatory
• Event Hubs Standard tier currently supports a maximum retention period of 7 days. Note that Event Hubs is not intended as a permanent data store.
Recommended
• Do not grant devices tokens with direct access, as doing so prevents blacklisting and throttling.
Optional
• Consider long-term scale needs prior to creating the Event Hub, as the partition count cannot be changed. The number of partitions must be between 2 and 32; you can increase the 32-partition limit by contacting support.
Microsoft Azure Media Services is an extensible cloud-based platform that enables developers to build scalable
media management and delivery applications. Media Services is based on REST APIs that enable you to securely
upload, store, encode and package video or audio content for both on-demand and live streaming delivery to
various clients (for example, TV, PC, and mobile devices).
You can build end-to-end workflows entirely using Media Services, or choose to use third-party components for some parts of your workflow - for example, encode using a third-party encoder, then upload, protect, package, and deliver using Media Services.
You can choose to stream your content live or deliver content on demand.
Mandatory
• After you regenerate a storage key, you must make sure to synchronize the update with Media Services.
Recommended
• If you are just looking to store JPEG or PNG images, keep those in Azure Blob storage. There is no benefit to putting them in your Media Services account unless you want to keep them associated with your video or audio assets.
Optional
• Currently, the maximum recommended duration of a live event is 8 hours.
Speed
• In almost every case, content delivered via a CDN will be much closer to the end user, which results in faster delivery and a better user experience.
Reliability
• The CDN spreads out content, replicating it across the globe and reducing demand on any given server.
Efficiency
• The CDN optimizes for delivery, allowing the customer to focus on the quality of their content, not on the delivery of that content.
[Diagram: the first request for an item is fetched from the origin and cached at a CDN edge node; subsequent requests are served directly from the edge cache]
• Typically, CDNs are used for static content such as media, images, or documents, which are read far more often than they are written.
Mandatory
• Determine the regions to target for the CDN and update the application to use the root URI of the CDN network as opposed to local content.
Recommended
• Leverage parameters to vary the caching characteristics and lifetime of cached content.
Optional
• Map the CDN content to a custom domain name.
Hybrid Connections are a feature of Azure BizTalk Services. Hybrid
Connections provide an easy and convenient way to connect the
Web Apps feature in Azure App Service (formerly Websites) and
the Mobile Apps feature in Azure App Service (formerly Mobile
Services) to on-premises resources behind your firewall.
Hybrid Connections support the following framework and
application combinations:
• .NET framework access to SQL Server
• .NET framework access to HTTP/HTTPS services w/ WebClient
• PHP access to SQL Server, MySQL
• Java access to SQL Server, MySQL and Oracle
• Java access to HTTP/HTTPS services
Mandatory
• TCP-based services that use dynamic ports (such as FTP Passive Mode or Extended Passive Mode) are currently not supported.
Recommended
• You can scale Hybrid Connections by installing another instance of the Hybrid Connection Manager on another server. Configure the on-premises listener to use the same address as the first on-premises listener. In this situation, traffic is randomly distributed (round robin) between the active on-premises listeners.
Optional
• Use Group Policy to control the on-premises resources used by a Hybrid Connection.
Azure Service Bus
• Service Bus queues provide an option to decouple processing from the request pipeline.
• Consider these types of architectures especially when migrating workloads to the cloud, as loosely coupled applications can scale better and are more fault resilient than their tightly coupled counterparts.
• Use Service Bus with a variety of models, from simple queue-based storage to topics, which target and partition messages in a namespace. Consider using Event Hubs on top of Service Bus to service very large client bases, where inputs to the bus arrive in the thousands to millions in rapid succession.
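A minimal sketch of the decoupling described above, using the WindowsAzure.ServiceBus queue client; the connection string, queue name, and message body are placeholder assumptions.

```csharp
using Microsoft.ServiceBus.Messaging;   // NuGet: WindowsAzure.ServiceBus

class OrderQueue
{
    // Hypothetical connection string and queue name.
    const string ConnectionString =
        "Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=app;SharedAccessKey=<key>";

    static void Main()
    {
        var client = QueueClient.CreateFromConnectionString(ConnectionString, "orders");

        // The web front end only enqueues and returns immediately...
        client.Send(new BrokeredMessage("order-1001"));

        // ...while a worker processes messages on its own schedule,
        // fully decoupled from the request pipeline.
        var options = new OnMessageOptions { AutoComplete = false };
        client.OnMessage(message =>
        {
            System.Console.WriteLine(message.GetBody<string>());
            message.Complete(); // remove from the queue once processed
        }, options);

        System.Console.ReadLine(); // keep the message pump alive
    }
}
```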
Related patterns: the Priority Queue pattern and the Scheduler Agent Supervisor pattern.
Mandatory
• Determine which model to use when storing messages in Service Bus, based on transactions, lifetimes, and message rates.
Recommended
• Modify applications to provide transient fault handling to complement the decoupling of message posting from message processing.
Optional
• Leverage Event Hubs in addition to Service Bus to handle large-scale intake of Service Bus messages.
Considerations / Decision Points
Upgrade Domains
- Configure PaaS web and worker role services to use multiple upgrade domains to avoid unnecessary outages when new deployment upgrades to the application or services are initiated.
Deployment Slots
- Use deployment slots to "test" new versions or upgrades without affecting the production application.
- Choose to stage slots before releasing to production, which can enable better testing to avoid downtime.
Web Deploy
- Consider that larger apps require more governance and control around the deployment.
Continuous Integration
- Choose continuous integration for larger applications and organizations that require the automation of deployments. This allows for both gated (approval-based) and continuous (triggered) deployments.
Considerations / Decision Points
DNS Level
- Choose this option when needing to load balance traffic to different cloud services located in different datacenters, to different Azure websites located in different datacenters, or to external endpoints.
- Choose Traffic Manager and the Round Robin load balancing method.
Network Level
- Choose this option when needing to load balance incoming internet traffic to different VMs of a cloud service.
- Implement this solution with Azure Load Balancer.
- Choose between the different options of internal and external load balancers.
Considerations / Decision Points
Name Resolution
- Choose to deploy a DNS solution for VMs and cloud services in a VNet.
- Decide between the Azure-provided name resolution or your own DNS solution, depending on the name resolution requirements.
Enhanced Security and Isolation
- Use the VNet as an added layer of isolation for services, such as VMs and cloud services, deployed within the VNet.
Extended Connectivity Boundary
- Use a VNet to extend the connectivity boundary from a single service to the VNet boundary.
- Consider setting up services that use a common backend database tier, or that share a management service, in a VNet.
Extend Your On-Premises Network to the Cloud
- Join VMs in Azure to the on-premises domain in order to access and leverage all on-premises investments around monitoring and identity for the services hosted in Azure.
Use Persistent Public IP Addresses
- Consider configuring your cloud services with a reserved public IP address from the address range when you create them.
Considerations / Decision Points
Cloud-Only
- In this model, no VNet gateways are deployed, and connections to the VMs and cloud services are made through endpoints rather than through a VPN connection.
Cross-Premises
- In this model, you can create multi-site configurations, VNet-to-VNet configurations, and ExpressRoute connections, allowing you to leverage on-premises connectivity and resources.
Considerations / Decision Points
Auto-Scaling
- Consider utilizing monitoring and automation capabilities, such as the Azure Monitoring Agent and Azure Automation, to dynamically scale and deploy application code to cloud service instances.
Load Balancing
- Consider creating an internal load balancer for the provisioned cloud services and associating it with the cloud service endpoint.
Density
- Leverage multiple subscriptions to provide the proper level of segmentation and avoid hitting limits (e.g. total cloud services per subscription is 20).
Considerations / Decision Points
IIS Logs
- Consider gathering information about IIS web sites
Azure Diagnostic Infrastructure Logs
- Consider gathering information about diagnostics itself
IIS Failed Request Logs
- Consider gathering information about failed requests to an IIS site or application
Windows Event Logs
- Consider gathering information sent to the Windows event logging system
Performance Counters
- Consider gathering OS and custom performance counters
Crash Dumps
- Consider gathering information about the state of the process in the event of an application crash
Custom Error Logs
- Consider gathering logs created by your application or service
.NET EventSource
- Consider gathering events generated by your code using the .NET EventSource class (a sketch follows this table)
Manifest-Based ETW
- Consider gathering ETW events generated by any process
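As a hedged sketch of the .NET EventSource option above: a custom event source whose events can be collected by Azure Diagnostics or any ETW listener. The provider name and event shapes are illustrative.

```csharp
using System.Diagnostics.Tracing;

// Events written through this class flow into ETW and can be captured by
// Azure Diagnostics alongside the other log sources listed above.
[EventSource(Name = "MyCompany-MyApp")]   // hypothetical provider name
sealed class AppEventSource : EventSource
{
    public static readonly AppEventSource Log = new AppEventSource();

    [Event(1, Level = EventLevel.Informational)]
    public void RequestStarted(string url) { WriteEvent(1, url); }

    [Event(2, Level = EventLevel.Error)]
    public void RequestFailed(string url, string error) { WriteEvent(2, url, error); }
}

class Program
{
    static void Main()
    {
        AppEventSource.Log.RequestStarted("/api/orders");
        AppEventSource.Log.RequestFailed("/api/orders", "timeout");
    }
}
```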