
Remember

  • If you only need to download a blob, the "Storage Blob Data Reader" role is sufficient.
  • AAD manifest file: the property allowPublicClient must be true in case of an SPA.
  • AAD manifest file: the property oauth2AllowImplicitFlow must be true in case of an SPA.
  • Currently, Linux is the only OS that supports container instances deployed into a virtual network.
  • The copy element is used in ARM templates to create multiple instances of a resource.
"copy": {
    "name": "<name of the loop>",
    "count": <number of iterations>
}

  • The copyIndex() function refers to the current iteration (starting from 0) when creating multiple instances in an ARM template, as in the sketch below.
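A minimal sketch of copy and copyIndex() working together (the resource type and names here are illustrative):

"resources": [
  {
    "type": "Microsoft.Storage/storageAccounts",
    "apiVersion": "2021-04-01",
    "name": "[concat('demostore', copyIndex())]",
    "location": "[resourceGroup().location]",
    "sku": { "name": "Standard_LRS" },
    "kind": "StorageV2",
    "copy": {
      "name": "storagecopy",
      "count": 3
    }
  }
]

This would deploy demostore0, demostore1 and demostore2.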
  • Use the docker tag command to create an alias of the image with the fully qualified path to your Azure container registry:
"docker tag nginx myregistry.azurecr.io/samples/nginx"

  • Azure Functions filter for getting triggered whenever a file with the .png extension lands in the data container of blob storage (a C# equivalent is sketched after the function.json fragment):
function.json 
"path": "data/{name}.png"

  • Azure CLI statements to fetch the required connection string for Azure Event Hubs:
az eventhubs namespace authorization-rule keys list --resource-group dummyresourcegroup --namespace-name dummynamespace --name RootManageSharedAccessKey

or

az eventhubs eventhub authorization-rule keys list --resource-group dummyresourcegroup --namespace-name dummynamespace --eventhub-name dummyeventhub --name RootManageSharedAccessKey

Azure Storage Table

Table (this is not Azure SQL Database)

  • It's a structured NoSQL database.
  • Data is stored as key/value pairs.
  • Costs less than a regular SQL database for the same amount of data.
  • Not useful if you have complex joins or stored-procedure type requirements.
  • NuGet package required to work with Azure tables: Microsoft.Azure.Cosmos.Table. A minimal sketch follows.
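A rough sketch using that package (the table name and entity values are placeholders):

using Microsoft.Azure.Cosmos.Table;

string connectionString = "<storage-connection-string>";

// Connect through the table endpoint of the storage account
CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
CloudTableClient tableClient = account.CreateCloudTableClient();
CloudTable table = tableClient.GetTableReference("Customers");
table.CreateIfNotExists();

// Rows are key/value entities addressed by PartitionKey + RowKey
var entity = new DynamicTableEntity("USA", "customer-001");
entity.Properties["Name"] = EntityProperty.GeneratePropertyForString("John");
table.Execute(TableOperation.InsertOrReplace(entity));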

Azure Storage

  • Different types of storage account services:
    • Blob - disk files of VMs, video files, audio files.
    • Table - see the Azure Storage Table section above (this is not Azure SQL Database).
    • Queue
    • File
  • Types of storage account 
    • General purpose v2 accounts - recommended by Azure. All services (blob, table, file, queue) are available, along with the hot, cool and archive tiers.
    • General purpose v1 accounts - gives all services (blob, table, file, queue), but not the hot, cool and archive tiers.
    • BlockBlobStorage accounts - premium-performance accounts for block blobs and append blobs.
    • FileStorage accounts - premium accounts for when only file storage is required.
    • Blob storage accounts - legacy accounts.
  • Table service is available for both Storage account and Cosmos DB
  • SAS (Shared Access Signature) - we generate a SAS when we want to share a particular blob (not the whole container). Sharing the access key or connection string gives full access to the storage account, while sharing via IAM (role-based access, RBAC) gives access at the storage-account or container level; a SAS is more fine-grained. A sketch follows after this list.
    • We can generate SAS at account level as well. 
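A hedged sketch of generating a read-only SAS for a single blob with the v12 SDK (account, container and blob names are placeholders):

using System;
using Azure.Storage;
using Azure.Storage.Sas;

string accountName = "<account-name>";
string accountKey = "<account-key>";

var sasBuilder = new BlobSasBuilder
{
    BlobContainerName = "data",
    BlobName = "report.pdf",
    Resource = "b",                                 // "b" = blob, "c" = container
    ExpiresOn = DateTimeOffset.UtcNow.AddHours(1)
};
sasBuilder.SetPermissions(BlobSasPermissions.Read); // read-only

string sasToken = sasBuilder
    .ToSasQueryParameters(new StorageSharedKeyCredential(accountName, accountKey))
    .ToString();
// Append sasToken (after a '?') to the blob URL to share it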
=========================================================================
Blob
  • Creation of a container is required first.
  • 3 types:
    • Block blobs - used for storing text and binary data.
    • Append blobs - for logging data.
    • Page blobs - virtual hard disks of VMs.
  • Replication techniques:
    • LRS - locally redundant storage: synchronous replication 3 times within a single data center.
    • ZRS - zone-redundant storage: synchronous replication 3 times across availability zones, so data stays available if a data center fails.
    • GRS - geo-redundant storage: data is copied to a secondary region; the secondary is available only when the primary is down.
    • RA-GRS - both primary and secondary are available, but the secondary is read-only.

Classes used for Blob storage - Recent version
Azure.Storage.Blobs v12.4.1 dll

1) BlobServiceClient class - This is used as a logical representation of the Azure Storage account and provides access to work with the Blob service.
You can use the constructor of the BlobServiceClient class to pass in the connection string to connect to the Azure Storage account.

You then have methods such as
a) CreateBlobContainer and CreateBlobContainerAsync- This can be used to create a container
b) GetBlobContainerClient - This returns a reference to an existing container

2) BlobContainerClient - This is used as a logical representation of a container in the Azure Storage account.
You then have methods such as
a) GetBlobClient - This is used to get a logical representation to a blob
b) GetBlobsAsync - This is used to return a list of blobs in a container. Each blob is returned as an object of the class BlobItem


3) BlobClient - This class is used to work with Azure Storage Blobs
You then have methods such as
a) Upload and UploadAsync - This can be used to upload a blob onto the container
b) Download and DownloadAsync - This can be used to download a blob from the container


4) BlobItem - This class is used to represent a blob in a container.
The BlobItem class has properties such as Metadata, Name, Properties and Snapshot. A combined usage sketch follows.
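Putting the four classes together, a rough sketch (the container and file names are illustrative):

using System;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

string connectionString = "<storage-connection-string>";

// BlobServiceClient: the storage account
var service = new BlobServiceClient(connectionString);

// BlobContainerClient: one container in that account
BlobContainerClient container = service.GetBlobContainerClient("samples");
await container.CreateIfNotExistsAsync();

// BlobClient: one blob; upload a local file
BlobClient blob = container.GetBlobClient("hello.txt");
await blob.UploadAsync("hello.txt", overwrite: true);

// BlobItem: each entry returned when listing the container
await foreach (BlobItem item in container.GetBlobsAsync())
    Console.WriteLine(item.Name);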

Key Vault

  • This service is used to manage secrets such as encryption keys, connection passwords and certificates.
  • A service principal is created in AAD, like a user, and is assigned rights to fetch details from Azure Key Vault.
  • Azure Key Vault can be used for disk encryption (e.g., a VM's data disk); a lock symbol is shown on the encrypted drive.
  • RBAC (role-based access control) vs. permissions in Azure Key Vault:
    • RBAC is evaluated first, so a user may have no access or only limited access to a Key Vault because of RBAC, regardless of the vault-level permissions.
  • For an application defined in Azure AD to access a storage account, you have to implement the permission of user_impersonation. Here the permission of the logged-on user would be used to access the storage account
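A minimal sketch of reading a secret with the Azure.Security.KeyVault.Secrets and Azure.Identity packages (the vault URL and secret name are placeholders):

using System;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

// DefaultAzureCredential resolves to the service principal / managed identity
var client = new SecretClient(
    new Uri("https://myvault.vault.azure.net/"),
    new DefaultAzureCredential());

KeyVaultSecret secret = await client.GetSecretAsync("SqlConnectionString");
Console.WriteLine(secret.Value);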

Azure Security

 Azure Active Directory
  • We work with users and groups in AAD.
  • Pricing tiers
    • Free
    • Office 365 apps
    • Premium P1
    • Premium P2
  • Role Based Access Control (RBAC)
  • IAM (Identity and Access Management) - here we can give permissions at the resource level and assign them to a particular group as well.
  • We can apply access at the resource group, resource and subscription level.
  • Built-in roles:
    • Owner lets you manage everything, including access to resources.
    • Contributor lets you manage everything except access to resources.
    • Reader lets you view everything but cannot make changes.
  • OAuth2.0
    • Standard protocol used for authorization in case we are using external identity providers to log in to our application.
    • It provides definitions of authorization workflows for different application types: web applications, mobile applications, Windows applications, etc.
    • Workflows: 1) OAuth 2.0 Authorization Code flow, 2) OAuth 2.0 Implicit flow.
  • OpenID Connect
    • Just for authentication
    • Lots of people use OAuth 2.0 just for authentication instead of authentication and authorization. OpenID Connect came into the picture to avoid this misuse of OAuth 2.0.
  • OAuth Different entities
    • Resource Owner – This is typically the end user.
    • Resource Server – This is the server that is hosting a protected resource.
    • Client – This is the application that is requesting the use of the protected resource on behalf of the user.
    • Authorization Server – This is the entity that authorizes the Resource Owner and issues access tokens.
========================================================================
The Authorization Code Flow

  • Here the application redirects the user to the Authorization Server.
  • The user completes the authorization steps presented by the Authorization Server.
  • The user is then redirected back to the application with an authorization code in the query string.
  • The application then exchanges the authorization code for the access token.
The different query string parameters
  • response_type=code – This tells the authorization server to initiate the authorization code flow.
  • client_id – This is the public identifier of the application.
  • redirect_uri – This tells the authorization server where to send the user back to after the request has been approved.
  • scope – This is one or more space-separated strings that indicate the permissions the application is requesting for.
  • state – This is a random string generated in the request. This should then be checked by the application after the user authorizes the application. This helps to prevent Cross Site Request Forgery attacks.
Next the application makes a POST request to the token endpoint to get the token
  • grant_type=authorization_code– This tells the token endpoint on the authorization server to use the Authorization Code grant type.
  • code – This is the authorization code that was received in the initial redirect.
  • client_id – This is the public identifier of the application.
  • client_secret – The application’s client secret
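Illustrative requests against the Microsoft identity platform endpoints (the tenant, ids and URIs are placeholders):

Step 1 - redirect the user to the authorize endpoint:
GET https://login.microsoftonline.com/<tenant>/oauth2/v2.0/authorize
    ?response_type=code
    &client_id=<application-id>
    &redirect_uri=https://myapp.example.com/callback
    &scope=openid profile
    &state=<random-string>

Step 2 - exchange the returned code at the token endpoint:
POST https://login.microsoftonline.com/<tenant>/oauth2/v2.0/token
    grant_type=authorization_code
    &code=<authorization-code>
    &client_id=<application-id>
    &client_secret=<client-secret>
    &redirect_uri=https://myapp.example.com/callback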

Azure Functions

 Azure Functions
  • Azure Functions are serverless. Serverless means you don't have to worry about hosting, scaling, monitoring, managing or maintaining, as all these things are done by Microsoft Azure.
  • The service allows you to run small pieces of code as functions.
  • We develop the code and deploy the functions.
  • We get billed only for the time the functions run.
  • Storage account: for Azure Functions you must have a storage account in place.
  • Ways Azure Functions get invoked:
    • Timer trigger
    • HTTP trigger - fires when an HTTP request comes in.
    • Blob trigger - fires when a new blob is created or an existing blob is updated.
    • Queue trigger - fires when there is a message in an Azure Storage queue.
    • Service Bus trigger - fires when there is a message in a Service Bus queue or topic.
  • Plans:
    • Consumption plan: pay only for the time the code runs.
    • App Service plan: reuse an existing plan for cost efficiency. The Always On functionality is available in the App Service plan.
    • Premium plan: more compute when required.
  • Azure Durable Functions
    • Trigger functions --> Orchestrator Functions --> Activity Function
  • Batch size can be changed from host.json
Azure Functions are best suited for smaller apps with events that can work independently of other websites.

Some common Azure Functions scenarios are sending emails, starting backups, order processing, task scheduling (such as database cleanup), image processing, running tasks on a schedule, sending notifications and messages, and IoT data processing.

Real-life example: we send two parameters, a location code and a total amount, and based on these the Azure function does a calculation behind the scenes and returns the percentage of the variable value we have to apply. It's a small calculator function. We use an HTTP trigger and pass the two parameters as query-string values. The function is written in C# and deployed on a Linux box. A rough sketch follows.
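A rough sketch of such a function; the names and the percentage rule here are illustrative, not the real business logic:

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class VariableCalculator
{
    [FunctionName("VariableCalculator")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req)
    {
        // Both inputs arrive as query-string parameters
        string locationCode = req.Query["locationCode"];
        decimal totalAmount = decimal.Parse(req.Query["totalAmount"]);

        // Hypothetical rule: the percentage depends on the location code
        decimal percentage = locationCode == "US" ? 7.5m : 5.0m;
        decimal variableValue = totalAmount * percentage / 100;

        return new OkObjectResult(new { percentage, variableValue });
    }
}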


----------------------------------------------------------------------------------------------------------------------------

Always On functionality of the App Service plan

To ensure the function is invoked immediately, make sure the Azure Function is part of an App Service plan and the Always On setting is turned on - for example, when you want a function to trigger as soon as there is a new entry in blob storage.

Azure Web APP and App Service

 Azure App Services or Azure Web App:
  • Deploying your web application on Azure App Service is PaaS.
  • An Azure web app is a type of Azure App Service.
  • To customize deployments, you can create a file named .deployment in the root of the repository. You can also use the run folder of WebJobs to generate content. For example, say you want to run a script that generates some static files used by the app; these files must exist before the app starts accepting traffic. In that case you mention this static-file creation step in the .deployment file, or you use the run folder of WebJobs.
  • Benefits we get from PAAS(Platform as a service)
    • No need to maintain underlying infrastructure like VM etc
    • It has features like auto scaling 
    • Security
    • Devops capabilities like continuous deployment. 
  • App Service plan: when we deploy our application on Azure App Service, we make use of another resource, the App Service plan. Tiers:
    • Free
    • Shared
    • Basic
    • Standard- Auto scale starts from Standard
    • Premium 
    • Isolated - Network isolation is available. 
    • App service linux


  • Connection strings: instead of getting the connection string from the web.config file in .NET projects, we can define connection strings in the web app as well.
  • Remote debugging: you can choose remote debugging and the Visual Studio version to debug with.
  • Authentication: we can enable authentication using multiple ready-made providers:
    • AAD (Azure Active directory)
    • Microsoft Account
    • Google
    • Facebook
    • Twitter
  • Backups: you can take a backup of your application as well, but only if the App Service plan is Standard or higher.
  • Diagnostics features: we can get all the application logging.
    • Web server logging: raw HTTP request data.
    • Application logging: log messages generated by the application, e.g.
      • System.Diagnostics.Trace.TraceInformation("this is logging");
    • Failed request tracing: detailed trace data on failed requests.
    • Deployment logging: logs generated during application deployments.
  • SSL: we can use SSL (Secure Sockets Layer) for more security. We have the concept of bindings here.
  • CORS: we can enable CORS for applications hosted on our web app. For example, say we have an API (ApplicationA) hosted on an Azure web app, and we try to fetch its data from another app (ApplicationB) hosted on a different domain. We will hit a CORS error, but if we add ApplicationB's domain in the CORS section of ApplicationA, the issue is solved.
  • Deployment slots: production and staging slots. Production and staging slots have different URLs and can thus run in parallel.

  • Viewing a web app's container logs with the Azure CLI:
az webapp log config --docker-container-logging filesystem
az webapp log tail

  • Steps to deploy a web app (a CLI sketch follows this list):
    • Create a resource group
    • Create a service plan
    • Create a web App
    • Deploy the application from 
      • Visual studio or 
      • Github or
      • set the docker image which will be deployed as a container. 
    • Path:
      • If GitHub, give the repository URL.
      • If Docker, give the Docker container path.
    • (If required) Create a deployment slot
    • (If required) Set the DNS name 
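Roughly the same steps in the Azure CLI (all names and the repository URL are placeholders):

az group create --name demo-rg --location eastus
az appservice plan create --name demo-plan --resource-group demo-rg --sku S1
az webapp create --name demo-app-12345 --resource-group demo-rg --plan demo-plan

# Deploy from a GitHub repository
az webapp deployment source config --name demo-app-12345 --resource-group demo-rg --repo-url https://github.com/<user>/<repo> --branch main --manual-integration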

Azure Compute Service

Virtual Machine
  • When you deploy a virtual machine, the things that get deployed automatically are: virtual network, NSG, the disks, the VM, network interface, private IP address, public IP address.
  • RDP : 3389
  • Inbound port rules: go to the Networking section in the left blade. IIS lives on port 80, protocol TCP.
  • Port 8172 (Web Deploy) needs to be allowed as an inbound port rule if you want to publish code to the VM using Visual Studio.
  • PuTTY: using this tool we can connect to our Linux-based virtual machine. Use port 22 and the SSH key with the public IP address.
  • nginx: a Linux-based web server, like we have IIS on a Windows machine.
  • Generalization of a virtual machine: creating your own private image of a virtual machine. Once you create an image out of a VM, that VM cannot be used any further, although the image created from it can be. You need to run sysprep on the VM to make it ready for image preparation; then you can create the image using the Capture button. This image file (not a backup file) lives in Azure blob storage.
  • Recovery Services vault: it must be in the same region as your VM. It creates a backup of your VM.
  • Containers
    • They consist of the OS layer itself, any important libraries required to run the application, and the application code.
    • Multiple containers can run on a single VM.
    • The OS layer in a container is far smaller than the full OS that is part of a VM; e.g., if the VM OS is 1 GB, the container OS layer would be around 150 MB.
    • Containers can easily be moved from one VM to another.
  • Docker
    • It's the most common toolset for managing containers.
    • It is used to create, deploy and run applications in containers.
    • You can create common images, such as images of common libraries, and reuse them in your containers.
    • Azure Container Registry: you can push your application container images to Azure Container Registry and reuse them later. The docker push command is used to send images to the registry.
  • Azure Container Instances:
    • This is a service that allows you to deploy a container in isolation.
    • It means that if you have an image of your application as a container, Azure Container Instances can simply run your image in a matter of minutes.
    • You can persist data using an Azure file share.
  • In a Dockerfile, FROM comes first: it specifies the base image for the container.
  • ENTRYPOINT comes last: it is the command that runs the application when the container starts up.



ARM Templates
  • They contain data in JSON format. 
  • Various sections of an ARM template (skeleton below):
    • Resources: used to specify the resources we want to deploy.
    • Variables: values that can be reused within the template.
    • Parameters: values provided during the deployment phase.
    • Outputs: return values from the deployed resources.
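The skeleton of a template with those sections:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": { },
  "variables": { },
  "resources": [ ],
  "outputs": { }
}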
--------------------------------------------------------------------------------------------------------------------------
Docker Steps
  1. The first statement in the Dockerfile must be the FROM statement, to specify the image to use as the base image.
  2. Then specify the Image working directory
  3. Then copy all of the application contents using the COPY command
  4. then use the CMD command to run the PowerShell command 
  5. and the ENTRYPOINT statement to run the dotnet application. 
The sequence of steps would be as follows

FROM mcr.microsoft.com/dotnet/core/sdk:3.1
WORKDIR /apps/demoapp
COPY . .
CMD powershell ./setup.ps1
ENTRYPOINT ["dotnet", "demoapp.dll"]

--------------------------------------------------------------------------------------------------------------------------
Generalization of Virtual machine (Deep understanding)
After running the Sysprep tool inside the VM to generalize it, you can issue the following PowerShell commands to capture a custom image:

# Stop the virtual machine
Stop-AzVM -ResourceGroupName $rgName -Name $vmName -Force

# Get a handle to the virtual machine
$vm = Get-AzVM -Name $vmName -ResourceGroupName $rgName

# Create an image configuration
$image = New-AzImageConfig -Location $location -SourceVirtualMachineId $vm.Id

# Create the custom image
New-AzImage -Image $image -ImageName $imageName -ResourceGroupName $rgName

Azure Logic Apps + Notification Hubs

  • Azure Logic Apps: a serverless cloud service that helps you schedule, automate and orchestrate tasks, business processes and workflows.
  • You create workflows in a designer in Azure Logic Apps with very minimal coding, or no coding at all. A workflow means arranging steps so that complete end-to-end business logic is formed.
  • Non-technical people can use Azure Logic Apps for automation.
  • We can trigger workflows based on events or timers, and leverage connectors to integrate applications.
  • Azure Logic Apps can easily be integrated with Azure Functions.
  • Azure Logic Apps can also integrate apps and data between cloud and on-premises systems.
  • There are many prebuilt templates to get started with Azure Logic Apps.
  • Once we create a logic app we have triggers, e.g. when a message is received in an Azure message queue, an HTTP request is received, an Event Grid resource event occurs, or an email is received in Outlook.

Azure Logic Apps
  • In Logic Apps, workflows are created with an easy-to-use visual designer, combined with a simple workflow-definition language in the code view.
  • Designer-first (declarative) development.
  • Logic Apps is much simpler to use, but this can sometimes cause limitations in complex scenarios.
  • Logic Apps works on the concept of connectors. There are many connectors already available on the marketplace; you use connectors to communicate between different logical units.
  • Logic Apps has a pure pay-per-usage billing model: you pay for each action that gets executed.

Serverless means you don't have to worry about hosting, scaling, monitoring, managing and maintaining, as all these things are done by Microsoft Azure.

Types of triggers we can use in Azure Functions (they can be used with Logic Apps as well):
Queue trigger
timer trigger
event trigger
http trigger... etc

Example: we created a logic app so that whenever we process a claim, we put it back on a Kafka stream (an Azure messaging service could be used as well), and from there our logic app sends an update to the claim owner.

  • Azure Notification Hubs allows you to send notifications to any platform (iOS, Android, Windows etc) from any back-end platform.
  • Notifications allow users to get information, especially on their mobile devices, for any sort of desired information.

The Notification Hubs client API (Microsoft.Azure.NotificationHubs) is used roughly as:

NotificationHubClient hub = NotificationHubClient.CreateClientFromConnectionString(connectionString, hubName);
await hub.SendWindowsNativeNotificationAsync(toastPayload);


API Management

  • Components: 
    • API gateway: Azure API Management is a mature API gateway, e.g. we use it with a microservices architecture.
    • Developer portal: a website for your consumers. It is like a mature/modern Swagger UI: it lets consumers see the definitions of your APIs, submit requests and get responses. You can customize the look and feel of the portal.
    • Azure portal: mocking of APIs is possible here, so we can test the system by simulating the backend.
APIM as a gateway:
  • Response cache: APIM can be used to cache responses and thus decrease the load on the APIs. It has a built-in default cache mechanism, and you can also configure cache policies, such as the cache duration and how responses must be cached.
  • Security: 
    • Authentication, Authorization using OAuth 2.0 or OpenID connect can be implemented here. 
    • APIM can be integrated with Microsoft Entra ID (Azure AD).
    • Rate Limiting and throttling. 
      • Error 429 means too many requests per second. Setting up throttling limits helps stop this error from reaching the backend. Limits can be set in two ways:
        • Amount of data transferred.
        • Rate of calls per second.
    • IP Filtering and Geo Fencing. 
    • Protects the backend API from direct public attacks.
    • Content filtering and validation can be done on requests and responses going through APIM.
  • API Management policies: policies are collections of XML statements used to execute operations on the request or response of an API.
  • Different policies:
    • Check HTTP header: enforces a particular format for HTTP headers.
    • You can use the validate-jwt policy to validate JSON Web Tokens for OpenID Connect authentication (a sketch follows this list).
  • Policies: Inbound, outbound and backend [IMP]
  • Inbound Policies – These policies are executed when the API management API is called. This policy can be used when a human and/or browser-friendly URL should be transformed into the URL format expected by the web service
  • Backend Policies – These policies are executed when API management calls the Backend APIs. Forward the incoming request to the backend service specified in the request context
  • Outbound Policies – These policies are executed when API management returns the response to the caller.
  • To use an existing API (which uses an Open API specification) behind the Azure API Management service, you can use the Import-AzApiManagementApi command.
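A sketch of an inbound validate-jwt policy (the tenant and audience values are placeholders):

<policies>
    <inbound>
        <base />
        <validate-jwt header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized">
            <openid-config url="https://login.microsoftonline.com/{tenant}/v2.0/.well-known/openid-configuration" />
            <audiences>
                <audience>api://my-api-app-id</audience>
            </audiences>
        </validate-jwt>
    </inbound>
</policies>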
Pricing:
  • Developer:
    • Non-production use only.
    • No scaling, no SLA, no service credit.
    • AD integration: yes.
    • 500 requests/sec.
  • Basic:
    • Scale-out to 2 units.
    • 50 MB cache.
    • 1,200 requests/sec.
  • Standard:
    • Scale-out to 4 units.
    • 1 GB cache.
    • 2,500 requests/sec.
    • About 5 times the price of Basic.
  • Premium:
    • Scale-out to 10 units per region.
    • 5 GB cache.
    • Multi-region deployment.

Azure Event Hubs

  • Azure Event Hubs is a highly scalable data-streaming platform and event-ingestion service capable of receiving and processing millions of events per second. Event Hubs can process and store events, data or telemetry produced by distributed software and devices. Data sent to an event hub can be transformed and stored using any real-time analytics provider or batching/storage adapters, and the service provides publish-subscribe capabilities with low latency at massive scale. Example use: IoT.
  • Event Hubs helps manage data coming from multiple devices.
  • If we want to send messages to Event Hubs, e.g. from a .NET Core app, we use the NuGet package Azure.Messaging.EventHubs.
  • EventHubProducerClient is used to send data to an event hub (see the sketch below).
  • EventHubConsumerClient is used to consume the data present on an event hub.
  • A minimum of 2 partitions is required in an event hub, and it is recommended to have 1 consumer per partition. The maximum number of partitions is 32.
  • We cannot delete messages from an event hub, even after we have read them. The only way messages are removed is through the event hub's retention period.
  • Offset: we must write our consumer so that it remembers where it last read from the event hub (or the last batch it read), because messages cannot be deleted directly and old messages remain visible until retention removes them. This position is called the offset, and we must remember the offset value to start from. EventPosition.FromSequenceNumber() is the method used to resume from a known position.
  • We can track the offset by using Azure.Messaging.EventHubs.Processor instead of tracking it ourselves in consumer code; it stores the offset value in an Azure storage account.
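A minimal producer sketch with Azure.Messaging.EventHubs (the connection string and hub name are placeholders):

using System.Text;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Producer;

string connectionString = "<event-hubs-connection-string>";

await using var producer = new EventHubProducerClient(connectionString, "demo-hub");

// Batch events so they go out in a single round trip
using EventDataBatch batch = await producer.CreateBatchAsync();
batch.TryAdd(new EventData(Encoding.UTF8.GetBytes("reading-1")));
batch.TryAdd(new EventData(Encoding.UTF8.GetBytes("reading-2")));

await producer.SendAsync(batch);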

Azure Service Bus

Azure Service Bus
  • It is a messaging service in Azure. Usually the data is JSON, but it could be text or XML as well.
  • Two types of messaging entities are used in Azure Service Bus: 1) queues and 2) topics.
  • An Azure Service Bus queue is different from an Azure Storage queue. The difference is that in a Storage queue we send data as a string, whereas in a Service Bus queue we send data as bytes.
  • A publisher goes ahead and adds a message to a queue, and a consumer goes ahead and consumes the message from the queue.
  • Topic: consumers of a topic are known as subscribers. This is the same publish/subscribe logic used when you subscribe to a YouTube channel: behind the scenes, a topic-like mechanism is implemented there.
  • Difference between Receive and Peek message in Queue
    • Peek will not actually consume the message; it only shows what the first message in the queue is.
    • Receive will consume the message, and the queue count will go down to 0. (Note: the message does not go to the dead-letter queue.)
  • To consume Azure Service Bus from code you need the namespace and its connection string.
  • If we want to give access to all topics and queues in one go, use a shared access policy at the Service Bus namespace level.
  • Otherwise, we have SAS policies at the queue and topic level as well.
  • IQueueClient _client = new QueueClient(_bus_connectionstring, _queue_name);
  • In the background these objects make use of the MessagingFactory object. This provides the internal management of connections.
  • Closing and opening a connection with a QueueClient on Azure Service Bus is a costly process, so do not open and close the connection for every piece of work; instead, reuse the same QueueClient object for multiple operations.
  • In case of a topic we use TopicClient. To create a subscription we use the SubscriptionClient class.
  • TimeToLive: in the case of a queue, the consumer has the time-to-live window (e.g. 1 day) to consume a message. After the time-to-live expires, the message can go to the dead-letter queue or be completely removed from the queue.
  • LockDuration: a message is invisible for the lock duration; after the lock duration elapses, the message becomes available to other consumers if it was not completed by the original consumer.
  • There are 3 parts to a message you send to an Azure Service Bus queue: 1) the body (this is in bytes and is the main data we want to transfer), 2) broker properties, 3) user properties.
  • By default, triggers such as blob and queue triggers are retried up to 5 times. After the fifth retry, the trigger sends the message to a special poison queue.
Topics
  • Filters for subscribers: when sending messages to a topic, we can create filters for subscribers so that each subscriber can filter what kind of messages it wants to receive.
  • By default a SQL filter with the value 1=1 is applied, which means a subscriber gets all messages. If you remove the default filter, the subscriber stops getting any messages.
  • Types of filters: Boolean filters, SQL filters, correlation filters.
  • Always prefer correlation filters over SQL filters wherever possible, for better performance.
  • CorrelationId: this can be used to correlate multiple messages together.
  • The ManagementClient class is used to create and manage subscriptions for a topic in Azure Service Bus; it is used when you want to manage entities, such as creating a queue or topic.
  • How can you enable duplicate detection for your messages? First enable duplicate detection on the entity, then set the MessageId property on your messages; duplicate detection checks the MessageId for duplicates.


  • We have to use the Message class to construct a message.
  • We have to use the Encoding.UTF8.GetBytes method before sending the message to an Azure Service Bus queue.
  • Use the SendAsync method to send the message to the queue (a sketch follows below).
  • Messages in Azure Event Hubs and Azure Storage queues can only last for a maximum of 7 days.
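Putting those three steps together with the Microsoft.Azure.ServiceBus package (the connection string and queue name are placeholders):

using System.Text;
using Microsoft.Azure.ServiceBus;

string connectionString = "<service-bus-connection-string>";

IQueueClient client = new QueueClient(connectionString, "demo-queue");

// The body must be a byte array, hence Encoding.UTF8.GetBytes
var message = new Message(Encoding.UTF8.GetBytes("{ \"orderId\": 1 }"));
await client.SendAsync(message);

// Reuse the client; close it only when the application shuts down
await client.CloseAsync();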