Thursday, 27 May 2021

OAAM 11gR2PS3 Checkpoints - How are they executed

How Checkpoints are executed in OAAM

Checkpoints in OAAM are like barriers that one needs to cross to move ahead; only once you have been fully verified are you allowed to pass.

Basically, OAAM provides different kinds of checkpoints that help the system understand the user better.

I have explained the entire flow of these checkpoints in the video below; kindly watch & share your comments.



Hope this helps :-)

Enjoy :-)

OAAM 11gR2PS3 Condition Types

OAAM Condition Types

Each condition defined in OAAM has a type associated with it. These types define the behavior of a condition, and based on the type we can actually use a condition within a rule.

Condition types are explained in the video below; kindly watch & please share your comments.



Hope this helps :-)

Enjoy :-)

OAAM 11gR2PS3 Conditions & Condition Types

OAAM Conditions

OAAM provides pre-packaged conditions that are to be used while defining rules. They are a very important part of any policy defined in OAAM.

Conditions can't be created by an admin; they are fixed & can only be modified in terms of the output we want from them.

I have explained what conditions are in the video shared below; kindly watch & please share your comments.



Hope this helps :-)

Enjoy :-)


Saturday, 19 January 2019

Understanding Blue Green Deployment

What is this blue-green all about?

It is a way of switching traffic from one deployment to another. Say you have a new version of software to roll out, which has been successfully tested in the staging environment. Now you want it to go live, and here in Kubernetes you have this magic phrase: blue-green deployment.

Definition: "A blue green deployment uses the service label selector to switch all traffic from one deployment to another."

If the above definition sounds cryptic, then let's look at an image to understand it:



Here, if you notice, we have app:hello version:1.0.0 currently deployed. Now let's say we have a new version of the hello app, i.e. 2.0.0:


First, test your new deployment, i.e. version 2.0.0. Once you have verified it, it's time to switch live traffic to the version 2.0.0 deployment.

Now let's get back to the definition we discussed previously. It says to use the service label selector to switch traffic. Awesome, we are on track to understanding the traffic switch.


Here we have successfully switched to the new version 2.0.0 with the help of "selectors & labels". That means if we want to get all this done, the mantra is to understand "labels & selectors".
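To make the switch concrete, here is a minimal sketch of the service side of it, assuming a Kubernetes Service named hello sitting in front of the two hello deployments (the service name and ports are illustrative, not taken from a real setup):

# Service for the hello app. Live traffic goes to whichever pods match this
# selector, so the version label acts as the blue-green switch.
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello
    version: "1.0.0"   # change to "2.0.0" to cut all traffic over to the new deployment
  ports:
    - port: 80
      targetPort: 8080

Re-applying the service with version: "2.0.0" (for example via kubectl apply) points it at the new pods in one step, while the 1.0.0 deployment stays around in case you need to switch back.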

Got it. Now you must be asking where the heck this labels & selectors discussion is. No need to worry, we will soon see a video tutorial, because that topic needs much more attention.

Very well, we have built a basic understanding of blue-green deployment.

Enjoy :-)

Setting up NGINX Plus OpenID Connect with IDCS on Google Cloud





This tutorial describes the NGINX Plus OpenID Connect feature working with IDCS (Oracle Identity Cloud Service) on Google Cloud Platform.

It covers the following:
a) Creating an NGINX Plus VM instance on Google Cloud.
b) Setting up the NGINX Plus OpenID Connect environment.
c) Configuring a client on IDCS.
d) Executing the NGINX Plus configure script.
e) Executing the 3-legged flow & identifying the bug with NGINX Plus.
f) Workaround for the identified bug.
g) Working demo.


Link to the issues identified:

  • https://github.com/nginxinc/nginx-openid-connect/issues 

Enjoy :-)


Proposed Model of Continuous Integration, Continuous Delivery & Deployment

Continuous Integration & Delivery: From the Dev Team's Perspective

  •   Step 1: Developer starts working on a code fix/enhancement.




  1. The developer commits code to the development branch.
  2. The build process gets kicked off and unit tests are executed.
  3. The result of step 2 is a Docker image.
  4. The container image gets uploaded to a container registry such as GCR (Google Container Registry).
  5. This latest image needs to be deployed on the dev environment. This can be done with Kubernetes Engine in one of the following ways:
    1. Manually - Update the pod configuration.yaml file with the latest image version. This will create a new pod with the latest image (see the manifest sketch after this list).
    2. Automation - Write a serverless function with a cron job that polls the container registry for the latest image. If a new image is found, it updates the pod config & the result is a new pod with the latest image.
  6. Perform tests on the dev environment deployed with the latest image.
    1. Integration tests can be triggered manually or in an automated way (using Jenkins/Spinnaker).
    2. Manual tests can be performed as well.
Step 2 - Developer finds an issue while testing the code fix (performed in Step 1)




  1. The developer finds an issue while testing the image generated in Step 1.
    1. Maybe the integration tests failed. Or,
    2. There are issues with the image deployment. Or,
    3. An issue was caught during manual testing.
    4. Etc.
  2. The developer fixes the code again & commits it to the dev branch.
  3. The build gets triggered, unit tests are run, and a new image gets generated.
  4. This image gets uploaded to the container registry.
  5. The new image containing the code fix needs to be deployed in the dev environment.
  6. The developer retests the code fixes.

Step 3 - Testing completed, now merge the changes into the master branch

  1. Now it's time to commit the code to the master branch, as all tests have passed with the recent fix.
  2. The same steps described above will be followed.
  3. The one change here is that the container registry will now have a public release.
    1. Initially the image was for testing purposes & its scope was internal use only.
    2. Now that the changes are finalized, it has to be available for public use.
    3. Public use may or may not be restricted, as per the management decision.

Continuous Deployment:

  • With continuous deployment come continuous challenges:
    • How is an update rolled out?
    • Does this update need to be rolled out completely or partially? This brings in the concept of canary deployment.
    • How do we switch traffic from the old version to the new version? This brings in blue-green deployment.
  • Below is a basic possible deployment flowchart, briefly describing how the update rollout happens:


  1. 5(a) - The container image is now ready to be deployed to the canary deployment.
  2. The container image is promoted to canary.
  3. Once a set of users verifies that the latest deployment on canary is working fine, it needs to be deployed to production.
  4. The container image is promoted to production.
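Below is a minimal sketch of the canary step, reusing the illustrative hello image and names from earlier: the stable deployment keeps most of the replicas while a single canary replica running the new version receives a small share of the service traffic.

# Stable deployment: serves most of the traffic.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-stable
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
      track: stable
  template:
    metadata:
      labels:
        app: hello
        track: stable
    spec:
      containers:
        - name: hello
          image: gcr.io/my-project/hello:1.0.0
---
# Canary deployment: one replica running the new image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
      track: canary
  template:
    metadata:
      labels:
        app: hello
        track: canary
    spec:
      containers:
        - name: hello
          image: gcr.io/my-project/hello:2.0.0
---
# The service selects on the app label only, so roughly 1 request in 4 hits the canary.
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello
  ports:
    - port: 80
      targetPort: 8080

Once the canary users confirm the new version is working fine (step 3), promotion to production is just rolling the stable deployment to the new image and removing the canary.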

Enjoy :-)

Understanding Update Rollouts in Continuous Deployment

In the << Proposed Model of Continuous Integration & Deployment >> post we discussed how a developer performs a code change/fix, how it gets propagated through the pipeline, and how it becomes part of delivery & deployment.
In continuation of that, the question arises of how an update is rolled out, what the possible ways of doing it are, & what the benefits are.

Let's start the journey with a possible deployment architecture explaining how an update is rolled out:

Note: In this deployment example we will consider a replica set of 3 identical pods having the same image, i.e. "hello1".
  • An updated image is available in the container registry; this needs to be rolled out in the deployment.

  • Now we have a new, updated image, say "hello2". In this case we will tell our Kubernetes master to create a second replica set whose containers use the image "hello2".


  • You will notice that with the creation of the 2nd replica set, the service pointing to replica set (1) will gradually start pointing to the replica set (2) pods.


  • The first replica set's pods will start decreasing & the second replica set's pods will increase.



Note: In this deployment we will have at most 4 pods at a time & at least 3 pods.




  • Finally you will observe that all 3 pods of replica set (2) are created & you are left with the last pod of replica set (1), which will also vanish soon.

  • Finally, the new image version is rolled out.
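The "at most 4 pods & at least 3 pods" behaviour from the note above corresponds to a rolling update strategy. Here is a minimal sketch of how a Deployment could express it, using the hello1/hello2 image names from this example (everything else is illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most 1 extra pod during the rollout, so never more than 4 pods
      maxUnavailable: 0  # no pod is removed before its replacement is ready, so never fewer than 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: hello2   # changing hello1 -> hello2 creates the second replica set and starts the rollout

Changing the image from hello1 to hello2 and applying the manifest is what makes Kubernetes create the second replica set and scale the two sets up and down exactly as described above.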




Enjoy :-)