Thursday, 1 July 2021

Testing 2-way SSL with openssl s_client

The intent of this post is to learn how to use the openssl s_client program to test 2-way (mutual) SSL between a client & a server.

Here I am assuming you have configured your server for 2-way SSL & have generated or gathered the required certificates.

List of files required:

a) client certificate
b) client private key -> if a passphrase is used, you must know it
c) root CA public certificate -> i.e. the certificate of the CA that signed the server certificate you will receive during the handshake.


Openssl s_client - 2 way ssl test

bash> openssl s_client -connect abc.com:443 -CAfile ca.cert.pem -key client_key.pem -cert client_cert.pem -tls1_2 -state -quiet
Enter pass phrase for client_key.pem:

SSL_connect:before/connect initialization
SSL_connect:SSLv3 write client hello A
SSL_connect:SSLv3 read server hello A
depth=0 C = XX, L = Default City, O = Default Company Ltd, CN = abc.com
verify error:num=18:self signed certificate
verify return:1
depth=0 C = XX, L = Default City, O = Default Company Ltd, CN = ca.com
verify return:1
SSL_connect:SSLv3 read server certificate A
SSL_connect:SSLv3 read server key exchange A
SSL_connect:SSLv3 read server certificate request A
SSL_connect:SSLv3 read server done A
SSL_connect:SSLv3 write client certificate A
SSL_connect:SSLv3 write client key exchange A
SSL_connect:SSLv3 write certificate verify A
SSL_connect:SSLv3 write change cipher spec A
SSL_connect:SSLv3 write finished A
SSL_connect:SSLv3 flush data
SSL_connect:SSLv3 read server session ticket A
SSL_connect:SSLv3 read finished A
SSL3 alert read:warning:close notify
SSL3 alert write:warning:close notify

Note: ca.cert.pem is the root CA public certificate, while the other two are the client certificate & the client private key, which is protected by a passphrase.


Hope this helps :-)
Enjoy :-)

Creating user certificates with encrypted private key using openssl

The intent of this post is to list the steps to generate a user certificate, signed by your own root CA, that has a private key encrypted with a passphrase.


Generate private key with passphrase

bash> openssl genrsa -des3 -passout pass:1234 -out client_key.pem 2048
(the passphrase has to be at least 4 characters long)

To verify that the private key is encrypted, the easy way is to open it in an editor; it will have content like:

-----BEGIN RSA PRIVATE KEY-----
Proc-Type: 4,ENCRYPTED
DEK-Info: DES-EDE3-CBC,974D80EBEF938726

hWANCxIG3lT1qaoTqza84pk10JeGD2vUXoVRj92WI2k+eYJvVhnW/tz5cZzNeozu
............................................
............................................
............................................
-----END RSA PRIVATE KEY-----
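As a quick sanity check you can also verify the key from the command line instead of eyeballing it. A small sketch, reusing the filename & passphrase from the command above (note that newer OpenSSL releases may write a PKCS#8 "BEGIN ENCRYPTED PRIVATE KEY" header instead of the PKCS#1 form shown above; the checks below work for both):

```shell
# Generate the encrypted key (same command as above).
openssl genrsa -des3 -passout pass:1234 -out client_key.pem 2048 2>/dev/null

# An encrypted key's PEM contains the word ENCRYPTED (in either header form).
grep -q "ENCRYPTED" client_key.pem && echo "key is encrypted"

# Decrypt with the passphrase and check the key's internal consistency.
openssl rsa -in client_key.pem -passin pass:1234 -noout -check
```

If the passphrase is wrong, the last command fails instead of printing "RSA key ok".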

Generate csr using above generated private key

bash> openssl req -out client.csr -new -key client_key.pem -sha256
(to proceed, it will ask you for the private key passphrase; the -nodes flag is not needed here, as it only applies when req generates a new key)


Sign the user certificate with the Root CA

bash> openssl x509 -req -days 360 -in client.csr -CA ca.cert.pem -CAkey ca.key.pem -CAcreateserial -out client_cert.pem -sha256
(you will be asked for the CA private key passphrase, if one is set)
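Putting the three steps together, here is an end-to-end sketch you can run in an empty directory. Since the post assumes you already have a root CA, a throwaway self-signed CA (ca.key.pem / ca.cert.pem, left without a passphrase for brevity) is created first; all subject names are illustrative:

```shell
# 0) Throwaway self-signed root CA (the post assumes you already have one).
openssl req -x509 -newkey rsa:2048 -keyout ca.key.pem -out ca.cert.pem \
  -days 360 -nodes -subj "/CN=ca.com" 2>/dev/null

# 1) Client private key, encrypted with a passphrase.
openssl genrsa -des3 -passout pass:1234 -out client_key.pem 2048 2>/dev/null

# 2) CSR generated from that key (subject supplied non-interactively).
openssl req -out client.csr -new -key client_key.pem -passin pass:1234 \
  -subj "/CN=client.abc.com" -sha256

# 3) Sign the CSR with the root CA.
openssl x509 -req -days 360 -in client.csr -CA ca.cert.pem -CAkey ca.key.pem \
  -CAcreateserial -out client_cert.pem -sha256 2>/dev/null

# The signed client certificate should now verify against the root CA.
openssl verify -CAfile ca.cert.pem client_cert.pem
```

If everything worked, the last command prints "client_cert.pem: OK", and the resulting three files are exactly what the s_client test in the previous post needs.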



Hope this helps :-)
Enjoy :-)

Thursday, 3 June 2021

How to block Blacklisted User with OAAM PreAuthenticationCheckpoint

Block Blacklisted User with OAAM Pre Authentication Check


We can block blacklisted users using rules in OAAM. Say we want to do this at the pre-authentication checkpoint: we add the blacklisted user to a group, attach that group to a condition, & attach that condition to a rule. For us, all this enablement comes pre-seeded in OAAM (I am assuming you have imported the snapshot). View this video & get a basic understanding of how policies, rules & conditions come into action at run time.



Hope this helps :-)

Enjoy :-)

How to block a blacklisted IP/IP Range with OAAM Post authentication check

Configure Blacklisted IP in OAAM

We can block an IP or a range of IPs at the post-authentication checkpoint. This use case shows which rules, conditions & groups help you achieve this in OAAM.


The below video demonstrates how to achieve this use case;



Hope this helps :-)

Enjoy :-)

How Does the OAAM Scoring Engine Work?

 What role does the scoring engine play? What is the exact flow of the scoring mechanism?

To determine a risk score, each level applies its scoring engine to the results from one level below. For example, to determine the policy score, the scoring engine of the policy is applied to the scores of the rules within the policy. To determine the checkpoint score, the scoring engine of the checkpoint is applied to the scores of the policies within the checkpoint. The checkpoint score and action are the final score and action in the assessment. Alerts are propagated from the rule level to the final level.
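As a toy illustration (not OAAM's actual code), here is how a policy score might be derived from rule scores, assuming the policy uses a simple "maximum" scoring engine; the rule scores are made-up values:

```shell
# Made-up rule scores inside one policy.
rule_scores="500 700 300"

# A "maximum" scoring engine: the policy score is the highest rule score.
policy_score=0
for s in $rule_scores; do
  if [ "$s" -gt "$policy_score" ]; then policy_score=$s; fi
done
echo "policy score: $policy_score"
```

Here 700 becomes the policy score; the checkpoint level would then apply its own engine to the policy scores in the same fashion.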

I have prepared a video series explaining the role, need & workflow of the scoring engine. Kindly watch & share your comments.







 
Hope this helps :-)

Enjoy :-)

OAAM Policy Weights

 What role do policy weights play in OAAM?

A weight is a multiplier value applied to a policy's score to influence the impact that policy has on the total score. Policies have default weights. Weight is used only when a given policy or checkpoint uses a "weighted" scoring engine; the weighted scoring engine takes its weights from the subcomponents.

For example, if you choose the weighted scoring engine at the policy level, Oracle Adaptive Access Manager uses the weight specified at each rule level when calculating the policy score. Similarly, when you choose a weighted scoring engine at the policy set level, Oracle Adaptive Access Manager uses the weights specified for each policy. The sum of each policy's score multiplied by its weight is divided by the total number of policies multiplied by 100. The resulting range is 0 to 1000.
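To make the arithmetic concrete, here is a small sketch of that formula with three made-up policy score:weight pairs (the exact rounding OAAM applies may differ):

```shell
# Weighted scoring: sum(score x weight) / (number of policies x 100).
total=0
n=0
for pair in "500:100" "700:50" "300:100"; do
  score=${pair%%:*}     # part before the colon
  weight=${pair##*:}    # part after the colon
  total=$((total + score * weight))
  n=$((n + 1))
done
checkpoint_score=$((total / (n * 100)))
echo "checkpoint score: $checkpoint_score"
```

With these numbers the result is 383, which falls inside the documented 0 to 1000 range.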

I have explained the functioning of policy weights in the video shared below. Kindly watch & let me know if you have any comments.



Hope this helps :-)

Enjoy :-)

Thursday, 27 May 2021

OAAM 11gR2PS3 Post Authentication Checkpoint

Post Authn Checkpoint

The post-authentication checkpoint is a really important step in the checkpoint flow. How it needs to be configured & what the different outcomes of this checkpoint could be are explained in the video below.




Hope this helps :-)

Enjoy :-)

OAAM 11gR2PS3 Checkpoints - Basic Understanding

OAAM Checkpoints Part-1

With the help of checkpoints, one enforces the policies that are to be executed on each check made at a checkpoint in OAAM.

Checkpoints are fixed in number, but what you can do & enforce at them is completely configurable via OAAM Admin.

Kindly watch the video below for a better understanding of OAAM checkpoints; this topic is divided into 3 parts:


Hope this helps :-)

Enjoy :-)

OAAM 11gR2PS3 Checkpoints - How are they executed

How Checkpoints are executed in OAAM

Checkpoints in OAAM are like barriers that one needs to cross to move ahead; only once you have been fully verified are you allowed to pass.

Basically, OAAM provides different kinds of checkpoints that help the system get to know the user in a better way.

I have explained the entire flow of these checkpoints in the video below; kindly watch & share your comments.



Hope this helps :-)

Enjoy :-)

OAAM 11gR2PS3 Conditions Types

OAAM Conditions Types

Each condition defined in OAAM has a type associated with it. These types define the behavior of a condition, and based on this we can actually use a condition within a rule.

Condition types are explained in the video below; kindly watch & please share your comments.



Hope this helps :-)

Enjoy :-)

OAAM 11gR2PS3 Conditions & Conditions Types

OAAM Conditions

OAAM provides pre-packaged conditions that are used while defining rules. They are a very important part of any policy defined in OAAM.

Conditions can't be created by an admin; they are fixed & can be modified only in terms of the output we want from them.

I have explained what conditions are in the video shared below; watch & please share your comments.



Hope this helps :-)

Enjoy :-)


Saturday, 19 January 2019

Understanding Blue Green Deployment

What is this blue-green all about?

It is a way of switching traffic from one deployment to another. Say you have a new version of software to roll out, which has been successfully tested in the staging environment. Now you want it to go live, and here in Kubernetes you have this magic phrase: blue-green deployment.

Definition: "A blue green deployment uses the service label selector to switch all traffic from one deployment to another."

If the above definition sounds cryptic, then let's look at an image to understand it;



If you notice here, app:hello version:1.0.0 is currently deployed. Now let's say we have a new version, i.e. 2.0.0, of the hello app;


First test your new deployment, i.e. version 2.0.0. Once you have verified it, it's time to switch live traffic to the version 2.0.0 deployment.

Now let's get back to the definition we discussed previously. It says: use the service label selector to switch traffic. Awesome, we are on track to understanding the traffic switch.


Here we have successfully switched to the new version 2.0.0 with the help of "selectors & labels". That means if we want to get all this done, the mantra is to understand "labels & selectors".
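Concretely, the "magic" is just a one-line change in the Service manifest. A minimal sketch, using the hello/version labels from the example above (the service name and ports are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello
    version: 2.0.0   # was 1.0.0; changing this line switches all traffic to green
  ports:
    - port: 80
      targetPort: 8080   # assumed container port
```

Applying the updated manifest (kubectl apply -f service.yaml) flips the traffic; rolling back is just changing the selector back to 1.0.0.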

Got it. Now you must be asking: where is this labels & selectors discussion? No need to worry, we will soon see a video tutorial, because that topic needs much more attention.

Very well, we now have a basic understanding of blue-green deployment.

Enjoy :-)

Setting up NGINX Plus openid connect with IDCS on Google Cloud





This tutorial describes the NGINX Plus OpenID Connect feature working with IDCS (Oracle Identity Cloud Service) on Google Cloud Platform.

It features the following:
a) Creating an NGINX Plus VM instance on Google Cloud.
b) Setting up the NGINX Plus OpenID Connect environment.
c) Configuring a client on IDCS.
d) Executing the NGINX Plus configure script.
e) Executing the 3-legged flow & identifying the bug in NGINX Plus.
f) Workaround for the identified bug.
g) Working demo


Identified issues:

  • https://github.com/nginxinc/nginx-openid-connect/issues 

Enjoy :-)


Proposed Model of Continuous Integration, Continuous Delivery & Deployment

Continuous Integration & Delivery: From Dev Team Perspective

  •   Step 1: A developer starts working on a code fix/enhancement.




  1. The developer commits code to the development branch.
  2. The build process gets kicked off & unit tests are executed.
  3. The result of step 2 is a Docker image.
  4. The container image gets uploaded to a container registry such as GCR (Google Container Registry).
  5. This latest image needs to be deployed on the dev env. This can be done with Kubernetes Engine in one of two ways:
    1. Manually - update the pod configuration .yaml file with the latest image version. This will create a new pod with the latest image.
    2. Automation - write a serverless function with a cron job polling the container registry for the latest image. If one is found, it updates the pod config & the result is a new pod with the latest image.
  6. Perform tests on the dev env. deployed with the latest image.
    1. Integration tests can be triggered manually or in an automated way (using Jenkins/Spinnaker).
    2. As well as performing manual tests.
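The automation in step 5 can be sketched as a tiny decision routine: compare the digest of the deployed image with the latest digest in the registry and redeploy only when they differ. In a real function the digests would come from registry & cluster APIs (e.g. gcloud / kubectl); here they are stub values:

```shell
# Stub values standing in for registry/cluster lookups.
deployed_digest="sha256:aaa111"
latest_digest="sha256:bbb222"

# Redeploy only when the registry has something newer.
if [ "$deployed_digest" != "$latest_digest" ]; then
  action="deploy"   # e.g. kubectl set image deployment/hello hello=IMAGE@DIGEST
else
  action="skip"
fi
echo "$action"
```

The same comparison works whether the cron job runs as a serverless function or as an in-cluster CronJob.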

Step 2 - The developer finds an issue while testing the code fix (performed in Step 1)




  1. The developer finds an issue while testing the image generated in Step 1.
    1. Maybe the integration tests failed. Or,
    2. there were issues with the image deployment. Or,
    3. an issue was caught during manual testing.
    4. Etc.
  2. The developer fixes the code again & commits it to the dev branch.
  3. The build gets triggered, unit tests are performed & a new image gets generated.
  4. This image gets uploaded to the container registry.
  5. The new image containing the code fix is deployed to the dev env.
  6. The developer retests the code fixes.

Step 3 - Testing completed; now merge the changes into the master branch

  1. Now it's time to commit the code to the master branch, as all tests have passed with the recent fix.
  2. The same steps as described above will be followed.
  3. The one change here is that the container registry will now have a public release.
    1. Initially the image was for testing purposes & its scope was internal use only.
    2. Now that the changes are finalized, it has to be available for public use.
    3. Public use may or may not be restricted, as per the management decision.

Continuous Deployment:

  • With continuous deployment come continuous challenges:
    • How is an update rolled out?
    • Does this update need to be rolled out completely or partially? This brings in the concept of Canary Deployment.
    • How do we switch traffic from the old version to the new version? This brings in Blue-Green Deployment.
  • Below is a basic possible deployment flowchart, briefly describing how the update rollout happens.


  1. 5(a) The container image is now ready to be deployed to the canary deployment.
  2. The container image is promoted to canary.
  3. Once a set of users verifies that the latest deployment on canary is working fine, it needs to be deployed to production.
  4. The container image is promoted to production.

Enjoy :-)

Understanding Updates Rollout in Continuous Deployment

As discussed in the << Proposed Model of Continuous Integration & Deployment >> post, a developer performs a code change/fix, it gets propagated through the pipeline, and it becomes part of delivery & deployment.
In continuation to that, the question arises of how an update is rolled out, what the possible ways to do it are & what benefits they bring.

Let's start the journey with a possible deployment architecture explaining how an update is rolled out:

Note: In this deployment example we will consider a replica set of 3 identical pods running the same image, i.e. "hello1".
  • An update with a new image is available from the container registry; this needs to be rolled out in the deployment.

  • Now we have a new updated image, say "hello2". In this case we will tell our Kubernetes master to create a second replica set whose containers run image "hello2".


  • You will notice that with the creation of the 2nd replica set, the service pointing to replica set (1) will gradually start pointing to the pods of replica set (2).


  • The pods of the first replica set will start decreasing & the pods of the second replica set will increase.



Note: During this rollout we will have at most 4 pods & at least 3 pods at any time.




  • Finally you will observe that all 3 pods of replica set (2) are created & you are left with the last pod of replica set (1), which will soon vanish as well.

  • Finally, the new image version is rolled out.
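The "at most 4, at least 3 pods" behaviour above maps directly onto a Deployment's rolling-update strategy. A minimal sketch; names and the registry path are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most replicas + 1 = 4 pods during the rollout
      maxUnavailable: 0  # never fewer than 3 ready pods
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: registry.example.com/hello2   # updating this line triggers the rollout
```

Under the hood, Kubernetes performs exactly the two-replica-set dance described above when the image field changes.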




Enjoy :-)

Details about Canary Deployment

About Canary Deployment:

  • Definition: Canary deployments are a pattern for rolling out releases to a subset of users or servers. The idea is to first deploy the change to a small subset of servers, test it, and then roll the change out to the rest of the servers.
  • This means that when I have to deploy an update in production, I can do it the canary way. This allows me to deploy the change in production, but only to a subset of servers. This way a subset of users can test the new update & report any issues. If all goes well, the update can be rolled out completely.
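One common way to get that "small subset" in Kubernetes is a second, smaller Deployment behind the same Service. A minimal sketch; the names, labels, and replica counts are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-canary
spec:
  replicas: 1                  # e.g. 1 canary pod next to 3 stable ones (~25% of traffic)
  selector:
    matchLabels:
      app: hello
      track: canary
  template:
    metadata:
      labels:
        app: hello             # same app label as stable, so the Service also routes here
        track: canary
    spec:
      containers:
        - name: hello
          image: registry.example.com/hello:2.0.0
```

Scaling the canary Deployment up while scaling the stable one down then completes the rollout.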




How to switch traffic from the old version to the new version?

This can be made possible with a blue-green deployment. More info: Understanding blue-green deployment.

Enjoy :-)

Friday, 18 January 2019

Error: The authentication scheme protecting the resource sets 'Secure' OAMAuthnCookie/ObSSOCookie, but the resource is not being accessed via secure http

Error Statement:

If the authentication scheme is configured to set a "Secure" OAMAuthnCookie/ObSSOCookie and the user is accessing an insecure resource, the browser may enter an authentication loop and show an error like:

"The authentication scheme protecting the resource sets 'Secure' OAMAuthnCookie/ObSSOCookie, but the resource is not being accessed via secure http."


Workaround:

In the authentication scheme, remove the following parameter & save the changes;

Syntax for 11g Webgate and OAMAuthnCookie:
ssoCookie=Secure

Syntax for 10g Webgate and ObSSOCookie:
ssoCookie:Secure

Make sure the changes are applied properly, i.e. that the policy sync-up at the OAM server happens successfully.
You may restart the web server instance (OHS/Apache/IIS etc.) or wait for the webgate cache to be cleared. Try accessing the protected resource once again; you should be prompted for login.

Resolution:

Recheck your SSL settings at the web server end.




Enjoy :-)



Wednesday, 5 July 2017

Increase docker pool size by changing storage driver

Configure Docker with the devicemapper storage driver

NOTE: This is for Docker CE & Docker EE

Issue:

By default you will notice that the Docker storage driver is "btrfs", i.e. the default storage, which is limited to 20GB of data storage.

default storage driver

Usually you will find this storage too limited to use, as most of the time we have to install multiple images, some as large as 4/8/10GB. With this default storage you will end up getting frustrated.

Solution:

Configure Docker with "devicemapper" storage driver.

How to do this:

1) Stop Docker
$ sudo systemctl stop docker 

2) Edit /etc/docker/daemon.json (create it if it does not yet exist). Assuming the file was empty, add the following contents.
{
  "storage-driver": "devicemapper"
}
Note: Docker will not start if the daemon.json file contains badly-formed JSON
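Since a malformed daemon.json will stop Docker from starting at all, it is worth validating the file before step 3. A small sketch, using python3 purely as a convenient JSON checker (a local copy of the file is used here so nothing under /etc is touched):

```shell
# Stand-in copy of /etc/docker/daemon.json with the contents from step 2.
cat > daemon.json <<'EOF'
{
  "storage-driver": "devicemapper"
}
EOF

# json.tool exits non-zero on malformed JSON, so Docker-breaking typos surface here.
python3 -m json.tool daemon.json >/dev/null && echo "daemon.json: valid JSON"
```

Run the same check against the real /etc/docker/daemon.json before restarting the daemon.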

3) Start Docker
$ sudo systemctl start docker

4) Verify that the daemon is using the devicemapper storage driver. Use the docker info command and look for Storage Driver
devicemapper storage driver

Note: This host is running in loop-lvm mode, which is not supported on production systems. This is indicated by the fact that the Data loop file and Metadata loop file are files under /var/lib/docker/devicemapper/devicemapper.


Hope this helps :-)
Enjoy :-)

Monday, 3 July 2017

Enable SSL in between OHS & Outbound Applications

Enabling SSL in between OHS & OutBound Applications

Prerequisites:
  1. OHS SSL is enabled.
  2. Outbound App SSL is enabled like OAM, Weblogic, OIM etc.
What we are aiming for is to set up SSL between OHS & outbound apps.

Eg: Consider that you want to proxy your OAM server via OHS acting as a load balancer/proxy (call it what you like). This is a very common use case where your OAM servers sit in your data center & you don't want their hostname/IP to be exposed. So what you usually do is proxy OAM via OHS.
  • Consider that your OHS server name is https://abc.com. So if an admin needs to access the oamconsole, the admin will fire the URL https://abc.com/oamconsole
  • To enable this use case, /oamconsole is to be added in the ssl.conf/mod_wl_ohs.conf file (the usual way).
  • But the catch is that both our OHS & OAM are in SSL mode.
  • This means they will do a handshake before starting to talk to each other.
  • As we all know, during the handshake the server sends its certificate, and this cert is verified by the client, i.e. here the mod_wl_ohs module of OHS. So the wallet it uses has to have the trusted certificate entry in it.

The steps you need to follow for this are as below;

  • Import the certificate used by the outbound app (such as Oracle WebLogic Server) into the Oracle HTTP Server wallet as a trusted certificate.
    • To add the trusted certificate you can use the orapki utility or any tool of your choice.
    • <MW_HOME>/oracle_common/bin/orapki wallet add -wallet ./ -trusted_cert -cert cacert.pem -auto_login_only
    • Note: './' is used since we assume you are running this command from the directory where your cwallet.sso is present. You can substitute the directory path of cwallet.sso as well.
  • Now you need to add 2 directives in ssl.conf or mod_wl_ohs.conf:
    • SecureProxy On
      WlSSLWallet "<wallet location>" 

Complete Eg:

<Location /console>
SetHandler weblogic-handler
WebLogicHost xyz.us.domain.com
WebLogicPort 7001
SecureProxy ON
WlSSLWallet "/MW_HOME/keystores/newwallet"
</Location>


Now start your OHS server and try to access the proxied URL; you should be able to make a successful connection. You can also confirm this by capturing Wireshark traces.

Hope this helps... :-)


Enjoy :-)