December 18, 2014

Become your own Certificate Authority (CA) - fedora, centos, redhat based linux

The steps below can be used to set yourself up as your own Certificate Authority (CA) and sign SSL certificates for your hosts

openssl config
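
A minimal sketch of the openssl commands involved (file names, subject fields and validity periods below are just placeholders):

# 1. CA private key and self-signed CA certificate (~10 years)
openssl genrsa -out ca.key 4096
openssl req -x509 -new -key ca.key -sha256 -days 3650 -out ca.crt \
    -subj "/O=Example/CN=Example Internal CA"

# 2. per-host key and certificate signing request (CSR)
openssl genrsa -out host.example.com.key 2048
openssl req -new -key host.example.com.key -out host.example.com.csr \
    -subj "/O=Example/CN=host.example.com"

# 3. sign the host CSR with the CA key/cert
openssl x509 -req -in host.example.com.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -sha256 -days 730 -out host.example.com.crt

# 4. verify the signed certificate chains back to the CA
openssl verify -CAfile ca.crt host.example.com.crt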

November 26, 2014

Using tshark to decrypt SSL traffic

  • Get the private key of the SSL server in PEM/PKCS12 format (if conversion is required, see the link below) and save only the key into a file
  • Have the tcpdump/snoop capture file to be decrypted
  • Check tshark's default preferences relating to SSL
  • Run tshark with the ssl.keys_list parameter, as shown in the example below, to read the decrypted SSL data
  • The ssl.keys_list preference has 4 values: x.x.x.x (IP), port, upper-layer protocol, private RSA key filename
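
For example (the IP address, port and file paths here are placeholders), something like this prints the decrypted application data with the tshark versions of that era:

tshark -r capture.pcap \
    -o "ssl.keys_list:10.1.1.5,443,http,/path/to/server.key" \
    -o "ssl.debug_file:/tmp/ssl_debug.log" \
    -V

Note that this only works when the captured sessions used RSA key exchange; with DHE/ECDHE cipher suites the server key alone cannot decrypt the traffic.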

Info on key conversions and more at http://wiki.wireshark.org/SSL

November 25, 2014

sssd - new learnings


Quoted text from this link, explaining how sssd caches password info

One of the reasons people used to
use the shadow map was to expose the encrypted password so that cached
passwords were available for all users.

Our mechanism for caching passwords is different. We don't acquire the
user's password from LDAP and then authenticate locally. Instead, we
communicate with LDAP or Kerberos and ask it whether the provided
password authenticates correctly. If it does, we hash the password
locally and then it can be used for offline authentication when the
authentication server is unreachable.

So with SSSD, cached passwords only work for users that have logged in
at least once previously. This significantly reduces the vulnerability
to offline dictionary attacks on arbitrary users. (Which was a serious
problem with shadow map passwords).
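
In sssd.conf terms, the relevant knob is cache_credentials. A minimal sketch of a domain section (the domain name and providers below are assumptions):

[domain/example.com]
# store a local hash of the password after each successful online login,
# so the user can still authenticate when LDAP/the KDC is unreachable
cache_credentials = True
id_provider = ldap
auth_provider = krb5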

November 18, 2014

TCP listen queue full

Some clients started to have TCP connection issues, and it turned out to be due to the TCP listen queue filling up on the server side. The server was written in Perl and running on AIX.

While troubleshooting the issue with tcpdump, I could see TCP SYN packets arriving at the server from the client, but the server was not responding. That led me to look at TCP statistics, and 'netstat -s' showed the following:

$netstat -s|grep 'queue'
                933122 discarded due to listener's queue full

This counter was increasing over time. Looking at the Perl code, the listen backlog was set to 100:

use IO::Socket::INET;

my $socket = IO::Socket::INET->new(
    LocalHost => $listen,
    LocalPort => $port,
    Proto     => 'tcp',
    Listen    => 100,
    Reuse     => 1,
) or die "bind/listen failed: $!";

However, more clients were added to this application recently, and the server was receiving more than 100 connection attempts at a time, which caused packets to be dropped. The simple fix was to increase the listen backlog parameter in the Perl code and restart the server.

At the OS layer, the maximum backlog is capped by the 'somaxconn' parameter. You can check it on AIX with the command below:

$no -o somaxconn
somaxconn = 16384
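
If the OS-level cap itself ever needs raising (16384 is already generous here), something like the following should work on AIX; the value is only an example:

no -p -o somaxconn=32768

The -p flag applies the change now and makes it persistent across reboots. The effective queue depth for a socket is still whatever the application passes to listen(), capped by somaxconn.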



November 14, 2014

huge lastlog, faillog and tallylog on linux?

If you have huge UID numbers in your namespace, you will see some huge files under /var/log on Red Hat.

$ id
uid=994295558(venkat) gid=50000(admin) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

$ ls -ls /var/log|awk ' $1*4096 < $6'
8 -rw-r--r--. 1 root root 290334301184 Nov 14 13:32 lastlog
24 -rw-------. 1 root root 63634915328 Nov 14 11:41 tallylog



These are sparse files and do not actually take up much disk space (see the first column of the ls output above). The size comes from the (old) way lastlog indexes its records directly by UID. It can, however, cause problems for backup applications that are not sparse-file aware: backups might take forever to complete and burn 100% CPU while reading through these huge files.
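
You can sanity-check the size: lastlog keeps one fixed-size record per UID (292 bytes on x86_64: a 32-bit timestamp plus 32-byte tty and 256-byte host fields) and seeks directly to UID x 292, so the file grows to (highest UID seen + 1) x 292 bytes:

$ echo $(( 290334301184 / 292 ))
994295552

i.e. the lastlog above has a slot for every UID up to 994295551.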


There is no fix in either RHEL 6 or 7, other than fixing the large UIDs themselves :(

For the curious, here is the Bugzilla info: rhel6, fedora




November 4, 2014

NFSv4 idmap - in-kernel keyring issues

Recently I ran into an NFS issue where some files on NFS mounts were shown with a UID/GID value of 4294967294.

It was on a RHEL 6.5 client, and further investigation led us to bug 1033708


From RHEL 6.3 onwards, Red Hat has dropped the rpc.idmapd daemon on the client side and instead uses the in-kernel keyring for NFSv4 ID mapping. The /usr/sbin/nfsidmap program is called for lookups and is configured via /etc/request-key.d/id_resolver.conf


$ cat /etc/request-key.d/id_resolver.conf

#
# nfsidmap(5) - The NFS idmapper upcall program
# Summary: Used by NFSv4 to map user/group ids into
#          user/group names and names into ids
# Options:
# -v         Increases the verbosity of the output to syslog
# -t timeout Set the expiration timer, in seconds, on the key
#
create    id_resolver    *         *    /usr/sbin/nfsidmap %k %d

The default quota for the keyring is very small (200 keys), so if your environment has to map more than 200 NFS UIDs, you hit the bug and the lookup just returns -2, which shows up as the unsigned 32-bit value 4294967294.

To fix the issue, you need to update your nfs-utils and nfs-utils-lib RPMs and raise the kernel keyring tunables shown below:

kernel.keys.maxkeys = 65536
kernel.keys.maxbytes = 4194304
kernel.keys.root_maxkeys = 65536
kernel.keys.root_maxbytes = 4194304
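
These can be applied on the fly and made persistent roughly like this:

# apply immediately
sysctl -w kernel.keys.maxkeys=65536
sysctl -w kernel.keys.maxbytes=4194304
sysctl -w kernel.keys.root_maxkeys=65536
sysctl -w kernel.keys.root_maxbytes=4194304

# persist across reboots: add the same four lines to /etc/sysctl.conf and reload
sysctl -p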

Usage of the keyring can be seen via the /proc filesystem:

$ cat /proc/keys
141e035e I--Q--     6 perm 1f3f0000 994295551 50000 keyring   _ses: 1/4
180127d1 I--Q--     4 perm 1f3f0000 994295551    -1 keyring   _uid.994295551: empty
21a48ca1 I--Q--     2 perm 1f3f0000 994295551 50000 keyring   _ses: 1/4
3765083a I--Q--     1 perm 1f3f0000 994295551    -1 keyring   _uid_ses.994295551: 1/4

$ cat /proc/key-users
    0:    13 12/12 9/65536 259/4194304
12341:     3 3/3 3/65536 83/4194304
994295551:     4 4/4 4/65536 152/4194304

Here I have set the limits to 64k keys. In /proc/key-users, the fields after the UID are the usage count, total/instantiated keys, keys charged against quota/maximum keys, and bytes charged against quota/maximum bytes.

September 25, 2014

ssh tunnel (port forwarding) through multiple hosts

Sometimes you may need to connect to a service/port that sits behind multiple DMZ hosts from your Windows/Linux desktop, where all you have is ssh access. SSH tunneling to the rescue: you can chain tunnels across your PuTTY session all the way to the end host.

Here is an example. In this scenario I use an X server to demonstrate: the goal is to connect to the X server running on my Windows desktop from a Linux host that sits behind two DMZ/firewall hosts.

See the diagram below to understand the flow of traffic.
Find below the PuTTY configuration on the Windows desktop.

As shown above, after adding the tunnel configuration in PuTTY's SSH -> Tunnels settings tab (i.e. a remote forward of port 6001 back to 127.0.0.1:6000, where the local X server listens), click Add and ssh into the GW1 host.

Once on GW1 host: run "ssh -R 6002:127.0.0.1:6001 user@GW2"
Once on GW2 host: run "ssh -R 6003:127.0.0.1:6002 user@destination"

Now you are logged onto the destination host via ssh with all the tunnels set up, so local port 6003 on the destination host gets forwarded all the way back to port 6000 on your Windows host.

So simply setting the DISPLAY variable to point at localhost:3.0 makes xterm come alive! Similarly, you can forward ports in the opposite direction with "ssh -L".
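
For example, on the destination host (X display 3 corresponds to TCP port 6000 + 3 = 6003):

$ export DISPLAY=localhost:3.0
$ xterm &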

September 15, 2014

Linux NFS mount time

To find the mount time of any NFS mount on Linux, you need to parse the age value (seconds since mount) from /proc/self/mountstats

Below is an example entry from /proc/self/mountstats
.
.
device homedir_srv1:/volumes/home/vc/ mounted on /home1/vc with fstype nfs4 statvers=1.1
        opts:   rw,vers=4,rsize=8192,wsize=8192,namlen=255,acregmin=3,acregmax=60,acdirmin=30,acdirmax=60,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.126.32.244,minorversion=0,local_lock=none
        age:    12461348
        caps:   caps=0xffff,wtmult=512,dtsize=8192,bsize=0,namlen=255
        nfsv4:  bm0=0xfdffbfff,bm1=0xf9be3e,acl=0x3
        sec:    flavor=1,pseudoflavor=1
.
.

Handy Perl one-liner to parse and display all NFS mounts with their mount time:

perl -ne 'if (/fstype nfs/) {$age=1;print ((split)[4]." ")} ; if ($age && /age/) {s/age:\s+//; print scalar localtime(time-$_)."\n";}' /proc/self/mountstats

/home1/ben Thu Apr 24 11:19:39 2014
/home1/vc  Mon Sep 15 16:37:12 2014