November 26, 2014

Using tshark to decrypt SSL traffic

  • Get the private key of the SSL server in PEM/PKCS12 format (if conversion is required, see the link below) and save only the key into a file
  • Have the tcpdump/snoop capture file to be decrypted
  • Check tshark's default preferences relating to SSL
  • Run tshark with the ssl.keys_list parameter, as below, to read the decrypted SSL data
  • The ssl.keys_list variable has 4 values: x.x.x.x (IP), port, upper layer protocol, private RSA key filename
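
Putting the steps above together, here is a rough sketch; the IP address, port, and file names (server.p12, server.key, capture.pcap) are placeholders, not values from an actual setup:

```shell
# If the key is in a PKCS12 bundle, extract just the unencrypted RSA key first
openssl pkcs12 -in server.p12 -nocerts -nodes -out server.key

# Point tshark's SSL dissector at the key and read the capture;
# ssl.keys_list format: <IP>,<port>,<upper-layer protocol>,<key file>
tshark -r capture.pcap \
       -o "ssl.keys_list:10.0.0.1,443,http,server.key" \
       -V
```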

Info on key conversions and more at http://wiki.wireshark.org/SSL

November 25, 2014

sssd - new learnings


Quoted text from this link, explaining how sssd caches password info

One of the reasons people used to
use the shadow map was to expose the encrypted password so that cached
passwords were available for all users.

Our mechanism for caching passwords is different. We don't acquire the
user's password from LDAP and then authenticate locally. Instead, we
communicate with LDAP or Kerberos and ask it whether the provided
password authenticates correctly. If it does, we hash the password
locally and then it can be used for offline authentication when the
authentication server is unreachable.

So with SSSD, cached passwords only work for users that have logged in
at least once previously. This significantly reduces the vulnerability
to offline dictionary attacks on arbitrary users. (Which was a serious
problem with shadow map passwords).
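
The caching behavior described above is enabled per domain in sssd.conf; a minimal sketch, where the domain name and providers are hypothetical:

```ini
[domain/example.com]
id_provider = ldap
auth_provider = krb5
# Hash successful passwords locally so they can be used for
# offline authentication when the server is unreachable
cache_credentials = True
```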

November 18, 2014

TCP listen queue full

Some clients started to have TCP connection issues, and it turned out to be due to the TCP listen queue filling up on the server side. The server was written in Perl and running on AIX.

Troubleshooting the issue with tcpdump, I could see TCP SYN packets arriving at the server from the client, but the server was not responding. That led me to look at TCP statistics, and 'netstat -s' showed the following:

$ netstat -s | grep 'queue'
                933122 discarded due to listener's queue full

This counter was increasing over time. Looking at the Perl code, the listen backlog was set to 100.

$socket = new IO::Socket::INET (
    LocalHost => $listen,
    LocalPort => $port,
    Proto     => 'tcp',
    Listen    => 100,
    Reuse     => 1
);
However, more clients were added to this application recently, and it was receiving more than 100 connections at a time, which was causing packets to be dropped. The simple fix was to increase the listen backlog parameter in the Perl code and restart the server.

At the OS layer, the maximum backlog is capped by the 'somaxconn' parameter. You can use the command below to check it on AIX:

$ no -o somaxconn
somaxconn = 16384
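
Note that the kernel caps any application's listen backlog at somaxconn, so the effective queue depth is min(Listen, somaxconn). On Linux the equivalent knob lives under /proc; the AIX and sysctl commands in the comments are a sketch (the value 32768 is just an example):

```shell
# Linux equivalent of AIX's 'no -o somaxconn': read the current cap
cat /proc/sys/net/core/somaxconn

# To raise it (run as root):
#   AIX:    no -o somaxconn=32768
#   Linux:  sysctl -w net.core.somaxconn=32768
```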



November 14, 2014

Huge lastlog, faillog and tallylog on Linux?

If you have huge UID numbers in your namespace, you will see some huge files under /var/log on Red Hat.

$ id
uid=994295558(venkat) gid=50000(admin) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

$ ls -ls /var/log|awk ' $1*4096 < $6'
8 -rw-r--r--. 1 root root 290334301184 Nov 14 13:32 lastlog
24 -rw-------. 1 root root 63634915328 Nov 14 11:41 tallylog



These are sparse files and do not actually take up disk space (see the first column of the ls output above, which shows allocated blocks). The huge apparent size is due to the (old) way lastlog indexes records: each record sits at an offset proportional to the UID. It can still cause issues for backup applications that are not sparse-file aware: backups may take forever to complete and burn 100% CPU while reading through these huge files.
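
The sparse-file effect is easy to reproduce with a throwaway file (hypothetical path, not the real lastlog):

```shell
# Create a file with a 1 GiB apparent size but no allocated blocks
truncate -s 1G /tmp/sparse_demo

ls -lh /tmp/sparse_demo    # apparent size: 1.0G
du -k  /tmp/sparse_demo    # actual usage: 0 KiB allocated

rm -f /tmp/sparse_demo
```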


There is no fix in either RHEL 6 or 7, other than avoiding such large UIDs :(

For the curious, here is the Bugzilla info: rhel6, fedora




November 4, 2014

NFSv4 idmap - in-kernel keyring issues

Recently I encountered an NFS issue where some files on NFS mounts were displayed with a UID/GID of 4294967294.

It was on a RHEL 6.5 client, and further investigation led us to bug 1033708.


From RHEL 6.3 onwards, Red Hat has dropped the rpc.idmapd daemon and instead uses an in-kernel keyring for NFSv4 ID mapping. The /usr/sbin/nfsidmap program is called for lookups and is configured via /etc/request-key.d/id_resolver.conf


$ cat /etc/request-key.d/id_resolver.conf

#
# nfsidmap(5) - The NFS idmapper upcall program
# Summary: Used by NFSv4 to map user/group ids into
#          user/group names and names into in ids
# Options:
# -v         Increases the verbosity of the output to syslog
# -t timeout Set the expiration timer, in seconds, on the key
#
create    id_resolver    *         *    /usr/sbin/nfsidmap %k %d

The default values for the keyring limits are very small (200), so if your environment has to map more than 200 NFS UIDs, you hit the bug and the code just returns -2, which translates to 4294967294.

To fix the issue, you need to update your nfs-utils and nfs-utils-lib RPMs and update the kernel tunables (shown below) for the keyring limits:

kernel.keys.maxkeys = 65536
kernel.keys.maxbytes = 4194304
kernel.keys.root_maxkeys = 65536
kernel.keys.root_maxbytes = 4194304
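
A sketch of checking the current limits and applying the larger values at runtime (sysctl -w needs root; the numbers mirror the tunables above):

```shell
# Current limits are visible under /proc/sys/kernel/keys
cat /proc/sys/kernel/keys/maxkeys
cat /proc/sys/kernel/keys/maxbytes

# Apply the larger values at runtime (as root):
#   sysctl -w kernel.keys.maxkeys=65536
#   sysctl -w kernel.keys.maxbytes=4194304
#   sysctl -w kernel.keys.root_maxkeys=65536
#   sysctl -w kernel.keys.root_maxbytes=4194304
# and add the same lines to /etc/sysctl.conf to persist across reboots
```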

Usage of the keyring can be seen via the /proc filesystem:

$ cat /proc/keys
141e035e I--Q--     6 perm 1f3f0000 994295551 50000 keyring   _ses: 1/4
180127d1 I--Q--     4 perm 1f3f0000 994295551    -1 keyring   _uid.994295551: empty
21a48ca1 I--Q--     2 perm 1f3f0000 994295551 50000 keyring   _ses: 1/4
3765083a I--Q--     1 perm 1f3f0000 994295551    -1 keyring   _uid_ses.994295551: 1/4

$ cat /proc/key-users
    0:    13 12/12 9/65536 259/4194304
12341:     3 3/3 3/65536 83/4194304
994295551:     4 4/4 4/65536 152/4194304

Here I have set up 64k keys.