Raspberry PI 2 - Kali Linux

on Friday, March 13, 2015

This is my first Raspberry Pi, the Raspberry Pi 2. The Pi has a lot of use cases and hobby projects people work on, but for me it's most relevant as a 'throw-away hackbox' (a phrase I picked up from Offensive Security). I do this for educational purposes.

Below I'll walk through, in order, what you need to do to set up Kali on the new Raspberry Pi and SSH into it from an OS X machine. Please note the credit goes to the original authors and their articles; my intention is simply to summarise the various steps together in one place.


1. Download the latest Kali RPi image (1.1.xx) and extract it. You'll dd it to the SD card in the steps below; in case you don't know which device the card shows up as, do a diskutil list.

2. Next, unmount the SD card by issuing diskutil unmountDisk /dev/disk1, assuming the microSD is on disk1 (make sure you have the right disk number from the previous step).

3. Format the SD card using the command sudo newfs_msdos -F 16 /dev/disk1 (again, make sure you use the right disk number).

4. Write the image with sudo dd if=~/Downloads/kali-1.xx.xx-rpi.img of=/dev/disk1 (take care with the disk number again).


5. Insert the microSD card into the Pi and allow the system to boot.

6. Log in with the standard Kali credentials (root / toor), then type startx at the shell prompt to start up the XFCE desktop environment.

7. Update and upgrade: apt-get update && apt-get upgrade

8. Change your SSH host keys as soon as possible, since all ARM images are pre-configured with the same keys, and also change your root password.

root@kali:~# rm /etc/ssh/ssh_host_*
root@kali:~# dpkg-reconfigure openssh-server
root@kali:~# service ssh restart

9. Set up tightvncserver:

apt-get install tightvncserver

10. Run tightvncserver once; the first run prompts you to set a VNC access password.

11. Create the file /etc/init.d/tightvnc and paste the content below. Change the resolution if necessary.

#!/bin/sh
### BEGIN INIT INFO
# Provides: tightvncserver
# Required-Start:
# Required-Stop:
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: start vnc server
# Description:
### END INIT INFO

case "$1" in
start)
    su root -c 'vncserver :1 -geometry 1024x768 -depth 16 -pixelformat rgb565'
    echo "VNC Started"
    ;;
stop)
    pkill Xtightvnc
    echo "VNC Terminated"
    ;;
*)
    echo "Usage: /etc/init.d/tightvnc {start|stop}"
    exit 1
    ;;
esac

12. Make the file executable:

chmod 755 /etc/init.d/tightvnc

13. Refresh the global service configuration:

update-rc.d tightvnc defaults

14. Restart the system.

15. By default the Kali image doesn't come with the metapackages installed. List the available ones with apt-cache search kali-linux, then install what you need:

apt-get update && apt-get install kali-linux-top10

I went for kali-linux-top10 as I have a space restriction.

Note: the GUI doesn't start automatically; I SSH in and then issue the startx command to start the XFCE desktop. I'm on OS X and currently using Chicken of the VNC; I need to move to a native OS X client.

Hash Functions

on Sunday, February 01, 2015

I found a clear explanation of hash functions and thought I should bring it here to my blog.

Hash Functions

They provide a mapping between an arbitrary length input and a (usually) fixed length (or smaller) output. It can be anything from a simple crc32 to a full blown cryptographic hash function such as MD5 or SHA1/2/256/512. The point is that there's a one-way mapping going on. It's always a many:1 mapping (meaning there will always be collisions) since every function produces a smaller output than it's capable of inputting (if you feed every possible 1 MB file into MD5, you'll get a ton of collisions).
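As a quick illustration of that fixed-length output, here's a minimal Python sketch using the standard hashlib module (the input values are arbitrary examples):

```python
import hashlib

# Inputs of wildly different lengths...
short = b"hi"
long_input = b"x" * 1_000_000

# ...always map to the same fixed-size digest for a given algorithm.
print(len(hashlib.md5(short).hexdigest()))       # 32 hex chars = 128 bits
print(len(hashlib.md5(long_input).hexdigest()))  # 32 again, regardless of input size
print(len(hashlib.sha512(short).hexdigest()))    # 128 hex chars = 512 bits
```

Since every possible 1 MB input has to squeeze into those 128 (or 512) bits, collisions are unavoidable by construction.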
The reason they are hard (or impossible in practicality) to reverse is because of how they work internally. Most cryptographic hash functions iterate over the input set many times to produce the output. So if we look at each fixed length chunk of input (which is algorithm dependent), the hash function will call that the current state. It will then iterate over the state and change it to a new one and use that as feedback into itself (MD5 does this 64 times for each 512bit chunk of data). It then somehow combines the resultant states from all these iterations back together to form the resultant hash.
Now, if you wanted to decode the hash, you'd first need to figure out how to split the given hash into its iterated states (1 possibility for inputs smaller than the size of a chunk of data, many for larger inputs). Then you'd need to reverse the iteration for each state. Now, to explain why this is VERY hard, imagine trying to deduce a and b from the following formula: 10 = a + b. There are 10 positive combinations of a and b that can work. Now loop over that a bunch of times: tmp = a + b; a = b; b = tmp. For 64 iterations, you'd have over 10^64 possibilities to try. And that's just a simple addition where some state is preserved from iteration to iteration. Real hash functions do a lot more than 1 operation (MD5 does about 15 operations on 4 state variables). And since the next iteration depends on the state of the previous and the previous is destroyed in creating the current state, it's all but impossible to determine the input state that led to a given output state (for each iteration no less). Combine that, with the large number of possibilities involved, and decoding even an MD5 will take a near infinite (but not infinite) amount of resources. So many resources that it's actually significantly cheaper to brute-force the hash if you have an idea of the size of the input (for smaller inputs) than it is to even try to decode the hash.
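The toy feedback loop described above can be written out directly; the starting pair (3, 7) is just one arbitrary choice among the many that satisfy a + b = 10:

```python
# One of the 10 positive (a, b) pairs with a + b = 10.
a, b = 3, 7
for _ in range(64):   # iterate the state back into itself, MD5-style
    a, b = b, a + b   # tmp = a + b; a = b; b = tmp

# The final value is trivial to compute forwards, but given only this
# number there are many starting pairs to untangle working backwards,
# because each iteration destroyed the previous state.
print(b)
```

Running it forwards is instant; recovering the original (a, b) from the final value alone is exactly the kind of search the paragraph above describes.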

Encryption Functions

They provide a 1:1 mapping between an arbitrary length input and output, and they are always reversible. The important thing to note is that it's reversible using some method, and it's always 1:1 for a given key. Now, there are multiple input:key pairs that might generate the same output (in fact there usually are, depending on the encryption function). Good encrypted data is indistinguishable from random noise. This is different from a good hash output, which is always of a consistent format.
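To see the contrast with hashing, here's a deliberately toy "encryption" function, a single-byte XOR. This is not real cryptography; it's only meant to show the 1:1 reversible mapping for a given key:

```python
def xor_cipher(data: bytes, key: int) -> bytes:
    """Toy 'encryption': XOR every byte with a one-byte key.
    (x ^ k) ^ k == x, so the same function both encrypts and decrypts."""
    return bytes(byte ^ key for byte in data)

message = b"a secret message"
ciphertext = xor_cipher(message, 0x5A)    # "encrypt" with key 0x5A
recovered = xor_cipher(ciphertext, 0x5A)  # "decrypt" with the same key

print(recovered == message)   # True: the mapping is 1:1 and fully reversible
print(ciphertext != message)  # True: the output doesn't resemble the input
```

A real cipher like AES has the same shape (same key in, original data back out), just with far stronger internals.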

Use Cases

Use a hash function when you want to compare a value but can't store the plain representation (for any number of reasons). Passwords should fit this use-case very well since you don't want to store them plain-text for security reasons (and shouldn't). But what if you wanted to check a filesystem for pirated music files? It would be impractical to store 3 mb per music file. So instead, take the hash of the file, and store that (md5 would store 16 bytes instead of 3mb). That way, you just hash each file and compare to the stored database of hashes (This isn't practical because of re-encoding, changing file headers, etc, but it's an example use-case).
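The "store a 16-byte fingerprint instead of 3 MB" idea looks like this in Python (the 3 MB buffer below is a stand-in for a real music file):

```python
import hashlib

def fingerprint(data: bytes) -> bytes:
    """Return a 16-byte MD5 digest standing in for the full file contents."""
    return hashlib.md5(data).digest()

song = b"\x00" * (3 * 1024 * 1024)   # pretend this is a 3 MB music file
fp = fingerprint(song)

print(len(song))  # 3145728 bytes of audio...
print(len(fp))    # ...reduced to a 16-byte fingerprint to store and compare
print(fingerprint(song) == fp)  # True: same bytes -> same fingerprint
```

Comparing fingerprints is then a cheap equality check against the stored database of hashes.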
Use a hash function when you're checking the validity of input data. That's what they are designed for. If you have 2 pieces of input and want to check whether they are the same, run both through a hash function. The probability of a collision is astronomically small for small input sizes (assuming a good hash function). That's why it's recommended for passwords. For passwords up to 32 characters, md5 has 4 times the output space. Sha1 has about 6 times the output space. Sha512 has about 16 times the output space. You don't really care what the password was; you care whether it's the same as the one that was stored. That's why you should use hashes for passwords.
Use encryption whenever you need to get the input data back out. Notice the word need. If you're storing credit card numbers, you need to get them back out at some point, but don't want to store them plain text. So instead, store the encrypted version and keep the key as safe as possible.
Hash functions are also great for signing data. For example, if you're using HMAC, you sign a piece of data by taking a hash of the data concatenated with a known but not transmitted value (a secret value). So you send the plain-text and the hmac hash. Then, the receiver simply hashes the submitted data with the known value and checks to see if it matches the transmitted hmac. If it's the same, you know it wasn't tampered with by a party without the secret value. This is commonly used in secure cookie systems by HTTP frameworks, as well as in message transmission of data over HTTP where you want some validity to the data.
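Python's standard hmac module makes that sign-and-verify flow concrete (the secret and message here are placeholders):

```python
import hashlib
import hmac

secret = b"known-but-not-transmitted"  # the shared secret value
message = b"user_id=42; admin=false"   # the plain-text being sent

# Sender: transmit the message plus its HMAC tag.
tag = hmac.new(secret, message, hashlib.sha256).hexdigest()

# Receiver: recompute the tag over the received data and compare
# in constant time.
recomputed = hmac.new(secret, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, recomputed))  # True -> not tampered with

# A tampered message no longer matches the transmitted tag.
forged = hmac.new(secret, b"user_id=42; admin=true", hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, forged))      # False
```

Note the use of hmac.compare_digest rather than ==, which avoids leaking information through comparison timing.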

A note on hashes for passwords:

A key feature of cryptographic hash functions is that they should be very fast to compute, and very difficult/slow to reverse (so much so that it's practically impossible). This poses a problem with passwords. If you store sha512(password), you're not doing a thing to guard against rainbow tables or brute force attacks. Remember, the hash function was designed for speed. So it's trivial for an attacker to just run a dictionary through the hash function and test each result.
Adding a salt helps matters since it adds a bit of unknown data to the hash. So instead of finding anything that matches md5(foo), they need to find something that when added to the known salt produces md5(foo.salt) (which is very much harder to do). But it still doesn't solve the speed problem since if they know the salt it's just a matter of running the dictionary through.
So, there are ways of dealing with this. One popular method is called key strengthening (or key stretching). Basically, you iterate over a hash many times (thousands usually). This does two things. First, it slows down the runtime of the hashing algorithm significantly. Second, if implemented right (passing the input and salt back in on each iteration) actually increases the entropy (available space) for the output, reducing the chances of collisions. A trivial implementation is:
var hash = password + salt;
for (var i = 0; i < 5000; i++) {
    hash = sha512(hash + password + salt);
}
There are other, more standard implementations such as PBKDF2 and BCrypt. But this technique is used by quite a few security related systems (such as PGP, WPA, Apache and OpenSSL).
The bottom line, hash(password) is not good enough. hash(password + salt) is better, but still not good enough... Use a stretched hash mechanism to produce your password hashes...
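If you're in Python, you don't need to hand-roll the stretching loop: the standard library ships PBKDF2 as hashlib.pbkdf2_hmac (the parameter values below are illustrative, not a recommendation):

```python
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)  # random per-password salt, stored next to the hash

# 100,000 rounds of HMAC-SHA-512: deliberately slow to brute-force.
key = hashlib.pbkdf2_hmac("sha512", password, salt, 100_000)

print(len(key))  # 64: defaults to the digest size of the chosen hash
```

The same password and salt always derive the same key, so verification is just re-deriving and comparing.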

Another note on trivial stretching

Do not under any circumstances feed the output of one hash directly back into the hash function:
hash = sha512(password + salt);
for (i = 0; i < 1000; i++) {
    hash = sha512(hash); // <-- do NOT do this
}
The reason for this has to do with collisions. Remember that all hash functions have collisions because the possible output space (the number of possible outputs) is smaller than the input space. To see why, let's look at what happens. To preface this, let's assume there's a 0.001% chance of collision from sha1() (it's much lower in reality, but this works for demonstration purposes).
hash1 = sha1(password + salt);
Now, hash1 has a probability of collision of 0.001%. But when we do the next step, hash2 = sha1(hash1), all collisions of hash1 automatically become collisions of hash2. So now we have hash1's rate at 0.001%, and the 2nd sha1 call adds to that. So hash2 has a probability of collision of 0.002%. That's twice as many chances! Each iteration adds another 0.001% chance of collision to the result, so with 1000 iterations the chance of collision jumped from a trivial 0.001% to 1%. Now, the degradation is linear, and the real probabilities are far smaller, but the effect is the same (an estimation of the chance of a single collision with md5 is about 1/(2^128) or 1/3e38; while that seems small, thanks to the birthday attack it's not really as small as it seems).
Instead, by re-appending the salt and password each time, you're re-introducing data back into the hash function. So any collisions of any particular round are no longer collisions of the next round. So:
hash = sha512(password + salt);
for (i = 0; i < 1000; i++) {
    hash = sha512(hash + password + salt);
}
This has the same chance of collision as the native sha512 function, which is what you want. Use that instead.

Ping Sweep in Command Prompt

on Sunday, January 25, 2015

Ping Sweep on Windows cmd prompt

The following is a simple FOR loop which executes the ping command incrementally. '%i' is the counter: it starts at 1, is incremented by 1, and stops at 255. Add -w 100 to make each ping wait at most 100 milliseconds for a reply.

C:\> FOR /L %i in (1,1,255) do @ping -n 1 -w 100 10.10.10.%i | find /i "Reply"

Ping sweeps can be blocked by configuring a Cisco ACL.

Courtesy : http://en.wikiversity.org/wiki/Ping/Sweep

Theory of Constraints and PM

on Friday, February 07, 2014
"A chain is only as strong as its weakest link", remember the old adage?

Theory of Constraints (TOC) is a method created by Dr Eli Goldratt and published in his 1984 book "The Goal." I'll not go into detail here, but simply put: every organization, no matter how good its performance, has at least one constraint that limits it, and this constraint is the organization's weakest link. This article focuses on how a project manager can effectively navigate through these blockers and eventually conclude a project successfully.

Some of the constraints affecting an organization could be resource availability, regulatory change, lack of skills, the organization's policy on working with offshore teams, ineffective vendor management, etc.

As a Project Manager, the key is to manage constraints throughout the project's life-cycle. The phases are: 1. Identify the constraints that directly or indirectly impact the project. 2. Exploit the constraint (fix or limit its effect on the project). 3. Subordinate everything else. 4. Elevate the constraint.

Some constraints can be outside of a Project Manager's control; however, the PM plays a vital role in reporting these constraints through their life cycle to the management team.

Lightweight Distros

on Monday, July 22, 2013

Review of lightweight Linux distros

Over the weekend I reviewed the following Linux lightweight distros:

Lubuntu, Manjaro, Madbox (all from the respective stable build repos)

My requirements were quite unique: I wanted to set up a Linux PC mainly to turn my 50-inch Panasonic Viera 1080p LED TV into a smart TV. It needed to be quick, have only a bare minimum of software pre-loaded, and be based on Ubuntu with community support, which is important when we look at open-source software.

Well... finally I unleashed a mean, quick smart TV, thanks to Madbox Linux, which is based on Ubuntu.

The PC is very old, P4 era, with 1 GB RAM and an almost-redundant NVidia video card with 128 MB; VGA output to the TV's HDMI works okay.

Getting the resolution right was quite a painful experience with Manjaro and Lubuntu, the latter being the 13.04 version, which used the LXDE desktop. Though I chose the right drivers for Nvidia, it was still not enjoyable; it was relatively slow to load.

The problem was not the drivers; they helped output what the system thought was the right resolution for a 1080p screen. But my couch is quite far from the TV, at least 13-14 ft, and I just couldn't read from that distance (it works well for YouTube or movies though). So my goal was to lower the resolution, and for that, add a specific 1280x720 mode to the settings.

With Manjaro I went with the XFCE edition, and getting the screen resolution right was even more painful. Manjaro is based on Arch Linux, which is a lightweight build. I tried https://wiki.archlinux.org/index.php/Xrandr but just couldn't nail getting it to load when the system boots. Finally I loaded ARandR, a great little tool by this man http://christian.amsuess.com/tools/arandr/, which worked wonders.

I gave up on Manjaro when I lost sound; all I did was try to control the volume from the remote keyboard!

We all know Linux is not for the impatient.

Finally, Madbox Linux: it's unbelievably quick, and its lightweight Openbox desktop is just the thing you need for an old PC hooked up to a TV! With ARandR the resolution is all sorted now. It comes loaded with Chromium, which is good, but you can load Firefox.

I'd recommend this distro specifically for its ease of use and Openbox, while keeping the advantages of Ubuntu.

Five core risk areas common to all projects

on Tuesday, August 28, 2012

Tom DeMarco and Tim Lister identified five core risk areas common to all projects in their book, Waltzing with Bears:

  • Intrinsic Schedule Flaw (estimates that are wrong and undoable from day one, often based on wishful thinking)
  • Specification Breakdown (failure to achieve stakeholder consensus on what to build)
  • Scope Creep (additional requirements that inflate the initially accepted set)
  • Personnel Loss
  • Productivity Variation (difference between assumed and actual performance)