http://www.itworld.com/operating-systems/317369/setting-limits-ulimit?page=0,0
Administering Unix servers can be a challenge, especially when the systems you manage are heavily used and performance problems reduce availability. Fortunately, you can put limits on certain resources to help ensure that the most important processes on your servers can keep running and competing processes don't consume far more resources than is good for the overall system. The ulimit command can keep disaster at bay, but you need to anticipate where limits will make sense and where they will cause problems.
It may not happen all that often, but a single user who starts too many processes can make a system unusable for everyone else. A fork bomb -- a denial-of-service attack in which a process continually replicates itself until available resources are depleted -- is the worst case of this. However, even friendly users can consume more resources than is good for a system -- often without intending to. At the same time, legitimate processes can sometimes fail when they run up against limits that are designed for average users. In that case, you need to make sure that these processes get beefed-up allocations of system resources that will allow them to run properly without making the same resources available to everyone.
To see the limits associated with your login, use the command ulimit -a. If you're using a regular user account, you will likely see something like this:
$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 32767
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 50
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

One thing you might notice right off the bat is that you can't create core dumps -- because your max core file size is 0. Yes, that means nothing, no data, no core dump. If a process that you are running aborts, no core file is going to be dropped into your home directory. As long as the core file size is set to zero, core dumps are not allowed.
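You can also query a single limit by passing just its option; for example, checking only the core file size -- a quick sketch matching the listing above:

$ ulimit -c
0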
This makes sense for most users since they probably wouldn't do anything with a core dump other than erase it, but if you need a core dump to debug problems you are running into with an application, you might want to set your core file size to unlimited -- and maybe you can.
$ ulimit -c unlimited
$ ulimit -c
unlimited

If you are managing a server and want to turn on the ability to generate core dumps for all of your users -- perhaps they're developers and really need to be able to analyze these core dumps -- you have to switch user to root and edit your /etc/security/limits.conf (Linux) or make changes in your /etc/system (Solaris) file.
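On Linux, lines like these in /etc/security/limits.conf would do it -- a minimal sketch, assuming a group named developers (the group name is illustrative):

# allow members of the developers group to generate core dumps
@developers soft core unlimited
@developers hard core unlimited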
If, on the other hand, you are managing a server and don't want any of your users to be able to generate core dumps regardless of how much they'd like to sink their teeth into one, you can set a limit of 0 in your limits.conf.
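A single line like this would do it -- a sketch using the limits.conf wildcard domain, which applies to all users:

# disallow core dumps for everyone
* hard core 0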
Another limit that is often enforced is one that limits the number of processes that an individual can run. The ulimit option used for this is -u. You can look at your limit as we did above with the ulimit -a command or show just the "nproc" limit with the command ulimit -u.

$ ulimit -u
50

Once again, your users can change their limits with another ulimit command -- ulimit -u 100 -- unless, of course, they can't. If you have limited them to 50 processes in the limits.conf or system file, they will get an error like this when they try to increase their limits:
$ ulimit -u 100
-bash: ulimit: max user processes: cannot modify limit: Operation not permitted

Limits can also be set up by group so that you can, say, give developers the ability to run more processes than managers. Lines like these in your limits.conf file would do that:
@managers hard nproc 50
@developers hard nproc 200

If you want to limit the number of open files, you just use a different setting.
@managers hard nofile 2048
@developers hard nofile 8192
sbob hard nofile 8192

Here we've given two groups and one individual user increases in their open file limits. These all set hard limits, which users cannot raise on their own. If you set soft limits as well, the soft limit is the value that is actually enforced; users can raise their own soft limits as needed, up to the hard limit.
@developers soft nofile 2048
@developers hard nofile 8192
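With settings like those in place, a member of the developers group could check and raise their own soft limit -- a sketch, assuming the 2048/8192 values above:

$ ulimit -Sn        # current soft limit on open files
2048
$ ulimit -Hn        # hard ceiling from limits.conf
8192
$ ulimit -Sn 4096   # raising your own soft limit up to the hard ceiling is allowed
$ ulimit -n 16384   # trying to go beyond the hard ceiling is not
-bash: ulimit: open files: cannot modify limit: Operation not permitted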
To see a list of the ulimit options, look at the man page (man ulimit). You will note that ulimit is a bash built-in -- at least on Linux -- and that the following options are available:
-a All current limits are reported
-c The maximum size of core files created
-d The maximum size of a process's data segment
-e The maximum scheduling priority ("nice")
-f The maximum size of files written by the shell and its children
-i The maximum number of pending signals
-l The maximum size that may be locked into memory
-m The maximum resident set size (has no effect on Linux)
-n The maximum number of open file descriptors (most systems do not allow this value to be set)
-p The pipe size in 512-byte blocks (this may not be set)
-q The maximum number of bytes in POSIX message queues
-r The maximum real-time scheduling priority
-s The maximum stack size
-t The maximum amount of cpu time in seconds
-u The maximum number of processes available to a single user
-v The maximum amount of virtual memory available to the shell

If your limits.conf file permits, you might see limits like these set up for particular applications that really need the extra capacity. In this example, the oracle user is being given the ability to run up to 16,384 processes and open 65,536 files. These lines would be set up in the oracle user's .bash_profile.
if [ $USER = "oracle" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi
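Once that profile is in place, you could verify the new limits from a login shell -- a sketch, assuming the values above and matching entries in limits.conf:

$ su - oracle
$ ulimit -u
16384
$ ulimit -n
65536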
Setting limits can provide a defense against processes that go haywire and malicious processes that try to make your systems unusable. Just make sure that your limits work for you and not against you as you plan how your resources can best be allocated.