yum -y install gmp-devel
wget http://freshmeat.net/redir/clamav/29355/url_tgz/clamav-0.97.3.tar.gz
adduser -M -s /bin/false clamav
tar zxf clamav-0.97.3.tar.gz
cd clamav-0.97.3
./configure --prefix=/usr/local/clamav
make install
for binary in /usr/local/clamav/bin/*; do ln -s ${binary} /usr/bin/; done
At this point Clam AntiVirus is installed and ready for use. Edit the configuration file and remove the line that says "Example"; it is there to ensure you review the configuration before using it. If you want, you can look at the other options, but we don't need to change anything else here to make ClamAV work for us.
vi /usr/local/clamav/etc/freshclam.conf #remove Example
Now let us run freshclam, which will download the virus database and bring our virus database up to date. We should do this manually the first time and make sure it didn't give any errors. If this works, you will see a lot of "downloading" messages.
/usr/bin/freshclam
If everything checks out, let us add this to our crontab to ensure our virus database is updated hourly. I chose to update at 9 minutes past every hour. You can change this to fit your needs or leave it as it is.
crontab -e
9 * * * * /usr/bin/freshclam --quiet
At this point our ClamAV virus database is up to date and now we can scan whichever directory we want. Go to the directory you want to scan and type:
clamscan -r -i
Once it is done scanning, it will display something similar to below.
The -r parameter tells clamscan to recurse into directories
The -i parameter prints only the names of infected files
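If you would like to automate scanning as well, one simple approach is a nightly cron job that scans a directory and appends the results to a log. This is just a sketch; the 3 AM schedule, the /home target, and the log path are example values to adjust for your setup.
0 3 * * * /usr/bin/clamscan -r -i /home >> /var/log/clamscan.log 2>&1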
SSH into the Linux server.
Change the directory to the blog directory.
Backup the MySQL database by running the following command substituting in your own information (where root is the MySQL root username and [databasename] is the actual WordPress database being used for the site):
mysqldump -u root -p [databasename] > [databasename].sql
Download the latest version of WordPress by running this command:
wget http://wordpress.org/latest.tar.gz
Unzip the download.
tar -xzvf latest.tar.gz
Make a backup of your data just in case something goes wrong or you have custom content (where blogdirectory is the directory the blog is installed in):
tar -czvf blog_backup.tgz blogdirectory/
Overwrite the files, thus upgrading the blog, by running this command (where blogdirectory is the directory the blog is installed):
yes | cp -r wordpress/* blogdirectory/
Now go to your blog admin section and verify everything is correct. You may be asked when viewing the admin section to upgrade the database. Click OK to do so.
Your blog is now upgraded to the latest version.
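If the upgrade does go wrong, you can roll back using the two backups made earlier. A minimal sketch, assuming the same names used above; extracting the tarball restores the backed-up copy of the blog directory over the upgraded files.
mysql -u root -p [databasename] < [databasename].sql
tar -xzvf blog_backup.tgz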
To password protect your website, please follow these steps:
Log into your Linux web server via Secure Shell (SSH).
Change into the directory you wish to password protect.
Note: If you wish to protect your entire website, use the following command:
cd /vservers/username/htdocs
Create a file called .htaccess using the following command:
pico .htaccess
Enter the following information:
AuthType Basic
AuthName "Please enter your Username and Password"
AuthUserFile /vservers/username/htdocs/.htpasswd
AuthGroupFile /dev/null
Require valid-user
Press ctrl+o to save the file.
Press ctrl+x to exit the file.
Create the .htpasswd file using the following command:
/usr/bin/htpasswd -c /vservers/username/htdocs/.htpasswd username
Enter the password you wish to use.
Re-enter the password.
Grant read access to each file using the following commands:
chmod a+r .htaccess
chmod a+r .htpasswd
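To confirm the protection is active, you can request the directory and check the response status; with Basic authentication in place you should get a 401 until valid credentials are supplied. This assumes curl is available and that you substitute your own URL.
curl -I http://www.domain.com/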
Create a notepad file and save it as .htaccess if you do not already have an existing one.
Update the .htaccess file with the following code and save; be sure to replace domain.com with your domain name.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
RewriteRule ^(.*)$ http://www.domain.com/$1 [L,R=301]
Upload the .htaccess file via FTP to the site root, and your traffic will now be redirected to www.domain.com.
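A quick way to verify the redirect, assuming you have curl available, is to inspect the response headers; you should see a 301 status with a Location header pointing at the www address.
curl -I http://domain.com/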
NFS, or Network File System, is a protocol for sharing and mounting remote file systems over a network.
Installation
To run an NFS server on Red Hat or CentOS Linux, the system package 'nfs-utils' must be installed. The 'yum' package manager can be used to ensure this is installed.
yum install nfs-utils
Configure Shares
NFS shares are created via the /etc/exports configuration file. To create a share you must specify a path to share, as well as a list of hosts to grant access and the type of access they should have. The path to share must be a full system path, and the list of hosts can be specified as a single host (IP, FQDN, or hostname), with wildcards (*.domain.com), IP networks (1.2.3.4/24), or netgroups.
For example, to share the directory '/data' with the host 10.10.1.5 (read-only) and the hosts within 10.10.1.15/29 (read-write), the following line would be added to /etc/exports.
/data 10.10.1.5(ro) 10.10.1.15/29(rw)
The list of access hosts must be separated with spaces, and no space must exist between the host address and the opening '(' of its options. If no options are specified, the host will have read-only access. If options are specified without a host preceding them, they become the share's defaults.
For a full list of options, please refer to the exports man page.
man exports
Starting the NFS Server
The NFS server is controlled with the service init script '/etc/init.d/nfs', or through the 'service' command. However, before nfsd can run, the 'portmap' service must be running as well.
service portmap start
service nfs start
The NFS server will now be running, and shares (or 'exports') can be mounted by remote hosts that are given access. If the exports file is modified after the service is started, you can apply them with the command
exportfs -r
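To double-check what is currently being exported, and with which options, you can list the active exports:
exportfs -v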
Also be sure to use the 'chkconfig' command to add both services to the system runlevel to run on startup.
chkconfig portmap on
chkconfig nfs on
Firewall notes
If your NFS server is behind a firewall (hardware or iptables), relative to the connecting hosts, port 2049 must be opened for TCP and UDP. Depending on your setup, the portmap service (port 111) may need to be reachable as well.
Network File System shares, or NFS exports, are easily mounted to Linux servers. Before an NFS export can be mounted, the nfs-utils package must be installed.
yum install nfs-utils
Additionally, the 'portmap' service must be running and enabled.
service portmap start
chkconfig portmap on
To mount the share, you'll need the following information.
The hostname, domain name, or IP of the NFS server.
The full path of the export on the server.
Also, be sure that the NFS server allows access from the host for the desired export.
Manually Mounting
Mounting NFS shares to the client is done using the mount command, in a similar fashion to mounting local file systems.
mount -t nfs4 -o options host:/remote/path/ /local/mount/point
For example: To mount an export '/var/data' on host '10.10.1.10' to the local directory '/remote/data', the following command would be used.
mount -t nfs4 10.10.1.10:/var/data /remote/data
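Note that the local mount point must exist before mounting; create it first if needed.
mkdir -p /remote/data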
The '-t nfs4' can usually be dropped completely. You'll now be able to access the remote directory under '/remote/data' as though it were part of the local filesystem.
For information on the various mount options, which can improve transfer performance and permissions, refer to the nfs man page.
man nfs
Mount via fstab
NFS mounts can also be specified within the /etc/fstab file for ease of mounting and to mount automatically at system startup. The format is the same as with local mounts.
host:/remote/path /local/mount/point nfs4 defaults 0 0
For the previously used example export, this would be:
10.10.1.10:/var/data /remote/data nfs4 defaults 0 0
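After adding the entry, you can test it without rebooting; mount -a attempts to mount everything listed in /etc/fstab that is not already mounted.
mount -a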
Nginx is a lightweight, fast, and efficient web and proxy server. It is one of the fastest static content web servers available, and can also be deployed to deliver dynamic content through a FastCGI interface. In addition, nginx is a very powerful reverse proxy (frontend) server and a very capable software load balancer.
For a full listing of features, please refer to http://nginx.org.
Installation
Installation of the latest stable release of nginx can be done easily with the EPEL (Extra Packages for Enterprise Linux) package repository. To use this repository, execute the following as the root user:
rpm -Uvh http://download.fedora.redhat.com/pub/epel/5/i386/epel-release-5-4.noarch.rpm
Once the EPEL repository is in use, nginx can be easily installed through yum.
yum install nginx
Configuration
Nginx uses the configuration file /etc/nginx/nginx.conf, which can be edited using nano. Virtual hosts (web sites) are configured in this file with 'server' code blocks, which are located under the main 'http' block. The default 'server' block will listen on port 80, and has a document root (web root) of '/usr/share/nginx/html'.
To help explain configuration of the server block, below is a very basic entry.
server {
    listen 80;
    server_name www.domain.com domain.com;
    location / {
        root /var/www/domain.com/html;
        index index.html index.htm;
    }
}
listen: Specifies the port on which this virtual host listens.
server_name: Lists the host names (host headers) for the site.
location /: Specifies how to handle requests under the location '/', the site root.
root: The document root for the site.
index: An ordered priority list of default documents.
For further details and examples on nginx's configuration, please refer to their official wiki page and core documentation.
Note: If your server currently is configured with another web server, you'll likely need to have Nginx listen on a port other than 80. This is done simply by editing the 'listen' setting in the default server block, as well as any additional server blocks that are created.
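For example, to sidestep a conflict with an existing web server on port 80, the default server block could listen on 8080 instead (8080 is just an example; any free port works):
listen 8080;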
To test your configuration, execute the following and it will report on any errors.
/usr/sbin/nginx -t
Starting and Testing
Once you've set up a working configuration, you can start the nginx server.
/etc/init.d/nginx start
If using the default document root (/usr/share/nginx/html), visiting your server's IP in a browser should yield the default server page.
Now that Nginx is running successfully, you'll want to be sure it's added to the default run level so it will start automatically at boot time.
chkconfig nginx on
The .htaccess file is pretty handy, as it also allows you to set various PHP flags that you may not want enabled for all of your websites; in turn, it will let you set just about every PHP flag or directive on a per-website basis.
You will need to log into your Linux server directly using SSH. If you are not familiar with how to do this, we have articles on how to do it on both Windows and Mac:
Windows - http://www.hosting.com/support/linux/general/sshwindows
Mac - http://www.hosting.com/support/linux/general/sshmac
To create the .htaccess file, you merely need to create a text file with any Linux editor. The file must be called .htaccess and it must exist in the root directory of the website you want to override the PHP global directives for.
The format of the .htaccess file is simple; however, you will want to comment exactly which flag or directive you are enabling for the site. An example of this can be seen below:
# Turn register_globals off
php_flag register_globals off
# PHP max upload size
php_value upload_max_filesize 12M
php_value post_max_size 12M
# Enable another version of PHP if you have two installed (enable PHP 5)
AddType application/x-httpd-php5 .htm .html .php
Make sure you save the file when you are done editing it.
Just about any directive or flag can be added to the .htaccess file, so you can conveniently customize PHP's behavior for a single website instead of changing it globally in the php.ini file. For a list of what you can use, please review the PHP manual at http://php.net/manual/en/index.php.
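To confirm the overrides took effect, one quick check, sketched here, is to drop a temporary phpinfo page into the same directory and load it in a browser; the "Local Value" column should reflect your per-site settings. Remember to delete the file afterwards.
<?php phpinfo(); ?>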
Using a .htaccess file will grant you greater control over your website, such as safeguarding it from hacking attempts or even keeping out spammers who may frequent your website. A great feature of the .htaccess file is the ability to populate it with single IP addresses or entire IP ranges, effectively blocking those IPs from being able to access your server. This article will explain how to implement this via the .htaccess file.
You will need to log into your Linux server directly using SSH. If you are not familiar on how to do this, we have articles on how to implement this on both Windows and Mac.
To create the .htaccess file, you need to create a text file with any Linux editor. The file must be called .htaccess and it must exist in the root directory of the website you want to deny access to.
In the file, there is a specific format you must adhere to. To block both single IPs and IP ranges, you must include the following:
order allow,deny - the rule set
deny from 192.168.1 - IP you want to block (a partial address like this blocks the entire 192.168.1.x range)
deny from 24.0.0.0/23 - IP range you want to block
Please note to block an IP Range, you must know the subnet.
You can also specify a deny-all-then-allow rule set as well:
order deny,allow - the rule set
deny from all - deny access from all IPs
allow from 192.12.4.1 - IP you want to allow
An example of a file blocking IP address 1.2.3.4 and a subnet 2.0.x.x is below:
order allow,deny
allow from all
deny from 1.2.3.4
deny from 2.0.
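Putting the second rule set together, a whole file that locks the site down to a single address (reusing the example IP from above) would look like this:
order deny,allow
deny from all
allow from 192.12.4.1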
VNstat is a bandwidth monitoring tool that tracks your traffic and provides daily, weekly and monthly metrics.
To install and set up vnstat, follow the steps in this article.
Log into your Linux server and download the source with the following command.
wget http://humdi.net/vnstat/vnstat-1.10.tar.gz
Next, uncompress the file by running the following command.
tar -zxvf vnstat-1.10.tar.gz
Now that you have the file uncompressed, you will need to install vnstat. To do this, navigate into the extracted directory and run the following commands.
cd vnstat-1.10
make && make install
Now that it has been installed, you will first need to run "vnstat --iflist" on your server, so you know the names of your network adapters.
vnstat --iflist
You will now be presented with the available adapters on your server.
Available interfaces: lo eth0 sit0
Now that you know which adapters your server has, you will need to tell vnstat to create a small database for each adapter. To do this, simply type /usr/bin/vnstat -u -i followed by the adapter name. For our example, we would run the following.
/usr/bin/vnstat -u -i eth0
If you have multiple adapters, you will need to do this for each one, though you can skip any adapters you do not want to monitor.
Next, make sure that vnstat starts whenever you reboot your server. To do this, add the service to your server's chkconfig list by running the following commands.
chkconfig --add vnstat
chkconfig vnstat on
Now you can access vnstat by logging into the server at any time and typing "vnstat". Doing so will give you a real-time status of your bandwidth.
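vnstat also has flags for the different tracking periods; for example, the daily and monthly summaries can be pulled up like this:
vnstat -d
vnstat -m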
Mumble is an open-source, low-latency, high-quality VoIP voice chat software that is primarily intended for gaming. It's a free and worthy (often considered superior) alternative to more widely used software such as TeamSpeak or Ventrilo.
Mumble's server component, known as 'murmur', is not provided in RPM format for easy installation on a Red Hat or CentOS server. The software is also not provided by the default or common third-party yum repositories. However, installation can be done using a static binary package that is available.
Download
Before we can extract needed files, the lzma compression tool must be installed. This can easily be done using yum.
yum install lzma
Now, both the static binary package and an RPM from another Linux distribution must be downloaded to the server.
wget http://sourceforge.net/projects/mumble/files/Mumble/1.2.3/murmur-static_x86-1.2.3.tar.bz2/download
wget ftp://rpmfind.net/linux/Mandriva/devel/cooker/x86_64/media/contrib/release/mumble-server-1.2.2-3mdv2011.0.x86_64.rpm
Extract and Install Executable
Next, the murmur executable must be extracted from the static binary package and installed into the server's PATH.
tar -xjf murmur-static_x86-1.2.3.tar.bz2
cp murmur-static_x86-1.2.3/murmur.x86 /usr/sbin/murmurd
Extract and Install Service Script and Config
To obtain an init startup script as well as a default config file, the downloaded RPM must be extracted.
rpm2cpio mumble-server-1.2.2-3mdv2011.0.x86_64.rpm > file.lzma
lzma -d file.lzma
mkdir mumble-rpm
cd mumble-rpm
cpio -imv --make-directories < ../file
rm ../file
The extracted files can now be moved into the system.
cp etc/mumble-server.ini /etc
cp etc/rc.d/init.d/mumble-server /etc/rc.d/init.d
chmod a+x /etc/rc.d/init.d/mumble-server
Next, a system user for murmur must be created and assigned to required directories and files.
groupadd -g 4000 mumble-server
useradd -g 4000 -G mumble-server -s /sbin/nologin -d / -M mumble-server
mkdir /var/lib/mumble-server
chown mumble-server:mumble-server /var/lib/mumble-server
mkdir /var/log/mumble-server
chown mumble-server:mumble-server /var/log/mumble-server
Starting Mumble Server
Due to differences between Red Hat/CentOS init scripts and the distribution the extracted RPM was built for, we must modify the installed init script.
Execute:
nano /etc/rc.d/init.d/mumble-server
In the nano editor, press CTRL+\ to bring up the search and replace tool. Search for the text 'gprintf' and replace it with 'printf'. Select 'A' to replace all instances, then CTRL+o to save, and CTRL+x to exit.
The mumble server can now be started.
service mumble-server start
You may now connect your Mumble client to your server using your server's IP and the default Mumble port (64738).
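If iptables is running on the server, you will likely need to open that default port for both TCP and UDP before clients can connect. A sketch, to be adapted to your existing rule set:
iptables -A INPUT -p tcp --dport 64738 -j ACCEPT
iptables -A INPUT -p udp --dport 64738 -j ACCEPT
service iptables save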
Run at System Boot
To have mumble server start at system boot, simply add the init script to the run level with chkconfig.
chkconfig --add mumble-server
chkconfig mumble-server on
Configure the YUM repository to look in the correct location for an upgraded PHP RPM. There are many different repositories online that offer upgraded RPMs for PHP, so a quick search via Google will be of assistance. For this article, we'll be referencing the YUM repositories at webtatic.com.
Tell rpm to accept RPMs signed by Webtatic:
rpm --import http://repo.webtatic.com/yum/RPM-GPG-KEY-webtatic-andy
Add the yum repository information to yum
wget -P /etc/yum.repos.d/ http://repo.webtatic.com/yum/webtatic.repo
Update the existing installation of PHP, which will also update all of the other PHP modules installed:
yum --enablerepo=webtatic update php
Type Y and let the process complete.
You can now see your current version of PHP using php -v from the shell prompt.
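If you also want to confirm that the bundled extensions came along with the core update, listing the compiled-in modules is a quick check:
php -m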
By default, the Linux OS has a very efficient memory management process that should free any cached memory on the machine it is running on. However, the Linux OS may at times decide that cached memory is in use and needed, which can lead to memory-related issues and ultimately rob your server of potentially free memory. To combat this, you can force the Linux OS to free up stored cached memory.
Connect via shell using a program such as PuTTY.
At the shell prompt, type crontab -e; this will allow you to edit the cron jobs for the root user.
If you are not familiar with vi (the Linux editor), press "i" to insert text and, once done, hit "Esc" and type ":wq" to save the file.
Scroll to the bottom of the cron file using the arrow keys and enter the following line:
0 * * * * /root/clearcache.sh
Create a file in '/root' called 'clearcache.sh' with the following content:
#!/bin/sh
sync; echo 3 > /proc/sys/vm/drop_caches
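Cron will run the script directly, so it needs to be executable; set the execute bit after creating it.
chmod +x /root/clearcache.sh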
Once you have saved the file and set the permission, the job is complete!
Every hour the cron job will run this command and clear any memory cache that has built up.
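To see the effect for yourself, you can compare memory usage before and after running the script manually (free -m reports usage in megabytes):
free -m
sh /root/clearcache.sh
free -m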
An example from a test server before and after running this task: before, the server was using 1.918 GB of RAM with 1.498 GB in cache; after running the script, the server was using only 172 MB of RAM with 38.9 MB in cache.
The network scripts are located in /etc/sysconfig/network-scripts/. Go into that directory.
cd /etc/sysconfig/network-scripts/
The file we're interested in is ifcfg-eth0, the interface file for the Ethernet device. Let's assume we want to bind three additional IPs (192.168.1.111, 192.168.1.112, and 192.168.1.113) to the NIC. We need to create three alias files while ifcfg-eth0 maintains the primary IP address. This is how we'll set up the aliases to bind the IP addresses.
Adapter IP Address Type
-----------------------------------
eth0 192.168.1.110 Primary
eth0:0 192.168.1.111 Alias 1
eth0:1 192.168.1.112 Alias 2
eth0:2 192.168.1.113 Alias 3
The :X (where X is the interface number) is appended to the interface file name to create the alias. For each alias you create you assign a number sequentially. For this example we will create aliases for eth0. Make a copy of ifcfg-eth0 for the three aliases.
cp ifcfg-eth0 ifcfg-eth0:0
cp ifcfg-eth0 ifcfg-eth0:1
cp ifcfg-eth0 ifcfg-eth0:2
Take a look inside ifcfg-eth0 and review the contents.
more ifcfg-eth0
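For reference, a typical static configuration contains lines along these lines (your values, and the exact set of fields, will differ):
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.1.110
NETMASK=255.255.255.0
ONBOOT=yes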
We're interested in only two lines (DEVICE and IPADDR). We'll rename the device in each file to its corresponding interface alias and change the IPs. We'll start with ifcfg-eth0:0. Open ifcfg-eth0:0 in vi and change the two lines so they have the new interface and IP address.
vi ifcfg-eth0:0
DEVICE=eth0:0
IPADDR=192.168.1.111
Save ifcfg-eth0:0 and edit the other two alias files (ifcfg-eth0:1 and ifcfg-eth0:2) so they have the new interfaces and IP addresses set (follow the table from above). Once you save all your changes you can restart the network for the changes to take effect.
service network restart
To verify all the aliases are up and running, you can run ifconfig (depending on how many new IPs you set up, you can use ifconfig | more to pause the output).
ifconfig
You can also test the IPs by pinging them from a different machine. If everything is working, there should be a response back.
ping 192.168.1.111
ping 192.168.1.112
ping 192.168.1.113
Compressing JavaScript and CSS files with .htaccess
Compressing is one of those optimizations for a website which can be easily done and still has tremendous effect. Gzip compressing your files results in less bandwidth being used and a lot less data that needs to travel all the way to the impatient client.
gzip compression is a piece of cake when you can use .htaccess. You can put the code below in your .htaccess file:
<Files *.js>
SetOutputFilter DEFLATE
</Files>
<Files *.css>
SetOutputFilter DEFLATE
</Files>
This will use an output filter to compress the files. All .js and .css files within the jurisdiction of this .htaccess file will be sent compressed to the client.
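If you prefer a single block, Apache's FilesMatch directive takes a regular expression covering both extensions; the following should be equivalent to the two blocks above:
<FilesMatch "\.(js|css)$">
SetOutputFilter DEFLATE
</FilesMatch>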
Compressing HTML within your PHP files
Compressing is one of those optimizations for a website which can be easily done and still has tremendous effect. Gzip compressing your files results in less bandwidth being used and a lot less data that needs to travel all the way to the impatient client.
gzip compressing your PHP files is a piece of cake with Output Buffering (OB). OB buffers everything you output in your scripts, and releases it when you want it to. This neat little trick can come in handy when you want to send headers from within your HTML, since the headers are ordered to be released before the HTML when OB is used.
OB can gzip compress the HTML buffered, so you can put this above everything in your PHP file:
if (substr_count($_SERVER["HTTP_ACCEPT_ENCODING"], "gzip"))
    ob_start("ob_gzhandler");
else
    ob_start();
It checks whether gzip is supported, and then starts output buffering with gzip compression. To release all compressed output you can put this at the bottom of the PHP file:
ob_end_flush();
This flushes all output to the browser.
Errors about disks being full are always a pain. With normal use of the system, you never know what takes up the most space, which you must know if you want to clean up your system efficiently.
Unfortunately there's no single command to locate your largest files or directories on a Linux system. Piping a few commands together, though, can easily get you the list of files and directories you want.
- du : Checking size on files or directories
- sort : Sorting lines of given data
- head : Limit output to the first part of the original output
So this is what you can enter if you want to know the top 10 largest files and/or directories on your Linux system. du measures size (-a for all files and directories), sort orders the data it receives from du (-n for numeric sorting, -r to reverse the order) and passes it to head, which takes the top 10 and shows it.
du -a /var | sort -n -r | head -n 10
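On systems with a reasonably recent GNU coreutils, a human-readable variant is often easier to scan: -h makes du print sizes like 1.2G, and sort -h knows how to order them.
du -ah /var | sort -rh | head -n 10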
Deleting old files in Linux is often necessary; logs, for example, need to be removed periodically. Accomplishing this through bash scripts is a nuisance. Luckily there is the find utility, which allows a few very interesting arguments; one of them executes a command for each file found. This argument can be used to call rm, thus enabling you to remove what you find. Another argument find allows can specify a time window in which to search. This way you can delete files older than 10 days, or older than 30 minutes. A combination of these arguments can be used to do what we want.
First off, we need to find files older than, for example, 10 days.
find /var/log -mtime +10
You can also find files older than, say, 30 minutes:
find /tmp -mmin +30
Another argument Find accepts is executing commands when it finds something. You can remove files older than x days like this:
find /path/* -mtime +5 -exec rm {} \;
The " {} " represents the file found by Find, so you can feed it to rm. The " \; " ends the command that needs to be executed, which you need to unless you want errors like:
find: missing argument to `-exec`
As I said, {} represents the file. You can do anything with a syntax like this. You can also move files around with find:
find ~/projects/* -mtime +14 -exec mv {} ~/old_projects/ \;
Which effectively moves the files in ~/projects to ~/old_projects when they're older than 14 days.
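One caveat: the patterns above can match directories as well as files. When deleting, it is safer to restrict find to regular files with -type f, for example:
find /var/log -type f -mtime +10 -exec rm {} \;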