Saturday, December 01, 2007
Because the 'zoo keeps many operations, such as Graphic Design and Marketing, in-house, it generates a considerable amount of data on a day-to-day basis. Keeping all of this centralised and backed up is a challenge. What I have done to achieve ample storage with basic redundancy is use a 'front end' NAS (Network Attached Storage) combined with a 'back end' NAS, located elsewhere from the front end, serving as the primary backup/archive.
The users access the front end NAS directly and generally work from its shares. This will change in future as I intend to access it as an iSCSI mount on a server. This NAS is a standard box housing JBOD and runs OpenFiler - you have probably read about it here earlier. The performance of this NAS is fairly ordinary, but since it's the network that presents the bottleneck, it's not something to be concerned about at this stage.
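For the curious, mounting an OpenFiler iSCSI target from a Linux server would look roughly like this (an open-iscsi sketch - the target name and addresses here are made up):

# Discover targets advertised by the NAS
iscsiadm -m discovery -t sendtargets -p 192.168.1.50
# Log in to the target, then mount the block device it presents
iscsiadm -m node -T iqn.2007-12.zoo.nas:store0 -p 192.168.1.50 --login
mount /dev/sdb1 /mnt/store0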
The back end NAS is a purpose-built NAS from a company called Thecus, the N5200. It houses up to 5 SATA disks and supports RAID levels 0 through 10. I've set this particular one up with 5 x 750GB disks in RAID 5. This provides enough space to back up the front end NAS at maximum capacity - about 2.5TB total.
I'm currently backing up the front end NAS via rsync to the Thecus. I had to find the rsync 'module' to install on the Thecus first, as it doesn't support it by default; however, it wasn't a difficult process.
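The backup job itself is nothing fancy - a cron entry on the front end NAS pushing to the rsync daemon on the Thecus, roughly like this (the host, module and paths are examples only):

# 2am nightly: push the shares to the Thecus's rsync module, deleting removed files
0 2 * * * rsync -az --delete /mnt/shares/ backupnas::frontend-backup/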
I will consider a Thecus 1U4500 NAS to go with a future Novell OES2 server - mounting it as an iSCSI volume for localised e-mail/data archiving. This will probably use another N5200 for backups.
Overall this provides us with a sizable storage pool at a very reasonable cost. I would like to implement a proper SAN; however, our needs aren't that great at this stage, and a single form of redundancy appears to be acceptable to management. I'll always plan for the upgrade though.
Thursday, November 29, 2007
OpenDNS Media Release
Thought I'd link this here:
Australia Zoo Conserves Bandwidth, Enjoys 100 Percent Network Uptime with OpenDNS
Saturday, November 24, 2007
Now we're getting somewhere
It has been a hectic few weeks for me. Major projects for the period include the preparations for Steve Irwin Day, the rollout of the new Telstra Next IP managed WAN and the slow-but-steady deployment of a campus wireless network.
I'm also feeding various nifty services into the Zoo network - such as an OpenFire Jabber server, a Twiki wiki, a One or Zero helpdesk and a few other things - although I'm going to hold off on deploying them to users until I have an LDAP directory of some sort in place for all these things to authenticate against.
I updated the firmware on my Sony Ericsson P1i too - the difference in performance and stability is night and day - and it was pretty good to begin with! The Opera browser and unified messaging apps have been improved quite a bit. I'm actually encoding TopGear episodes to 3gp format on my MythTV box and watching them on it during my lunch breaks - I didn't think I'd be using it like that. nb: I could just watch the XviD/DivX encoded eps but they're a tad large...
Steve Irwin Day
Steve Irwin Day went well as far as the web servers went - they handled a doubling of traffic without a hitch. I will be sad to see the replication servers go, they've done their job well and I'm kinda proud of them. I'm starting an upgrade of massive proportions of the two main web servers this week - hopefully once I'm done they'll be more than capable of handling the load without the need for replicas. More on that later.
Telstra NextIP
I've cut everything over to the zoo's shiny new Telstra Next IP WAN (to use their marketing spin). Speeds are good, response times are awesome. I'm also using Telstra's proxy caches, as they're very snappy and used by many - plus there's apparently a discount on data used through them.
As part of the WAN, each SHDSL-connected site has a managed Cisco 1801 router and an SHDSL TA (some yumcha device). The NextG connection is as per usual, but when it is connected it has an L2TP tunnel into the WAN and thus has access to all the same routes as the other sites. I've set up Mikrotik routers at each site, including the NextG connection - the Zoo still has 2 x Yawarra WRAP1-2s in a rack enclosure, Mooloolaba has a Yawarra WRAP1-1 with wireless and WhaleOne has a Mikrotik RB133 in an indoor enclosure.
Campus Wireless
Not too much progress - the Admissions indoor area now has its own AP and the Taj (Crocoseum building) offices have coverage too. I'm waiting for a sparky to run new cabling to key points so I can locate a few more APs in good coverage areas. I'm also waiting on a $1k order of various bits and pieces from the RFShop so I can start making up tails and prepare the splitters for installation. I'm also getting some antennas from them that are cheap and appear to have excellent performance - looking forward to trying them out.
Tuesday, November 06, 2007
Symbol/Motorola WS5100
If you use these wireless switches and are still running pre-3.0 firmware, UPDATE! Huge changes have been made and I suspect it's Motorola weaving its magic. I had all sorts of issues with Spectralink VoWiFi sets and coverage - I updated the firmware to 3.0.2.0 and everything is happy now.
Other benefits of the firmware are that the CLI now mimics Cisco's IOS in many ways and the Java/web interface is greatly improved - information is readily available and the controls make sense...
It's changed my view on this kit - I was almost about to turf it in exchange for some Cisco gear or even Mikrotik (but I didn't really want to configure each AP individually).
Saturday, August 25, 2007
Recording RouterOS's IP Accounting Data
There are a number of ways to gather data from a Mikrotik RouterOS based router. The easiest would be its 'Accounting Web Access' feature, where you can go to http://routeros_addr/accounting/ip.cgi and view a list of IP pairs similar to basic netflow output.
Using this feature I wrote the below Perl scripts to collect the data into a DB file. To keep things reasonable I set it to record the data hourly, meaning my smallest unit of measurement is an hour. While I could have simply used a MySQL database to dump the data into, I wanted to maintain a level of portability and simplicity - it sucks having to install/configure/run a fully fledged RDBMS just to view some basic data usage statistics.
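The collection side is simply cron driven - an entry along these lines, run at the top of each hour so it lines up with the hourly buckets (the path is an example):

# Run the collector hourly so each run lands in its own time_stamp() bucket
0 * * * * /usr/local/bin/gather.pl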
The first script is used to gather the data from the MT router and store it in the db_file; the second script uses GD::Graph to produce bar charts from the data stored in the db_file. I'll be writing more scripts that dump the contents of the db_file into a .xls spreadsheet for manual reports - handy for tracking down heavy users and to use as evidence if there are any ISP account discrepancies.
Apologies for the untidy code and the lack of formatting. Blogger doesn't provide any 'code markup' function and I cbf'd looking for alternatives. I'll fix it up when I can.
Example graph output (graph.pl 8 hours):
gather.pl:
#!/usr/bin/perl -w
use strict;
use LWP::Simple;
use MLDBM 'DB_File';

my $ip_accounting_url = "http://<routeros ip>/accounting/ip.cgi";
# Note: '~' isn't expanded by Perl, so build the path from $ENV{HOME}
my $accounting_mldbm_data_db = "$ENV{HOME}/accounting_data.mldbm";

tie my %h, 'MLDBM', $accounting_mldbm_data_db or die $!;

my ($timestamp) = &time_stamp();
my $epoch = time();

&gather_ip_accounting($ip_accounting_url);
untie %h;

# Fetch the accounting page and accumulate bytes/packets per destination,
# keyed on "dst_YYYY-MM-DD HH:00:00" so each hour gets its own bucket
sub gather_ip_accounting {
    my $url = $_[0];
    my ($src, $dst, $bytes, $packets, $src_usr, $dst_usr);
    foreach my $line (split(/\n/, get($url))) {
        ($src, $dst, $bytes, $packets, $src_usr, $dst_usr) = split(" ", $line);
        # Only record traffic destined for our internal subnets
        if ($dst && $dst =~ /(192\.168\.)|(10\.2\.)|(172\.16\.)/) {
            my $h_dst = $h{$dst . "_" . $timestamp};
            $h_dst->{dst}      = $dst;
            $h_dst->{bytes}   += $bytes;
            $h_dst->{packets} += $packets;
            $h_dst->{dst_usr}  = $dst_usr;
            $h_dst->{epoch}    = $epoch;
            $h{$dst . "_" . $timestamp} = $h_dst;
        }
    }
}

# Timestamp truncated to the current hour, e.g. "2007-08-25 14:00:00"
sub time_stamp {
    my ($sec,$min,$hour,$mday,$mon,$year) = localtime(time);
    $year += 1900;
    $mon++;
    return sprintf("%4d-%2.2d-%2.2d %2.2d:00:00", $year, $mon, $mday, $hour);
}
graph.pl:
#!/usr/bin/perl -w
use strict;
use MLDBM 'DB_File';
use Time::Local;
use GD::Graph::bars;

my ($num_values, $period_type);
if ($ARGV[0] && $ARGV[1]) {
    if ($ARGV[0] =~ /^\d+$/) {
        $num_values = $ARGV[0];
    }
    else {
        print "\nIncorrect value supplied for number of units\n";
        exit;
    }
    if ($ARGV[1] =~ /^(hours|days|months)$/) {
        $period_type = $ARGV[1];
    }
    else {
        print "\nIncorrect value supplied for type of units\n";
        exit;
    }
}
else {
    print "\nUsage: period units\nPeriod: The number of values\nUnits: hours, days, months\n\n";
    exit;
}

print "\n Gathering $num_values $period_type worth of data from db!\n";

my $epoch = time();
# As in gather.pl, '~' isn't expanded by Perl - use $ENV{HOME}
my $accounting_mldbm_data_db = "$ENV{HOME}/accounting_data.mldbm";
my $graph_image_file = "$ENV{HOME}/accounting_data_" . $num_values . "_" . $period_type . "_" . $epoch . ".png";

tie my %h, 'MLDBM', $accounting_mldbm_data_db or die $!;

my ($graphvalues, @graphvalues_tmp);
my $period_total = 0;
my $i = 0;
while ($i <= $num_values) {
    @graphvalues_tmp = &print_total($i, $period_type);
    my $data  = $graphvalues_tmp[0];
    my $epoch = $graphvalues_tmp[1];
    my $HMS   = &epoch_to_MDHMS($epoch);
    push @{$graphvalues->[0]}, $HMS;
    push @{$graphvalues->[1]}, $data;
    $period_total = $period_total + $data;
    $i++;
}

my $graph = GD::Graph::bars->new(85*$num_values, 300);
$graph->set(
    x_label     => "$period_type (latest towards the left) Period Total: $period_total",
    y_label     => 'Mbytes',
    title       => "Total Mbytes (Over $num_values $period_type)",
    transparent => '0',
    show_values => '1',
    bar_spacing => '2',
) or warn $graph->error;

my $image = $graph->plot($graphvalues) or die $graph->error;
open(IMG, ">$graph_image_file") or die $!;
binmode IMG;
print IMG $image->png;
close(IMG);
untie %h;

# Sum the bytes recorded between the start/end epochs of the given period
sub print_total {
    my $h_total = 0;
    my $h_bytes;
    my ($num, $period) = @_;
    my ($epoch_start, $epoch_end) = &epoch_period($num, $period);
    for my $h_row ( keys %h ) {
        if ($h{$h_row}{epoch} >= $epoch_start && $h{$h_row}{epoch} <= $epoch_end) {
            $h_bytes = $h{$h_row}{bytes};
            $h_total = $h_total + $h_bytes;
        }
    }
    my $formatted_total = sprintf("%.3f", $h_total/1024/1024);
    return($formatted_total, $epoch_start);
}

# Work out the start/end epochs for the period N units in the past
sub epoch_period {
    my ($past_count, $period) = @_;
    my ($epoch_period_start, $epoch_period_end);
    my ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime(time);
    my ($start_hour, $end_hour);
    if ($period eq "hours") {
        $start_hour = $hour - $past_count;
        $end_hour   = $hour - $past_count;
    }
    elsif ($period eq "days") {
        $mday = $mday - $past_count;
        $start_hour = '00';
        $end_hour   = '23';
    }
    elsif ($period eq "months") {
        $mon = $mon - $past_count;
        # As written this only covers the current day-of-month in that month;
        # the hours are set here so timelocal() isn't handed undefs
        $start_hour = '00';
        $end_hour   = '23';
    }
    $epoch_period_start = timelocal(00,00,$start_hour,$mday,$mon,$year);
    $epoch_period_end   = timelocal(59,59,$end_hour,$mday,$mon,$year);
    return($epoch_period_start, $epoch_period_end);
}

# Convert an epoch to a short "M-D HH:MM:SS" label for the X axis
sub epoch_to_MDHMS {
    my $epoch = $_[0];
    my ($sec, $min, $hour, $mday, $mon) = (localtime($epoch))[0,1,2,3,4];
    my $mdhms = $mon+1 . "-" . $mday . " "
              . sprintf("%02d", $hour) . ":"
              . sprintf("%02d", $min) . ":"
              . sprintf("%02d", $sec);
    return($mdhms);
}
Saturday, August 11, 2007
A simple .forward vacation enable/disable script
I got a little tired of manually enabling/disabling people's vacation AutoReply. So I decided to knock out a simple bash script that does the enable/disable part, leaving me to simply make sure the actual response message was updated and to at(1) the script for whenever they wanted to leave/come back.
I used to move .forward to dotforward and back when enabling/disabling - so if you're wondering why I'm referencing files called 'dotforward' it's for backwards compatibility - plus I like the idea of setting up dotforward if the user doesn't have any .forward yet and leaving the rest up to the script.
#!/bin/bash
DATE=`date`
TMP_DATE=`date +%Y%m%d`
EMAIL_SUBJECT="AutoReply Status"
EMAIL_FROM="blah@blah.com"
USER="$1"
FORWARD="/home/$USER/.forward"
DOTFORWARD="/home/$USER/dotforward"
VACATION=$(which vacation)
if [ -z "$1" ]; then
echo "usage: $0username"
exit
fi
echo "Doing the .forward thing with user: $USER"
if ! [ -e $FORWARD ]; then
echo "No .forward found, is there a dotforward?"
if [ -e $DOTFORWARD ]; then
echo "Found $DOTFORWARD, moving it to $FORWARD"
EMAIL_BODY="Hello $USER, I have enabled your AutoReply E-Mail as of $DATE"
mv $DOTFORWARD $FORWARD
echo "Moved $DOTFORWARD to $FORWARD"
fi
else
echo "Hmm, there's already a $FORWARD, I'll just add or remove the vacation reference..."
if [ -e $FORWARD ]; then
if grep "vacation" $FORWARD
then echo "Oooh I found a vacation reference in here! Let's DELETE it buwahaha"
sed -e "s!\"|$VACATION $USER\"!!g" $FORWARD > /tmp/${USER}_forward-$TMP_DATE
mv /tmp/${USER}_forward-$TMP_DATE $FORWARD
EMAIL_BODY="Hello $USER, I have disabled your AutoReply E-Mail as of $DATE"
else
echo "Didn't find any vacation reference, I'm adding one"
if ! grep "\\$USER," $FORWARD; then
echo "\\$USER," >> $FORWARD
fi
echo " \"|$VACATION $USER\"" >> $FORWARD
EMAIL_BODY="Hello $USER, I have enabled your AutoReply E-Mail as of $DATE"
fi
fi
fi
echo "$FORWARD now looks like:"
echo `cat $FORWARD`
echo "$EMAIL_BODY" | mail -s "$SUBJECT" $USER
Wednesday, August 08, 2007
Mikrotik RouterOS Firewall Script
The following will hunt through the firewall filter list and enable/disable all rules whose comment is "Drop_Toggle". Useful if you want to toggle particular sets of filters periodically etc.
# Enable Drop Rules
:global list ""; :foreach i in [/ip firewall filter find] \
do={:if ([:find [/ip firewall filter get $i comment] "Drop_Toggle"]=0) \
do={/ip firewall filter set $i disabled=no} };
# Disable Drop Rules
:global list ""; :foreach i in [/ip firewall filter find] \
do={:if ([:find [/ip firewall filter get $i comment] "Drop_Toggle"]=0) \
do={/ip firewall filter set $i disabled=yes}};
Monday, August 06, 2007
Mirroring a Plesk vhost script
The following script will mirror a vhost from a Plesk-managed server. It is up to you to modify the Apache vhost configuration includes (there's usually one created by Plesk in /etc/httpd/conf.d or the like).
#!/bin/bash
# RSYNC/SED script to mirror a Plesk host
# 20070801 Ben Johns
# Requirements:
# SSH Pub/Priv keys shared on both hosts
# ssh-keygen -t dsa -b 1024 -f `whoami`-`hostname` (NO PASSPHRASE!)
# copy the resultant .pub file to the remote host and append it to
# the RSYNC_USER's .ssh/authorized_keys file.
# RSYNC Version >2.6.3
# HTTPD.INCLUDE needs to be manually configured to suit the config
# of the local host. Ie copy the relevant sections from the remote hosts
# plesk httpd conf to this host. Usually done somewhere in /etc/httpd or /etc/apache.
# REM_HOST: The remote host to mirror
# RSYNC_USER: The user account on the remote host that has permission
# to copy the intended files.
# RSYNC_OPTS: Parameters to use with the rsync command
# SSH_KEY: The private DSA key to use for SSH authentication
# RSYNC_VHOST_SRC_PATH: Path to the source virtual host files on the remote host
# RSYNC_VHOST_SRC_DIR: Directory of the source virtual host files on the remote host
# RSYNC_VHOST_DST_PATH: Path to the destination on the local host
# SED_VHOST_MOD_FILE: Location of the SED parameters to modify VHOST config files
REM_HOST="web.server.com"
RSYNC_USER="rsync"
RSYNC_OPTS="-avz --perms -q --delete-during"
SSH_KEY="/var/www/rsync_ssh_key"
RSYNC_VHOST_SRC_PATH="/home/httpd/vhosts/"
RSYNC_VHOST_SRC_DIR="vhost_directory"
RSYNC_VHOST_DST_PATH="/var/www/vhosts/"
SED_VHOST_MOD_FILE="/var/www/vhost_include.sed"
rsync $RSYNC_OPTS \
-e "ssh -i $SSH_KEY -l $RSYNC_USER" \
$RSYNC_USER@$REM_HOST:$RSYNC_VHOST_SRC_PATH$RSYNC_VHOST_SRC_DIR $RSYNC_VHOST_DST_PATH
rsync $RSYNC_OPTS --include "*/" --include "*.include" --exclude "*" \
-e "ssh -i $SSH_KEY -l $RSYNC_USER" \
$RSYNC_USER@$REM_HOST:$RSYNC_VHOST_SRC_PATH$RSYNC_VHOST_SRC_DIR $RSYNC_VHOST_DST_PATH
for file in $RSYNC_VHOST_DST_PATH$RSYNC_VHOST_SRC_DIR/conf/httpd.include ; do
sed -f $SED_VHOST_MOD_FILE "$file" > tmp_file
mv tmp_file "$file"
echo "Modified $file"
done
chmod -R ug+rwx $RSYNC_VHOST_DST_PATH$RSYNC_VHOST_SRC_DIR
apache2ctl graceful
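I run the script from cron on the mirror host - an entry like the following keeps the copy reasonably fresh (the path and interval are examples only):

# Re-sync the vhost from the Plesk box every 15 minutes, logging the output
*/15 * * * * /usr/local/bin/mirror_plesk_vhost.sh >> /var/log/mirror_plesk_vhost.log 2>&1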
Sunday, July 29, 2007
Planning for high traffic
The most worrying project on my list at this time is working out how to achieve as much grunt as possible to withstand the estimated traffic to the first Steve Irwin Day tribute website since his passing.
Currently the Zoo has a single primary server hosting all sites and a secondary web server sharing the load on a few sites. It's far from the perfect model and I intend to tidy it up as follows.
What I plan to do is install MySQL 5.x on the secondary server and ready it for replication. Then I will dump the contents of the existing MySQL 4.x databases into it and point all the websites at it. Once I'm satisfied that it's functioning as expected (I'm in the process of testing this on the bench), I will upgrade MySQL 4.x to 5.x on the primary server and begin multi-master replication with the other. This completes stage one of the preparations.
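The dump/reload step itself should be roughly the following (placeholder credentials; --master-data records the binlog coordinates I'll need when replication starts, assuming the 4.x server is new enough to support it):

# On the primary (still 4.x): dump everything, embedding the binlog coordinates
mysqldump -u root -p --opt --all-databases --master-data=2 > all_dbs.sql
# On the secondary (now 5.x): load the dump
mysql -u root -p < all_dbs.sql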
With the databases in place and functioning, I will work out how to replicate all vhosts between the two servers. Plesk, the web control panel operating on both servers, makes this more complex than it really needs to be since I will also need to create the client/domain accounts for each of the replicated sites. When I have worked out what is required I will script the synchronization as much as possible and hopefully end up with near 100% automation. I may need to tap into Plesk's API to do this. This will complete stage two.
With replication of both the databases and general structures taking place between the two existing servers I can now introduce more servers and the greater complexity they will bring.
I will be working towards four application servers just for static content/scripts and a single localised database server. I intend to just have raw boxes running either Redhat or FreeBSD, no fancy control panels getting in the way. This will allow me to script everything with simplicity and provide a basic configuration to each server. I will replicate the data from the first of the existing servers to one of the new application servers and from there to each of the remaining three. This is so I can keep the amount of public traffic between the servers to a minimum. I will introduce the new database server into the multi-master replication loop and point the four new application servers at it. This completes stage three.
Once I am satisfied that each application server is connecting to the database server over their private network and that the database server is successfully replicating the databases from the existing two servers I will set up the load balancer to include the four new application servers. We may need to cut over to a new load balancer since the existing one may not support this many servers.
This setup will provide me with six front end servers and three database servers - with a bit of sharing of resources here and there. The following diagram shows what I intend to achieve.
Relevant links:
ONLamp Advanced MySQL Replication Techniques
MySQL 5.0 Manual - Replication
Rsync
SWSoft Plesk - Upgrading MySQL
Sunday, June 24, 2007
What have I been doing?
Been a while since I last posted, so time for an update post!
Revamping the old network
The network at the Zoo was a miserable mess - it took me a while to audit what was in place and to devise a topology that would best address the current and future needs of the Zoo. Now the Zoo has a VLAN'd network consisting of dedicated Administration, Point-of-Service and VoIP subnets, OSPF routing at the core, a DMZ, traffic policing and shaping capabilities and VPN (PPTP/L2TP and IPSec) capabilities.
I achieved all this by using two rackmounted WRAP1-2's from Yawarra and a cheap Asus GigaX 2024 switch. I loaded Mikrotik RouterOS 2.9.42 onto the two WRAP1-2's and set up 802.1q VLANs on the switch. The VLANs are routed on the first WRAP1-2 which then connects onto the DMZ where the other WRAP1-2 and Cisco 857/877 routers exist with OSPF routing throughout. The second WRAP1-2 holds up the 1Mbit Unisky wireless connection (PPPoE, over wireless... yuk) and hopefully a substantial fibre based service from someone, such as a 2Mbit E1/G.703 service.
I used pairs of Cisco 8xx series routers to hold up VPN links between the Zoo and its newly opened Mooloolaba retail store. A pair of 857's hold up a general Point-of-Service/LAN traffic IPSec tunnel. In addition to that a pair of 877's hold up a VoIP/Video IPSec tunnel with QoS. The two DSLs are 8Mbit/384Kbit links supplied by Bigpond. Having dedicated 'pairs' of routers/DSL for VPN connectivity is overkill but it's still cheaper than a single fibre service.
These changes provided the framework for the following additions to the network infrastructure.
A new phone system
The Zoo's old Siemens key system was well and truly past its time and needed upgrades which proved exorbitantly costly. A new Alcatel OmniPCX PBX was selected and installed by a company called Nexon Asia Pacific. Along with the digital and analog extensions, a number of VoIP extensions are provided, including wireless VoIP sets. Best practice says to establish a dedicated subnet for the PBX/VoIP services to reside within, so as to isolate them from the general traffic of the other networks. Having VLAN capability is useful as I can locate the phones nearly anywhere and still keep them within the VoIP subnet. However, while the phones support VLANs, they don't want to communicate with the Asus switch.
First it was wireless and whales, now it's wireless and... um... elephants?
I will soon have a Zoo wide wireless network built up of Symbol WS5100 and AP300s. These were provided by Barcode Dynamics in addition to inventory/asset tracking equipment. The WS5100 is useful in that it can map VLANs to WLANs - allowing me to simply create wireless extensions of the existing networks with no physical modifications. However security becomes a concern with the absence of a router/firewall - the WS5100 addresses this by supporting WPA1/2, 802.1x and firewall policies. I will also limit transit between the networks and wireless infrastructure via the routers.
To start with, the wireless will be used for mobile VoIP. Since the Alcatel mobile sets are basically Spectralink reference designs, I can simply apply the pre-configured Spectralink QoS policy on the WS5100 to that WLAN so that VoIP traffic is granted expedited access to the wireless bandwidth. In the future we will also implement mobile Point-of-Service terminals, either PDA style units or small form factor PCs. There's also the possibility that the roaming photographers could use the coverage to upload their digital photos in real-time to the on-site photography lab.
I've just finished setting up an outdoor enclosure for one of the AP300s. It's a pity that the AP300 doesn't have an outdoor variant. The supplied enclosures were just bare boxes; luckily they came with the backing board. However, I had to make up the pole brackets myself using some angle brackets, u-bolts and pop-rivets.
Mobile VPN over Telstra's NextG
For the newly launched Whale One vessel the Zoo has established a NextG mobile data service. To connect the boat to this service, a ruggedised NextG modem/router was installed on board with a 7dBi collinear antenna. The router comes with a PPTP VPN client, so I have set this up to establish a VPN back to the Zoo. This allows the two Point-of-Service terminals to communicate back to the Zoo's POS services for EFT transactions and accounting/stock control. Under testing we managed to maintain a connection out to 10km at sea and sustain an average data rate of 1.5Mbit/sec. I have yet to test the link with the POS systems running.
Sunday, May 20, 2007
Marinanet is coming to Coffs Harbour
In the not so distant future, the beautiful Coffs Harbour marina will become the southernmost Marinanet location. It will also be one of the first marinas to use the new hotspot system I developed.
This would probably be one of my favorite marina locations - and certainly the most scenic.
No specific installation date set but it is expected to be in the short term.
Sunday, May 13, 2007
Attending to the web servers
Web servers are like the wildebeest of the Internet. I imagine them out on the open grass plains, happily grazing in the sun. I can also imagine tigers, cheetahs and other predators lurking around the fringes, looking for the few that aren't paying attention or have been wounded and are falling behind the herd.
The predators are the numerous script kiddies (skiddies) and crackers out there that are either trying it on, seeking yet another server to host their warez, or building a massive, high-bandwidth botnet with which to strike down those who oppose them.
When I looked at my new herd of servers (okay, 3 isn't really a herd...) I saw a neglected bunch that needed some tender loving care. So all this week I have been looking at what makes them tick and what purposes they serve. In the process I've been cutting the fat, optimizing and tightening things up. There's still a ways to go but things are already looking better, especially after upgrading the link from 10Mbit to 100. I hope whoever owned those domains I disabled doesn't get too pissed.
Things that need to happen:
- Consolidate servers into a single rack and connect them via a private LAN
- Adjust load balancing to go between all three servers
- Establish a managed firewall
- Upgrade OS and packages on each
I won't feel comfortable until all that is in place.
Friday, May 11, 2007
Amazing what a bit of tweaking does
Saturday, April 28, 2007
The joys of e-mail administration
I've dabbled in e-mail services for some time now. It's one of those things that would normally be within the sysadmin's domain but usually falls on the netadmin's task list. I think it's something to do with the diagnostic/troubleshooting process - it's pretty much the same as for most network issues.
This week was 'fix the mail server' week. Resurrect it and get the mail flowing as it should.
The mail server is a moderately new 'oem' box with average kit and runs CentOS 4. For some reason sendmail is what they have used; personally I have always liked Postfix, especially when it's combined with policyd.
Sendmail was having a lot of trouble sending e-mail to a few domains that were also fairly popular among the users. I quickly narrowed the problem down to a flaky link causing connections to time out - likely an issue with using PPPoE over wireless and then going through some magical shaping gateway to the 'net. So I set up forwarding to the service provider's IronPort mail server - once I had figured out the particulars of getting sendmail to forward via an authenticating MTA, the outgoing mail queue was kept nice and empty.
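For anyone trying the same thing, the smarthost bits boil down to something like this (the host and credentials are placeholders, not the ISP's actual details):

# Build the authinfo map sendmail consults when authenticating to the relay
echo 'AuthInfo:mail.isp.example.com "U:user" "P:password" "M:PLAIN"' > /etc/mail/authinfo
makemap hash /etc/mail/authinfo < /etc/mail/authinfo
# sendmail.mc then needs roughly:
#   define(`SMART_HOST', `mail.isp.example.com')dnl
#   FEATURE(`authinfo', `hash /etc/mail/authinfo')dnl
m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf && service sendmail restart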
Once that was done, I then focused on the viral aspect of e-mail. It appears the anti-virus engine in use on the mail server was way out of date - while its defs were current, the engine simply couldn't detect many of the popular worms. So I left it as it was and installed ClamAV alongside - it's doing the job fine.
Next was figuring out how they were using procmail to process messages. This is where I discovered, to my displeasure, that they were using procmail to run SpamAssassin and the anti-virus, along with some basic procmail-style spam filtering. What a waste of resources it is to process mail at the mailbox stage. So I've shifted those tasks to the MTA where they belong. Procmail is for users to distribute mail among folders, and for vacation messaging when .forward isn't enough.
So now the server has basic virus and spam filtering abilities once again. Next step is to look at shifting over to postfix and implementing various policy daemons with their grey/white/black listing, SPF, spamtraps, HELO checking and weighted scoring goodness. I will also use amavis-new or xamime to run stuff through a few anti-virus scanners and deal with mail accordingly.
With all these changes I was forced to implement a rather draconian policy of limiting message sizes to 10MB. This was all that the ISP's mail server would accept, not that I disagree - it's e-mail, not FTP... So, being the friendly BOFH, I had to offer my flock an alternative way to send those large files to outside recipients. In comes PaknPost, an HTTP upload/emailer webapp. It's written in Perl and free - what more could I ask for? This allows users to send up to ten files to anyone they like, with virus scanning, file encryption and HTTPS transfer. I'm quite happy with the initial results; user abuse will be the ultimate test.
Saturday, April 14, 2007
Another intense week
Today I spent the morning at the new Australia Zoo "On the Beach" shop opening, basically making sure everything IT-wise went smoothly. And it did, until the afternoon, when the main link into the Zoo decided to drop, causing the shop's systems to fail... that was an interesting hour.
I'm endeavoring to maintain the level of client satisfaction that I desire in the given environment. There needs to be changes made to streamline desktop support as much as possible to allow IT to concentrate on how to improve on other services such as telephony and core services. Plus there needs to be time given to proper planning and implementation with the necessary change control procedures. I guess I'm asking to be allowed to take a proactive approach to IT services.
The highlight of the week was seeing a wombat riding in a trolley/cart type thing.
Wednesday, April 11, 2007
First week at the 'zoo
It's been very intense. A Point-of-Sale terminal upgrade across the board, working alone on Saturday, and planning and configuring various highly technical functions in an extremely short period of time - that's just a few of the many tasks I faced during the first week on the job.
There is much that needs to be done; however, the time frames for doing so are worrying. For example, setting up dual DSL connections for load-balancing with VPNs between Mooloolaba and the 'zoo in a matter of hours isn't something I would like to do often. Other future plans are to upgrade the PBX system incorporating VoIP, site-wide wireless coverage and e-mail services upgrades.
There is also a LOT of tidying up to do of existing services. I will be working on various scenarios on how to address the zoo's requirements while also trying to reduce vulnerabilities, effort and cost.
Another note - I need to brush up on my 'controlling client expectations' exercises.
Wednesday, April 04, 2007
The end of one saga, the beginning of another
Today marks a turning point in my IT career. I have officially finalized my employment at AccessPlus and tomorrow I will continue my career at the Australia Zoo.
The send off wasn't that extravagant, a simple lunch with Don and Andrew and at the end of the day I said my goodbyes and left without further discussion.
I will continue to have something to do with Marinanet - exactly what is still to be decided. I will also continue to consult independently to local businesses and individuals on their wireless/network, Linux/BSD and OSS needs, albeit on a purely part-time basis.
Given appropriate authority, I will continue writing about my work at the 'zoo. I feel it will prove just as interesting as my work at AccessPlus - hopefully more so.
Tuesday, March 27, 2007
Yawarra Eber 220
Finally got some new toys to play with. Three 'application servers' to go over to Curtin University to provide hotspot services to the student residents.
There are three models of the Eber - the 210, 220 and 230. I selected the 220 mainly because it uses Intel network adapters (3 x 10/100 and 1 x 10/100/1000). The 230 would have been nice due to its increased grunt and memory support; however, I wasn't 100% sure whether its 4 x 10/100/1000 Realtek adapters would be supported by Mikrotik RouterOS - not by the 2.9 version anyway.
Only two small issues with the units. First, the screws used to fasten the lid to the unit are easily shredded - they develop some kind of seal against the aluminum lid, and combined with the not-so-hard metal they're made of, this makes them difficult to remove. Although out of 3 units with 5 screws each (top lid), only 4 ended up like this, and most were on the one unit. A quick e-mail to Yawarra had this sorted quickly - more screws are coming.
Secondly, the location of the Compact Flash card slot is annoying. It's on the bottom of the board, so you have to remove the entire board to plug the CF card in. Removing the board isn't particularly easy, since you have to undo the VGA and COM port lugs and 4 mounting screws, and unplug 10 LEDs and the mainboard power connector. A trap-door style arrangement on the bottom of the unit would fix this nicely.
The Commell LE-564 'single board computer' is an EBX (Embedded Board eXpandable) form factor board based on the Via Eden CLE266 chipset. In this case it utilizes a Via Eden-ESP 533MHz CPU. The board is well made and the components all appear to be rated well above their operating requirements, suitable for hostile environments. The board does have provisioning for a directly attached 5V DC power source; however, Yawarra have opted for a 12V DC-DC power supply with an appropriate 12V 4A regulated power pack. I believe this is to accommodate the power demands of a hard disk. I'm a little fearful of the power supply because its quality doesn't seem to match that of the board, and in my experience power supplies are usually the first item to fail. However, I'm sure Yawarra have tested the units thoroughly.
I flashed a 64MB CF card with the latest stable version of RouterOS (2.9.41) and completed the reasonably tedious task of plugging the CF card into the board. Because I normally just use the console to install and configure everything, I didn't bother plugging in the keyboard/mouse PS/2 sockets or a monitor - straight serial into the COM port at 9600 1,8,1,none.
I flicked the rocker switch on the front of the unit and watched the console display a typical BIOS POST screen with a memory count in progress (which you can't disable). I tried pressing the delete key to enter the BIOS configuration with no luck - there are many interpretations of 'delete' in the console world, so I left it be. Once the POST had completed, the usual hardware information screen displayed and that was it... I thought it had crashed, or that there was something in the BIOS causing it to hang while searching for something to boot from (it had automatically detected the CF card as HDD-0).
This prompted me to connect a screen, keyboard and mouse. This is when I discovered that it had in fact booted and had started the RouterOS installation - it simply wasn't redirecting the screen output to the console as expected. I think this can be fixed by using its "Universal Console Redirection (UCR)" feature? Once I discovered this I just let it do its thing and reboot itself - once it had booted, the typical RouterOS username/password prompt displayed on the console.
Pleasantly, I found that RouterOS discovered all the necessary hardware and ran fine. The only little oddity I found is that making a change to one of the three 10/100 ethernet interfaces causes things to pause for about 2-3 seconds before continuing on. The gigabit port didn't display this behavior.
I have set it all up as a fully functional hotspot, as it will be when installed at Curtin. I haven't yet had a chance to do any bandwidth or system loading tests - I'll be sure to update the blog when I do. However, everything has worked out well and no problems like oversized frames/VLAN issues have occurred.
Three Eber 220's
Commell LE-564
Internal view showing 3.5" HDD mounting plate, board and DC-DC power converter
There are three models of the Eber - the 210, 220 and 230. I selected the 220 mainly because it uses Intel network adapters (3 x 10/100 and 1 x 10/100/1000). The 230 would have been nice given its extra grunt and memory support, however I wasn't 100% sure whether its 4 x 10/100/1000 Realtek adapters would be supported by Mikrotik RouterOS - not by the 2.9 version anyway.
Only two small issues with the units. First, the screws used to fasten the lid to the case are easily stripped - they seem to bind against the aluminium lid, and combined with the fairly soft metal they're made of, this makes them difficult to remove. That said, of the 15 top-lid screws across the 3 units, only 4 ended up like this, and most of those were on the one unit. A quick e-mail to Yawarra had this sorted quickly - more screws are on the way.
Secondly, the location of the Compact Flash card slot is annoying. It's on the underside of the board, so you have to remove the entire board to plug a CF card in. Removing the board isn't particularly easy either, since you have to undo the VGA and COM port lugs, four mounting screws, ten LED leads and the mainboard power connector. A trap-door style arrangement in the bottom of the case would fix this nicely.
The Commell LE-564 single board computer is an EBX (Embedded Board eXpandable) form factor board based on the VIA CLE266 chipset, in this case paired with a VIA Eden-ESP 533MHz CPU. The board is well made and the components all appear to be generously rated, suitable for hostile environments. The board does have provision for a directly attached 5V DC power source, however Yawarra have opted for a 12V DC-DC power supply with an appropriate 12V 4A regulated power pack. I believe this is to accommodate the power demands of a hard disk. I'm a little wary of the power supply because its quality doesn't seem to match that of the board, and in my experience power supplies are usually the first item to fail. However, I'm sure Yawarra have tested the units thoroughly.
I flashed a 64MB CF card with the latest stable version of RouterOS (2.9.41) and completed the reasonably tedious task of plugging the CF card into the board. Because I normally just use the serial console to install and configure everything, I didn't bother plugging in a PS/2 keyboard/mouse or a monitor - straight serial into the COM port at 9600 baud, 8 data bits, no parity, 1 stop bit.
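For anyone following along from a Linux box, connecting at those settings is a one-liner (the port name is an assumption - substitute whichever serial device your machine actually has):

    minicom -b 9600 -D /dev/ttyS0
    # or, if you prefer screen:
    screen /dev/ttyS0 9600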
Flicked the rocker switch on the front of the unit and watched the console display a typical BIOS POST screen with a memory count in progress (which you can't disable). I tried pressing the Delete key to enter the BIOS configuration with no luck - there are many interpretations of 'delete' in the serial console world, so I left it be. Once the POST had completed, the usual hardware information screen displayed and that was it... I thought it had crashed, or that something in the BIOS had caused it to hang while searching for something to boot from (it had automatically detected the CF card as HDD-0).
This prompted me to connect a screen, keyboard and mouse. That's when I discovered it had in fact booted and had started the RouterOS installation - it simply wasn't redirecting the screen output to the serial console as expected. I suspect this can be fixed using the board's "Universal Console Redirection (UCR)" feature, though I haven't confirmed that yet. Once I'd worked this out, I just left it to do its thing and reboot itself - once it had booted, the typical RouterOS username/password prompt appeared on the console.
Pleasantly, I found that RouterOS discovered all the necessary hardware and ran fine. The only little oddity I found is that making a change to one of the three 10/100 Ethernet interfaces caused things to pause for two or three seconds before continuing on. The gigabit port didn't display this behaviour.
I have set it all up as a fully functional hotspot, as it will be when installed at Curtin. I haven't yet had a chance to do any bandwidth or system loading tests - I'll be sure to update the blog when I do. However, everything has worked out well and no problems like oversized frames or VLAN issues have occurred.
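For reference, the bones of a RouterOS 2.9 hotspot along these lines look something like the following. This is a sketch only - the addresses, pool, interface and RADIUS server are placeholders, not the Curtin configuration:

    /ip pool add name=hs-pool ranges=10.5.50.10-10.5.50.254
    /ip address add address=10.5.50.1/24 interface=ether2
    /ip hotspot profile set default use-radius=yes
    /ip hotspot add name=hotspot1 interface=ether2 address-pool=hs-pool profile=default
    /radius add service=hotspot address=10.0.0.5 secret=examplesecret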
Three Eber 220s
Commell LE-564
Internal view showing 3.5" HDD mounting plate, board and DC-DC power converter
Friday, March 23, 2007
Induction day at the Zoo
Yesterday was pretty much my first day working at Australia Zoo. The whole time was dedicated to introducing us to the operational side of the zoo - what to do, what not to do. It was fairly intense, with a lot of information to absorb in one day, but it was very useful knowledge nonetheless.
The parts I found most interesting were the health and safety, privacy and security aspects. I haven't been exposed to an organisation so much within the public eye before, so simple things like "don't point out visiting celebrities" are things I would never have considered.
The staff there seem like a happy crowd that will be good to work with. Plus there's a lot of variety in roles and personalities, so there won't be any of that small-business monotony to contend with. However, being a reasonably large organisation, it will be in my best interest to stay away from the gossip and rumours that tend to breed in such environments.
Tuesday, March 20, 2007
Hotspot Client Interface
Idea:
XUL + WISPr Smart Client (iPass style)
In addition to what I've already done with the hotspot interfaces and whatnot, I'm thinking about writing a user-installable client application that does everything automatically upon discovery of a supported hotspot.
Using the client would also be beneficial for users with particular needs. It could automatically adjust the hotspot to suit a particular application on the user's PC, and it could provide accessibility options for people with a disability as well, because it would basically use whatever they have set in their OS (I guess?).
From the outset, XUL looks like it will suit my particular requirements neatly. It supports multiple platforms with little fuss, it uses the MDC approach, and it's based upon standard, future-proofed languages and formats.
I'll fiddle around and see what I can come up with.
Friday, March 16, 2007
Setting up for Production
Things are swinging into production now. Two sites have gone live in 'test mode' - once I'm happy all the bugs are gone, I'll push them into production and see how it all goes. Here's hoping.
I started to play around with the FreeRADIUS rlm_perl module to see about modifying RADIUS requests/replies before they hit the database. The main reason is to swap those bloody Colubris accounting values. I must be getting good at this Perl stuff, because it was stupidly easy. Now I have a single Perl script sitting in RADIUS land that shuffles data around - so many possibilities have opened up with this level of control.
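For the curious, the guts of it look roughly like this. This is a minimal sketch rather than my actual script, and the NAS-Identifier check is an assumption - match on whatever reliably identifies the Colubris gear in your requests:

    # rlm_perl calls functions like accounting() for each packet, exposing
    # the request as the global %RAD_REQUEST hash.
    use strict;
    use warnings;

    our %RAD_REQUEST;

    use constant RLM_MODULE_OK      => 2;  # success, request untouched
    use constant RLM_MODULE_UPDATED => 8;  # success, request modified

    sub accounting {
        # Only touch packets from the Colubris controllers (hypothetical check).
        return RLM_MODULE_OK
            unless ($RAD_REQUEST{'NAS-Identifier'} || '') =~ /colubris/i;

        # Swap the octet counters so they're stored from the user's perspective.
        @RAD_REQUEST{'Acct-Input-Octets', 'Acct-Output-Octets'} =
            @RAD_REQUEST{'Acct-Output-Octets', 'Acct-Input-Octets'};

        return RLM_MODULE_UPDATED;
    }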
Curtin is still an ongoing interest. Agreements have been signed and supposedly the project should be in full swing. And so it should be - they have specified an end-of-April deadline. However, I haven't received any equipment; it hasn't even been ordered. Something about leasing it - frankly I don't care, because if they want me to finish the preparation before I leave then I'll need to see something next week at the latest.
Hotspot Website Details
Tuesday, March 06, 2007
A change in career
Today I gave notice at AccessPlus. I have accepted a position as systems administrator at Australia Zoo, beginning the 5th of April.
My primary role will be administering two web servers located in a datacentre in the US and looking after the local e-mail services. I will also assist with desktop support.
I'm still yet to put detail into my initial plans for the new job, but I have some lofty goals in mind for the current environment. I'm hoping my responsibilities will expand into the netadmin side of things - it's only natural for me to pursue my comfort zone, no?
As a result of this move I have lost my webhosting capability at AccessPlus. Thus I have set up a Google Apps account and shifted my domain over - so now www/mail/blog/docs/start.naturalnetworks.net all point to one Google app or another. I'll be using this as a kind of fancy wiki - documenting my knowledge and publishing articles. Still need a place to store downloads, though.
Thursday, March 01, 2007
Revised Hotspot Interface
Things have come a long way since I started to develop my own hotspot backend. It now has PayPal and subscriber support. The administration backend was written by another lad, but essentially that's just a PHP interface to the database. A few extra scripts, such as the PayPal IPN receiver and a subscriber account scrubber cron job, run in the background.
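The IPN receiver boils down to PayPal's standard echo-back handshake. Here's a minimal sketch of that half in Perl - not the actual script, and the CGI plumbing around it is assumed:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use CGI;
    use LWP::UserAgent;

    my $query = CGI->new;

    # Echo the notification back to PayPal with cmd=_notify-validate prepended.
    my %params = map { $_ => scalar $query->param($_) } $query->param;
    my $res = LWP::UserAgent->new->post(
        'https://www.paypal.com/cgi-bin/webscr',
        { cmd => '_notify-validate', %params },
    );

    if ($res->is_success && $res->content eq 'VERIFIED') {
        # Genuine notification - credit the matching hotspot account here.
    }
    else {
        # INVALID or transport error - log it and touch nothing.
    }

    print $query->header;  # acknowledge receipt either way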
The interface supports Mikrotik RouterOS and the Colubris MSC series, and it's not difficult to add different access controller types. Once again, I will be looking at making the access controller code modular to make additions easier.
At some stage I will write my own administration interface. It's a fairly large task in its own right when you consider that I have to support multiple locations, each with their own pricing plans and various pricing items. Throw in all the user management and the invoicing/revenue sharing among locations, and it's a complicated task that requires a fair bit of thought.
I'm definitely getting better at coding in Perl, at least. I still have a fair bit to learn about the more advanced aspects of the language, such as object-oriented programming (OOP) and package/module writing - probably best to start on the packages/modules first.
Trouble Shooting Process
This has been hanging on my wall for some time now; thought I'd record it here just in case it gets lost.
1. Initial issue description
2. Collect further information
3. Define the issue
4. Document and create brief
5. Identify associated systems and subsystems
6. Devise and apply tests
7. Assess and document test results
8. Develop and assess solutions
9. Implement and monitor solutions
10. Document outcome
Friday, January 12, 2007
University Accommodation Network Design
I have been in the process of designing the network topology and enacting it within a Mikrotik RouterOS 2.9.38 configuration. It has been an interesting exercise, since it's not every day you get to design a new Internet access system around a quality physical infrastructure.
The University has a Cisco switched network throughout the on-campus student accommodation. This entails Catalyst 29xx and 3550 switches running VLANs, trunked into a routed/switched core network.
Particulars about the environment:
- 100Mbit to each room, 1Gbit between campuses
- Each campus has approximately 300 units - a total of 950 units
- There are two VLANs per campus
This will give guests the option of using the hotspot or the PPPoE service to connect. I would expect most will use the hotspot given its simplicity; however, there will be the power users who want to run a 24x7 connection using a broadband router, possibly wishing to have a public IP to run other services.
Update:
I have modified the plan and will now use three servers - one per campus. The main reason for this is to simplify the configuration on each of the servers and provide better resources to each campus. The VLANs will still be in place, so I will need to use a bridge on each server to combine the two VLANs and offer both the hotspot and the PPPoE service across them. Running both a hotspot and a PPPoE service on the one interface is generally frowned upon, so I will investigate a single PPPoE server that services all three campuses. A rough sketch of the per-campus configuration follows.
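This is a sketch only, not the live config - interface names, VLAN IDs and addressing are all invented, and the exact command syntax varies a little between RouterOS versions:

    # Two accommodation VLANs trunked in on ether1, bridged together,
    # with the hotspot and PPPoE server both sitting on the bridge.
    /interface vlan add name=vlan-resA vlan-id=101 interface=ether1
    /interface vlan add name=vlan-resB vlan-id=102 interface=ether1
    /interface bridge add name=br-campus
    /interface bridge port add bridge=br-campus interface=vlan-resA
    /interface bridge port add bridge=br-campus interface=vlan-resB
    /ip address add address=10.20.0.1/22 interface=br-campus
    /ip pool add name=campus-pool ranges=10.20.1.1-10.20.3.254
    /ip hotspot add name=campus-hs interface=br-campus address-pool=campus-pool
    /interface pppoe-server server add interface=br-campus service-name=resnet one-session-per-host=yes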
Network Topology including services Revision 5:
Network Topology including services Revision 2:
Network Topology including services Revision 1:
Friday, January 05, 2007
Nifty WiFi Configuration for RouterOS 2.9.38
An unusual situation prompted me to create a rather elaborate WDS configuration between three APs. Originally it was meant to be a simple AP with two clients, with the clients configured as wireless bridges using WDS. However, one of the two clients ended up not having good line of sight back to the AP, so I had to get creative and create another WDS link that bounces off the other client, while preserving the important services...
Each AP/client has three interfaces - one Ethernet, one CM9 and one SR5. Originally the SR5s were meant to connect the AP and clients to each other, and the CM9s were there for backup/hotspot access. The Ethernet ports connect to the LAN/PPPoE network at each client site.
Basically, I created a virtual AP on the "Varsity" hotspot interface and turned on WDS, setting it so that new WDS interfaces are added to the 'Backbone' bridge, which is shared with the Ethernet and SR5 interfaces. This way the normal hotspot AP can continue to function as usual, although the interface will be under extra load.
Then on "The Village" hotspot interface I set its primary role to 'station wds' and created a VirtualAP to run its hotspot onto. This allows me to maintain the hotspot on this interface.
The direct link back to the AP did still work, however, so the link between the two clients was set up under STP for auto-failover. The wireless side of all this is sketched below.
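This is a rough sketch of the wireless configuration only - interface names and SSIDs are placeholders, and the syntax is approximate:

    # Varsity: a WDS-enabled virtual AP on the hotspot radio; new WDS
    # links are added to the shared 'Backbone' bridge automatically.
    /interface wireless add name=vap-wds master-interface=wlan-cm9 ssid=backbone \
        wds-mode=dynamic wds-default-bridge=Backbone disabled=no

    # The Village: the radio itself becomes the WDS station for the backhaul,
    # and the local hotspot moves onto a virtual AP of its own.
    /interface wireless set wlan-cm9 mode=station-wds ssid=backbone
    /interface wireless add name=vap-hotspot master-interface=wlan-cm9 \
        ssid=village-hotspot disabled=no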
Topology: