Wednesday, November 11, 2009
Port Testing
You can use the following troubleshooting steps that are appropriate for the type of problem that you are experiencing. For example, if you are having problems sending over SMTP between two of your Microsoft Exchange 2000 Server servers, you can test the SMTP connectivity by using Telnet on the sending server to connect to port 25 on the destination server. By default, SMTP listens on port 25. Alternatively, if you are having problems receiving SMTP mail from the Internet, you can follow the steps that are listed in this article to test connectivity to your SMTP server from a host that resides on the Internet and that is not on your network.
NOTE: This article only outlines a connectivity test for messaging with Exchange Server. If you are not able to connect to the Exchange server, search the Knowledge Base for the specific symptoms or error messages you are experiencing. For additional information on troubleshooting Exchange transport issues, refer to the Microsoft Knowledge Base.
There are several different variations of SMTP in the Microsoft product line. The Microsoft Windows product line has an SMTP service that is included with Internet Information Services (IIS), and in Microsoft Windows NT Server 4.0, the SMTP service was included in the Option Pack. In more recent versions of Windows, IIS has been integrated in the operating system, and you can add IIS by using Add or Remove Programs in Control Panel. Additionally, Exchange 2000 and Microsoft Exchange Server 2003 use the existing SMTP service from IIS with additional features. Microsoft Exchange Server versions 4.0, 5.0, and 5.5 all come with their own versions of SMTP in the form of the Internet Mail Connector (IMC) or Internet Mail Service (IMS).
Note In Exchange 5.0 and later, the Internet Mail Connector (IMC) is renamed the Internet Mail Service.
Before you start the Telnet session, you must have the full SMTP e-mail address of the destination user who you want to send this test message to. This e-mail address must be in the standard SMTP format, for example user@domain.com.
Make sure that SMTP has started on the server that runs the SMTP service. To test if SMTP has started, you can run the basic tests that are listed in this article and verify that you receive the 220 response from the remote server. This also verifies that SMTP is running.
Notes
- Some Telnet applications require you to turn on local echoing to see the commands that you are typing. To do this while in a Microsoft Telnet session, type set local_echo at the command prompt.
- In Microsoft Windows XP, type set localecho instead of set local_echo.
Basic Testing
Note Microsoft Telnet does not permit you to use the Backspace key. If you make a mistake when you type a command, you must press ENTER, and then start a new command.
In the following steps, you run Telnet from the command line. To open a command line, click Start, click Run, type cmd in the Open box, and then click OK.
- You can start a Telnet session by using the Telnet command in the following format:
Note Press ENTER after you type each line.
telnet servername portnumber
For example, type:
telnet mail.contoso.com 25
Note You can replace servername with the IP address or the FQDN of the SMTP server that you want to connect to. Remember to press ENTER after each command.
If the command works, you receive a response from the SMTP server that is similar to the following:
220 site.contoso.com Microsoft Exchange Internet Mail Connector
Note There are different versions of Microsoft SMTP and third-party SMTP servers, and you may receive different responses from the receiving server. What is important is that you receive the 220 response with the FQDN of the server and the version of SMTP. Additionally, all versions of Microsoft SMTP include the term "Microsoft" in the 220 response.
- Start communication by typing the following command:
EHLO test.com
Note You can use the HELO command, but EHLO is a verb that exists in the Extended SMTP verb set that is supported in all current Microsoft implementations of SMTP. It is a good idea to use EHLO, unless you believe that there is a problem with the Extended SMTP verbs.
If the command is successful, you receive the following response:250 OK
- Type the following command to tell the receiving SMTP server who the message is from:
MAIL FROM: Admin@test.com
Note This address can be any SMTP address that you want, but it is a good idea to consider the following issues:
- Some SMTP mail systems filter messages based on the MAIL FROM: address and may not permit certain IP addresses to connect or may not permit the IP address to send e-mail to the SMTP mail system if the connecting IP address does not match the domain where the SMTP mail system resides. In this example, that domain is test.com.
- If you do not use a valid e-mail address when you send a message, you cannot determine if the message had a delivery problem, because the non-delivery report (NDR) cannot reach an IP address that is not valid. If you use a valid e-mail address, you receive the following response from the SMTP server:
250 OK - MAIL FROM Admin@test.com
- Type the following command to tell the receiving SMTP server whom the message is to.
Note It is a good idea to always use a valid recipient SMTP address in the domain that you are sending to. For example, if you are sending to john@domain.com, you must be certain that john@domain.com exists in the domain. Otherwise, you will receive an NDR.
Type the following command with the SMTP address of the person you want to send to:
RCPT TO: User@Domain.Com
You receive the following response:
250 OK - Recipient User@Domain.Com
- Type the following command to tell the SMTP server that you are ready to send data:
DATA
You receive the following response:
354 Send data. End with <CRLF>.<CRLF>
- You are now ready to start typing the 822/2822 section of the message. The user will see this part of the message in their inbox. Type the following command to add a subject line:
Subject: test message
Press ENTER two times. You do not receive a response from this command.
Note The two ENTER presses comply with Request for Comments (RFC) 822 and 2822: the message headers must be followed by a blank line.
- Type the following command to add message body text:
This is a test message
You will not see a response from this command.
- Type a period (.) at the next blank line, and then press ENTER. You receive the following response:
250 OK
- Close the connection by typing the following command:
QUIT
You receive the following response:
221 closing connection
- Verify that the recipient received the message that you sent. If any error event messages appear in the application event log, or if there are problems receiving the message, check the configuration or the communication to the host.
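Putting the basic steps together, a complete test session looks something like the following. The server banners and response text will vary by SMTP implementation; the names here are only examples taken from the steps above.
telnet mail.contoso.com 25
220 site.contoso.com Microsoft Exchange Internet Mail Connector
EHLO test.com
250 OK
MAIL FROM: Admin@test.com
250 OK - MAIL FROM Admin@test.com
RCPT TO: User@Domain.Com
250 OK - Recipient User@Domain.Com
DATA
354 Send data. End with <CRLF>.<CRLF>
Subject: test message

This is a test message
.
250 OK
QUIT
221 closing connection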
Advanced Testing
To request a delivery receipt for the test message, see step 4 in the "Basic Testing" section of this article to make sure that the information provided is a valid e-mail address that can receive the delivery receipt. Then in step 5 in the "Basic Testing" section of this article, type the following command in the Telnet session:
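The delivery-receipt request is normally made by adding a NOTIFY parameter to the RCPT TO command, assuming the receiving server supports the ESMTP DSN extension (RFC 3461); as a sketch:
RCPT TO: User@Domain.Com NOTIFY=success,failure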
Thursday, October 29, 2009
Backup
Mondo Rescue is a GPL disaster recovery solution. It supports Linux (i386, x86_64, ia64) and FreeBSD (i386). It's packaged for multiple distributions (RedHat, RHEL, SuSE, SLES, Mandriva, Debian, Gentoo).
It supports tapes, disks, network and CD/DVD as backup media, multiple filesystems, LVM, software and hardware Raid.
You need it to be safe.
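As a rough sketch of a typical invocation (the flags are taken from the mondoarchive manual; verify them against the version you install), a backup of the whole system to ISO images could look like:
# back up to ISO images under /var/cache/mondo, excluding that directory itself
mondoarchive -Oi -d /var/cache/mondo -E /var/cache/mondo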
Saturday, August 1, 2009
cobalt raq rom upgrade
Updating the ROM
Most personal computers contain a small amount of ROM that stores critical programs such as the program that boots the computer. In addition, ROMs are used extensively in calculators and peripheral devices such as laser printers, whose fonts are often stored in ROMs.
Cobalt RaQs have a ROM inside them in which the primary boot kernel and some utilities are kept. This software is like a PC's BIOS. The ROM in RaQs is the main thing that makes them different from any other x86 machine.
In order to install Strongbolt OS on RaQs we need to make sure that the right ROM version is installed. The ROM version required is 2.10.3. The full details of this ROM can be found here: http://sourceforge.net/projects/cobalt-rom
There are a few methods to update the ROM; we shall discuss them below.
ROM Versions
There have been several versions of the Cobalt ROM over the years; they range from 2.3.0 to 2.10.3.
Most RaQ3s have a 2.3.x version of the ROM. Some of these older ROMs can be problematic.
Most RaQ4s have a newer ROM. Most of these ROM versions are automatically updated by the Strongbolt install CD.
We have never had any problems related to RaQ550 ROMs. These are updated automatically by the Strongbolt install CD.
ROM Version | Updates Automatically with Strongbolt CD | Updates using "recovery method" on Strongbolt CD | Other Method Required |
RaQ3 2.3.0 ROM | no | no | yes |
RaQ4 2.3.34 ROM | no | yes | no |
RaQ4 2.3.40 ROM | yes | n/a | no |
RaQ550 ROM | yes | n/a | no |
Where another method is required, you can use the Cobalt OS method or the Advanced Method described below.
Using the Strongbolt CD to update the ROM
We have built into the install disk some recovery tools in order to help rescue a system.
The recovery method can also be used to update the ROM (on a RaQ3/4 with ROM versions 2.3.34+). Do not attempt this with a RaQ550; it will damage it! RaQ550s update automatically using the Strongbolt install CD.
Follow these instructions to use the install disk as a recovery console.
- Boot the recovery disk with the RaQ attached by a network cable. On the early ROMs (2.3.0 - 2.3.34) you will see that the LCD output does not change.
- Launch a terminal on the PC with the Strongbolt install disk booted. This is done by right clicking on the desktop area and choosing "XShells > Dark"
- Type "sudo ssh -l root 192.168.0.254"
- At the password prompt - type "admin"
Once logged into the system recovery console (as described above), the following commands are available to perform a manual ROM update:
There are three ROM types for RaQ 3/4's: Intel, AMD, and ST.
For RaQ 3/4's with an Intel ROM:
flashtool -w /install/rom/cobalt-2.10.3-ext3-1M.rom
For RaQ 3/4's with an AMD or ST ROM:
flashtool550 -w /install/rom/cobalt-2.10.3-ext3-1M.rom
After performing any of the above methods, reboot using :
reboot
Using Cobalt OS to update the ROM
If you have a very early ROM (2.3.0) and you still have the Cobalt OS on the hard drive, you can easily update the ROM using the following commands. Do not attempt this with a RaQ550; it will damage it! RaQ550s update automatically using the Strongbolt install CD.
Power up the RaQ and gain ssh access to it.
If the ROM is an AMD/ST do:
wget http://www.osoffice.co.uk/linux/roms/flashtool-amd-st
or if the ROM is an INTEL do:
wget http://www.osoffice.co.uk/linux/roms/flashtool-intel
Whichever type of ROM you have, execute the following commands:
wget http://www.osoffice.co.uk/linux/roms/cobalt-2.10.3-ext3-1M.rom
Use the following command to become root:
su
(it will prompt you for a password. The password is the same as the admin password).
if the ROM is an AMD or ST do:
chmod +x flashtool-amd-st && ./flashtool-amd-st -w cobalt-2.10.3-ext3-1M.rom
if it is an Intel do:
chmod +x flashtool-intel && ./flashtool-intel -w cobalt-2.10.3-ext3-1M.rom
then reboot using
reboot
Strongbolt ROM Flashing Advanced Method
If the install disk does not successfully flash the ROM to the required version, and you have a Null modem serial connection, the following method can be used in order to update the ROM to the required version.
Do not attempt this with a RaQ550; it will damage it! RaQ550s update automatically using the Strongbolt install CD.
The ROM can be updated through the serial port. This requires a Linux PC that allows two processes (terminal sessions) to use the serial port at the same time.
Replace "cobalt-1M.rom" below with the filename of the ROM image you are installing. The examples use /dev/ttyS0 (COM1); if your serial cable is on COM2, use /dev/ttyS1 instead.
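To open the console from the Linux PC you can use any serial terminal program. As an example (the Cobalt serial console normally runs at 115200 baud, 8N1, but verify this for your unit):
# open the serial console on COM1 at 115200 baud
minicom -D /dev/ttyS0 -b 115200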
Get into the ROM Menu Mode
For 2.9.x ROM's >
----------------------------------------------------
Cobalt:Main Menu> eeprom
Cobalt:EEPROM Menu> write_eeprom
Re-Initializing flash: done
EEPROM in bank 0 is 1024KB (AMD AM29F080B)
Loading ROM image from serial port...
now switch to another terminal session and
cat cobalt-1M.rom > /dev/ttyS0
the ROM will display a kbyte counter:
1024k done
Press any key to abort... 3 2 1
Erasing eeprom in bank 0: done
Writing eeprom in bank 0: 0x00100000:0x00100000 done
Verifying eeprom in bank 0: 0x00100000:0x00100000 done
Cobalt:EEPROM Menu> main
Cobalt:Main Menu> reboot
Rebooting - please wait............
----------------------------------------------------
For 2.3.x ROM's
----------------------------------------------------
Cobalt:Main Menu> boot
Cobalt:Boot Menu> dl_kernel
Loading Kernel: -
now switch to another terminal and
cat cobalt-1M.rom > /dev/ttyS0
the ROM will display a spinning char
Loading Kernel: done
Cobalt:Boot Menu> main
Cobalt:Main Menu> eeprom
Cobalt:EEPROM Menu> write_eeprom 0
Erasing eeprom in bank 0: done
Writing eeprom in bank 0: 0x00100000:0x00100000 done
Verifying eeprom in bank 0: 0x00100000:0x00100000 done
Cobalt:EEPROM Menu> main
Cobalt:Main Menu> reboot
Rebooting - please wait.............
----------------------------------------------------
Tuesday, July 28, 2009
solar fans
solar gable fan (attach below the existing attic vent)
Wednesday, July 8, 2009
create ssh keys
---cut---
ssh-keygen -t dsa
scp id_dsa.pub root@ip23:/root/.ssh/authorized_keys2
ssh ip23
---cut---
HOWTO: set up ssh keys
Paul Keck, 2001
Getting Started
1. First, install OpenSSH on two UNIX machines, hurly and burly. This works best using DSA keys and SSH2 by default as far as I can tell. All the other HOWTOs I've seen seem to deal with RSA keys and SSH1, and the instructions not surprisingly fail to work with SSH2.
2. On each machine type ssh somemachine.example.com and make a connection with your regular password. This will create a .ssh dir in your home directory with the proper perms.
3. On your primary machine where you want your secret keys to live (let's say hurly), type
ssh-keygen -t dsa
This will prompt you for a secret passphrase. If this is your primary identity key, make sure to use a good passphrase. If this works right you will get two files called id_dsa and id_dsa.pub in your .ssh dir. Note: it is possible to just press the enter key when prompted for a passphrase, which will make a key with no passphrase. This is a Bad Idea ™ for an identity key, so don't do it! See below for uses of keys without passphrases.
4.
scp ~/.ssh/id_dsa.pub burly:.ssh/authorized_keys2
Copy the id_dsa.pub file to the other host's .ssh dir with the name authorized_keys2.
5. Now burly is ready to accept your ssh key. How to tell it which keys to use? The ssh-add command will do it. For a test, type
ssh-agent sh -c 'ssh-add < /dev/null && bash'
This will start the ssh-agent, add your default identity (prompting you for your passphrase), and spawn a bash shell. From this new shell you should be able to:
6.
ssh burly
This should let you in without typing a password or passphrase. Hooray! You can ssh and scp all you want from this bash shell and not have to type any password or passphrase.
Using X Windows
Now this is all well and good, but who wants to run their whole life from a single bash instance? If you use an X window system, you can type your passphrase once when you fire up X and all subprocesses will have your keys stored.
1. In the ~/.xinitrc file, modify your line which spawns windowmaker to read:
exec ssh-agent sh -c 'ssh-add < /dev/null && wmaker'
This will prompt you for your passphrase when you start up X, and then not again. All shells you spawn from X will have your keys stored.
2. This brings up a security issue: if someone comes upon your X session, they can spawn ssh sessions to burly and other hosts where you have put your id_dsa.pub information into the authorized_keys2 file. A locking screensaver like xlock is vital.
Different usernames
By default ssh assumes the same username on the remote machine. If you have a different username on the other machine, follow the normal ssh procedure:
[pkeck@hurly /]$ ssh -l paulkeck burly
More keys!
You are not limited to one public key in your authorized_keys2 file. Append as many as you like! If you, say, generated a unique private key on every machine you log into, and then appended the id_dsa.pub files to each of the other machines' authorized_keys2 file, you'd have the equivalent of a .rhosts file with two added benefits:
* Someone would need to know your passphrase to use it, so a cracker gaining access to an account on one machine will not jeopardize the other accounts. (If you foolishly use the same passphrase or, heaven forbid, id_dsa file on all the hosts, it would make it easier to exploit, so don't do that.)
* Traffic is encrypted.
This command will do it without requiring an scp and vi session:
cat foo.pub |ssh burly 'sh -c "cat - >>~/.ssh/authorized_keys2"'
Single-purpose keys
So now you're sshing and scping your brains out. Sooner or later you'll come across one or both of these situations:
1. You want to automate some ssh/scp process to be done after hours, but can't because no one will be around to type the passphrase.
2. You want to allow an account to do some sort of ssh/scp operation on another machine, but are hesitant to append a key to your authorized_keys2 file because that essentially "opens the barn door" to anything that other account wants to do, not just the one operation you want to let it do. (This is the situation if you use a .shosts file.)
Single-purpose keys to the rescue!
1. Make yourself another key:
ssh-keygen -t dsa -f ~/.ssh/whoisit
Just press return when it asks you to assign it a passphrase- this will make a key with no passphrase required. If this works right you will get two files called whoisit and whoisit.pub in your .ssh dir.
2. cp ~/.ssh/whoisit.pub tempfile
We want to work on it a little. tempfile should consist of one really long line that looks kind of like this:
ssh-dss AAAAB3NzaC1k[...]9qE9BTfw== pkeck@hurly.example.com
3. Edit tempfile and prepend some things to that line so that it looks like this:
command="echo I\'m `/usr/ucb/whoami` on `/usr/bin/hostname`",no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-dss AAAAB3NzaC1k[...]9qE9BTfw== whoisitnow
That will do what we want on Solaris; to try this example on Linux use this:
command="echo I\'m `/usr/bin/whoami` on `/bin/hostname`",no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-dss AAAAB3NzaC1k[...]9qE9BTfw== whoisitnow
The stuff to prepend is your command that will be run when this key is activated, and some options to keep it from being abused (hopefully). The last thing on the line is just a comment, but you probably want to set it to something meaningful.
Also, most examples I see use no-pty as an additional option, but this messes up the carriage-return/linefeediness of the output of the above example. (Try it.) I haven't looked into it enough to see why you would want it, but there you go.
4. cat tempfile |ssh burly 'sh -c "cat - >>~/.ssh/authorized_keys2"'
Append tempfile to your authorized_keys2 file on burly.
5. To "activate" (or perhaps "detonate") the key from hurly (or anywhere that has the secret key), do this (maybe there is a better way?):
ssh -i ~/.ssh/whoisit burly
The following also works but is cumbersome:
ssh-agent sh -c 'ssh-add ~/.ssh/whoisit < /dev/null && ssh burly'
You can also append this "command key" to a different account's authorized_keys2 file and trigger it from a different username. You just need the secret key. Like so:
ssh -i ~/.ssh/whoisit -l paulkeck burly
The next leap in the pattern is something like this:
ssh -i /home/pkeck/.ssh/whoisit -l paulkeck burly
This could be run by any user on the box if they could read your secret key, so always keep your .ssh dir and all your keys chmodded to 700 and 600 respectively.
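For example, using the key files from the steps above:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/id_dsa ~/.ssh/whoisit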
6. You could make single-purpose keys with commands to (haven't tested all these):
*
mt -f /dev/nst0 rewind
Rewind a tape on a remote machine
*
nice -n 19 dd of=/dev/nst0
Send STDIN to that tape drive. Maybe STDIN is a tar stream from tar -cf -.
*
nice -n 19 dd if=/dev/nst0
Read stuff from there to my STDIN
*
cat claxon.au > /dev/audio
Play an alarm noise on a remote machine
*
cat - > /dev/audio
Play any sound you send on STDIN
*
cat - > /etc/dhcpd.conf
Replace /etc/dhcpd.conf with some stuff from STDIN on the triggering machine (sounds like a temp file would be better)
You can send the stuff on STDIN with something like this on the triggering machine:
ssh-agent sh -c 'ssh-add ~/.ssh/whoisit < /dev/null && cat alarm.au | ssh burly'
or
ssh-agent sh -c 'ssh-add ~/.ssh/whoisit < /dev/null && tar cf - /home/pkeck | ssh burly'
Maybe for that one the corresponding command to "catch" that stream would be:
cat - > ~/backups/pkeck.tar.`date +%Y%m%d.%H-%M-%S`
You get the idea! Go crazy!
Tape examples from Ed Cashin's Gettin' Fancy with SSH Keys, my inspiration for getting into this whole thing!
Saturday, July 4, 2009
Bubble Tea
Lychee Green Tea
Courtesy of Lollicup - Serves 1
Ingredients
2 ounces lychee concentrate available at Asian food stores
1 ounce sugar syrup
1/2 cup ice
13 ounces chilled freshly brewed loose leaf green tea (leaves removed after brewing)
Preparation
Mix lychee concentrate, sugar syrup and ice together. Add in loose leaf green tea. Mix in a shaker for 10 seconds.
Mango Bubble Slush
Courtesy of Lollicup - Serves 1
Ingredients
1/2 cup mango pieces
2 ounces fresh mango pulp (available at The Fresh Market or the ethnic food aisle at local grocery stores)
1 ounce sugar syrup
4 ounces water
2 cups ice
Preparation
Blend mango pieces, fresh pulp and sugar syrup together with water and ice.
Blend on low speed for an icier slush, high speed for a thicker slush. Blend on medium speed for the best result.
Add tapioca pearls (prepared according to package directions) if desired.
Taro Bubble
Courtesy of Lollicup - Serves 1
Ingredients
2 cups water
2 cups chopped taro (available at
local Asian food stores)
1/2 teaspoon salt
1 cup sugar
1 cup low-fat coconut milk
Preparation
Add taro, salt, sugar and coconut milk to boiling water. Let boil for 15 minutes. Cool and refrigerate for an hour or until thickened.
Avocado Bubble Snow
Courtesy of Lollicup - Serves 1
Ingredients
Half avocado
2 teaspoons sugar
1 tablespoon condensed milk
2 cups ice
5 ounces low-fat milk
Preparation
Blend avocado, sugar, condensed milk, ice and low-fat milk on low to medium speed until smooth.
Bubble Tea may seem foreign at first, but it's easier to make than you might think. The tapioca pearls and jellies can be found at Asian food stores. Before adding pearls to a tea, boil them for about 15 minutes and then wash them under cool water.
Friday, July 3, 2009
set up dual network cards
Hacks to Fix Routing Problems on Linux
In working with Linux boxes in the Glued environment, we have had cause to set up a couple of boxes with multiple IP addresses, and the default Red Hat network configuration scripts do not seem to work correctly for what we needed in some of these cases. This document lists the problems seen and some hacks to fix them.
Note: These fixes are called hacks for a reason. While I do use them on several production machines I am responsible for, they are hacks and there is no guarantee of any kind that they will work, or even not make things worse, for you. Use at your own risk. While constructive comments are welcome, and I'll even entertain requests for assistance with these hacks, assistance to others comes after my own work obligations and my free time is severely limited.
Problems encountered, and hacks to fix them
The following list enumerates the problems seen:
- Single box with 2 NICs on different subnets: Only one NIC is visible outside the two local subnets (or possibly from the alternate local subnet).
- Single box with 1 NIC, but multiple IP addresses on the one NIC: AFS/Kerberos issues. Machine will boot, but can only log in on the console, and even then cannot su to root. You get an 'invalid IP address' error when attempting to ksu. (This may also affect boxes with multiple NICs each with a separate IP address all on the same subnet, but I have not tested that situation.)
IP addresses on different subnets
The situation: A single box has multiple NICs in it, each connected to a different subnet (and therefore with distinct IP addresses). For specificity in the following, let us assume it has two NICs, one NICA having an IP address IPaddrA on the subnetA subnet. The other, NICB, has IP address IPaddrB on the subnetB subnet.
The symptoms: All machines on subnetA can see the box using IPaddrA. Similarly, boxes on subnetB can see the box using IPaddrB. I believe you should also be able to see either address ( IPaddrA or IPaddrB ) if on the other subnet ( subnetB or subnetA, respectively), but won't guarantee it. The problem is that outside hosts, not on either local subnet (neither subnetA nor subnetB ), can only see the machine using one of the two addresses, and get no response from the other one.
Specifics: Observed with Glued Red Hat Enterprise Edition v3 for x86 based processors. Mainly seen on one box, a Dell PowerEdge 1650 with dual onboard Intel 82544EI NICs.
My analysis: Let us assume that it is IPaddrA which is visible from the outside world, and IPaddrB that is blocked. What appears to be happening is that both NICs function properly with respect to traffic on their own subnet. IPaddrA functions properly even for hosts not on subnetA; when a machine on some other net tries to contact it, the subnetA gateway sends the packets to NICA, and the response goes out on NICA back to the gateway, with a source address of IPaddrA and a destination of the foreign machine's IP address.
When a machine not on subnetB tries to talk to IPaddrB, things start the same. The subnetB gateway sends the packets to NICB, the linux box decides how to respond, and a response is sent out. However, the response goes out on NICA but with the IPaddrB source address. If the machine trying to be reached is on subnetA, the packets seem to get to the destination and no one complains. But if the packets are for another subnet, the router drops the packets because the source address is illegal for subnetA (as it is IPaddrB which is a subnetB address).
Hack to fix it: In the rc.machine file, use the /sbin/ip command to set up a somewhat more complicated routing scenario with a separate routing table for each subnet. For each subnet, the routing table simply goes out through the NIC if local, or through the NIC to the appropriate gateway if non-local. Then hook these tables into the routing rule based on the source IP address.
For example, if the two subnets are 172.70.12.0/23 and 172.80.24.0/23 on eth0 and eth1, respectively, with 172.70.12.1 and 172.80.24.1 as the gateways, you can do something like
#Set up the first subnet's routing table (we'll name it 70)
ip route flush table 70
ip route add table 70 to 172.70.12.0/23 dev eth0
ip route add table 70 to default via 172.70.12.1 dev eth0
#Set up the second subnet's routing table (we'll call it 80)
ip route flush table 80
ip route add table 80 to 172.80.24.0/23 dev eth1
ip route add table 80 to default via 172.80.24.1 dev eth1
#Create the rules to choose what table to use. Choose based on source IP
#We need to give the rules different priorities; for convenience name priority
#after the table
ip rule add from 172.70.12.0/23 table 70 priority 70
ip rule add from 172.80.24.0/23 table 80 priority 80
#Flush the cache to make effective
ip route flush cache
Physics typically puts this into a file called rc.linux-dual-net-route-hack in the sysconfig tree and calls this script from rc.machine. This seems to work fine, as the primary interface works properly even without the hack, and that is the interface used to communicate with AFS, KDC, etc. servers, so the machine seems to boot OK. The extra bit of network connectivity gained by the other NIC can wait until the rc.machine script gets run.
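To confirm that the rules and tables took effect, the standard iproute2 show commands can be run after the script, for example:
ip rule show
ip route show table 70
ip route show table 80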
Multiple IP addresses, single subnet
The situation: A single box has multiple IP addresses on the same subnet (in observed cases, all on the same NIC, not sure if matters). For specificity, assume it has two IP addresses, IPaddrA and IPaddrB on the subnetA subnet.
The symptoms: The machine boots fully and appears to be up and happy. However, network based logins get denied. It is possible to log in on the console, but even then there are some problems. Most notably, attempting to ksu to root yields an error message about a wrong target hostname or IP address. Basically, pure Unix stuff works, but a lot of AFS/Kerberos related stuff has problems.
Specifics: Observed on a number of Glued Red Hat Enterprise Edition v3 systems for x86 based processors. Systems observed on include a number of Dell PowerEdge 1650s and 1750s. The systems were all using one of the onboard NICs, which were Broadcom NetXtreme BCM5704 and Intel 82544EI Gigabit NICs. In all cases tried, two or three IP addresses were attached to the same NIC. Note: Tried it on a Sun V20 AMD64 box, and the problem was not seen. Not sure why the difference.
My analysis: The presence of multiple IP addresses appears to be causing the system to create a rather complicated route table, with what appears to be 2N-1 default routes, where N is the number of IP addresses. The basic route command does not help much, showing something like:
Destination | Gateway | GenMask | Flags | Metric | Ref | Use | Iface |
subnetA | * | maskA | U | 0 | 0 | 0 | NICA |
default | gatewayA | 0.0.0.0 | UG | 0 | 0 | 0 | NICA |
default | gatewayA | 0.0.0.0 | UG | 0 | 0 | 0 | NICA |
default | gatewayA | 0.0.0.0 | UG | 1 | 0 | 0 | NICA |
The route command does not provide enough information to distinguish these entries, other than that one has metric 1. Using the /sbin/ip route command, we can see a bit more, e.g. entries like:
default via gatewayA dev NICA
default via gatewayA dev NICA src IPaddrA metric 1
I am not an expert at reading route entries, but normally I expect to see a single default route on a subnet, corresponding to the second line above (without the src specification).
What appears to be happening (based on interpretation of above and sniffing the network traffic), is that traffic originating from the host to hesiod or KDC or AFS servers appears to be using the second (or last) IP address as the source address. As the primary machine name is based on the first IP address, kerberos is not happy, and all the kerberos stuff appears to fail.
Hack to fix it: The solution appears to be to delete all the existing default routes and add back a single proper default route. This can be done manually by booting the machine into single user mode, starting up networking, and repeatedly running
route del default
until no default route remains, then adding the correct one.
To fix the problem in a more automated fashion, we run the following in the machine's rc.machine file, or more typically, create a script rc.linux-multi-ips-on-subnet-route-hack in the sysconfig tree and run that from the rc.machine file. The script consists of the lines:
echo "Fixing default route..."
#Get $GATEWAY
. /etc/sysconfig/network
RES=`route | grep default`
while [ "x$RES" != "x" ]
do
route del default
RES=`route | grep default`
done
route add default gw $GATEWAY
echo "default route should be fixed"
Currently, Physics is running this from the rc.machine file (directly or indirectly), and this appears to be working. We need to look into it a bit more and ensure nothing requiring a Kerberos identity is breaking due to the lateness with which this hack is applied.
Wednesday, June 24, 2009
mod rewrite basics
All this and more
---
URL Redirection with Mod-Rewrite
Creating the Rules
The Building Blocks of Mod-Rewrite URL Redirection Rules: Special Characters
Along with regular expressions, mod-rewrite allows for the use of special characters. It's a good thing to understand what these are before you begin writing rules. (Mainly because you need one or more of them in almost every rule.)
RewriteRule tells the server to interpret the following information as a rule.
RewriteCond tells the server to interpret the following information as a condition of the rule(s) that are immediately after it.
^ defines the beginning of a 'line' (starting anchor). Remember, ^ also designates 'not' in a regular expression, so please don't get confused.
( ) creates a variable to be stored and possibly used later, and is also used to group text.
$ defines the ending of a 'line' (ending anchor), and also defines a variable that comes from the RewriteRule (used for variables on the right side of the equation, or to match a variable from the rule in a condition, see example below).
% defines a variable that comes from a rewrite condition. (used for variables on the right side of the equation only, see example below)
* The right side of the equation is everything that follows the $ in a RewriteRule.
Examples: All variables are given a number according to the order they appear, the following rule and condition each have two variables, defined by parenthesis, so to use them you would put them where you need them in the results:
(the '-' is for spacing only to make the line more readable, and is not necessary to use variables.)
RewriteRule ^(var1)/no-var/(var2)$ /to-use-variables-type-$1-and-$2
The final result would look like this:
to-use-variables-type-var1-and-var2
RewriteCond %{CONDITION_STUFF} ^(var1)/no-var/(var2)
RewriteRule ^no-var/no-var/no-var$ /to-use-variables-type-%1-and-%2
The final result would look like this:
to-use-variables-type-var1-and-var2
To use a combination of the Condition and Rule Variables
RewriteCond %{CONDITION_STUFF} ^(var1)/no-var/(var2)
RewriteRule ^(var1)/no-var/(var2)$ /to-use-variables-type-$1-and-%2-$2
The final result would look like this: to-use-variables-type-var1-and-var2-var2
The only exception to the above examples is, you can also use the %{CONDITION_STUFF} in the right side of a rule, but it must appear exactly as in the condition: RewriteRule ^(var1)/no-var/(var2)$ /type-%{CONDITION_STUFF}
|(bar) stands for 'or', normally used with text or expressions grouped with parenthesis (EG (with|without) matches the string 'with' or the string 'without'. Keep in mind since these are inside parenthesis, the match is stored as a variable.)
\ is called an escaping character, this removes the function from a 'special character' (EG if you needed to match index.php?, which has both a .(dot) and a ?, you would have to 'escape' the special characters .(dot) and ? with a \ to remove their 'special' value it looks like this: index\.php\?)
! is like the ^ in a regular expression and stands for 'not', but can only be used at the beginning of a rule or condition, not in the middle.
- on the right side of the equation stands for 'No Rewrite.' (It is often used in conjunction with a condition to check and see if a file or directory exists; see the example below.)
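For instance, a common pattern (a sketch using the standard REQUEST_FILENAME server variable) that leaves requests for real files and directories untouched looks like this:
RewriteCond %{REQUEST_FILENAME} -f [OR]
RewriteCond %{REQUEST_FILENAME} -d
RewriteRule .* - [L]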
Mod-Rewrite Directives for URL Redirection
Directives, in mod-rewrite are what give you the control of the response sent by the server when a specific URL is requested. They are an integral part of the rule writing process, because they designate any special instructions that might be needed. (EG If I want to tell everyone a page is moved permanently, I can add R=301 to my rule and they will know.)
Directives follow the rule and the most often used, are enclosed with [ ] (Not all directives are covered here, but the main and widely used ones are.)
[R] stands for redirect. The default is 302, temporarily moved. This can be set to any number between 300 and 400, by entering it as [R=301] or [R=YourNumberHere], but 301 (permanently moved) and 302 (temporarily moved) are the most common.
(If you just use [R] this will work, and defaults to 302, or temporarily moved)
** Do not use this 'flag' or directive if you are trying to have a 'silent' redirect.
[F] stands for forbidden. Any URL or file that matches the rule (and condition(s) if present) will return FORBIDDEN to anyone who tries to access them. (Useful for files that you would like to keep private, or you do not want indexed prior to 'going live' with them.)
[G] stands for gone. (It's like Not Found, only different.) Not recommended for use yet, this is a newer rule/message (410 code) and many browsers and user-agents, like googlebot do not understand them yet.
[P] stands for proxy. This creates a type of 'silent redirect' for files or pages that are not actually part of your site and can be used to serve pages from a different host, as though they were part of your site. (DO NOT mess with copyrighted material, some of us get very upset.)
[NC] stands for 'No Case' as applied to letters, so if you use this on a rule, MYsite.com, will match mysite.com... even though they are not the same. (This can also be used with regular expressions, so instead of [a-zA-Z], you can use [a-z] and [NC] at the end of the rule for the same effect.)
[QSA] stands for Query String Append. This means the 'query string' (stuff after the ?) should be passed from the original URL (the one we are rewriting) to the new URL.
[L] stands for last rule. As soon as this 'flag' or directive is read, no other rules are processed. (Every rule should contain this flag until you know exactly what you are doing; a combined example follows this list.)
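Putting a few of these directives together, a sketch (the paths and filenames here are made up) of a permanent redirect that also keeps the original query string might look like this:
RewriteRule ^products/([0-9]+)$ /item.php?id=$1 [R=301,QSA,L]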
In an attempt to put together regular expressions and mod-rewrite special characters here are some examples of what they do:
Goal: to match any lowercase words, or group of letters:
Possible Matches: lfie, page, site, or information
Expression: [a-z]+
Explanation: [a-z] matches any single letter. + matches 1 or more of the previous character or string of characters. When you put the two together you have a regular expression that matches any single letter from a to z over and over, until it runs into a character that is not a letter.
Goal: to match any words, or groups of letters, and store them in a variable:
Possible Matches: lfie, Page, site, or InforMation
Expression: ([a-z]+) [NC]
Explanation: Same as above with the addition of () and [NC]. In mod-rewrite, () creates a single variable out of the regular expression, so the word matched is now in a variable. [NC] stands for 'No Case' (from mod-rewrite) and makes it so the regular expression, or regular text strings, match both upper and lowercase letters, so with this expression you can match any single word.
Goal: to match any word, or group of letters, then any single number, and store them in separate variables:
Possible Matches: lfie1, Page2, site6, or InforMation9
Expression: ([a-z]+)([0-9]) [NC]
Explanation: Same as above, except notice there is no + in the number expression. This way only a single number will match.
Goal: to match any word, or group of letters, then any single number, and store them in the same variable:
Possible Matches: lfie1, Page2, site6, or InforMation9
Expression: ([a-z]+[0-9]) [NC]
Explanation: Same as above, except notice the plus is immediately following (no space) the [a-z], but before the [0-9] (again no space), so the + affects the [a-z], but not the [0-9].
Goal: to match any word, or group of letters, then any group of numbers, and store them in the same variable:
Possible Matches: lfie11, Page2, site642, or InforMation9987653
Expression: ([a-z]+[0-9]+) [NC]
Explanation: Same as above with the addition of a + immediately following the numerical expression to match 1 or more numbers instead of only 1.
Goal: to match any word, or group of letters, any group of numbers, and any random letters and numbers, which might or might not be mixed together:
Possible Matches: 11, gPaE, s17ite642, or 2CreateInfo4UisCool
Expression: ([a-z0-9]+) [NC]
Explanation: The change here is to the regular expression grouping. Putting a-z and 0-9 in the same grouping followed by [NC] matches any combination of letters and numbers.
Goal: to match any word, or group of letters, then a single /, then any group of numbers, and store only the numbers in a variable.
Possible Matches: lfie/10, gPaE/1, site/642, or CreateInfoUisCool/2474890
Expression: [a-z]+/([0-9]+) [NC]
Explanation: Using the [a-z]+ without () matches the letters as usual. By putting the / outside of any expression, the only thing that will match is the exact character of /. Then using the ([0-9]+) again, stores any group of numbers in a variable.
Goal: to match anything before the / and store it in a variable, then match anything after the / and store it in a separate variable:
Possible Matches: lfie/10.html, gP..aE/1page_two.file, si&#te/642-your-site, or
CreateInfo/245390.php
Expression: ([^/]+)/(.+)
Explanation: Using two new forms of regular expressions, this is actually easier than it may seem. Making use of the ^ (not) character matches anything that is not a /, and the () again save it in a variable. Then using the same form as above, the single, exact character of / is matched. Finally, the .(dot) character is used, because it matches any single character that is not the end of a line, and when combined with the + character, matches anything up to a line break. Once again () are used to create the variable. *Also, notice the use of 'catch-alls' eliminates the need for the [NC] 'flag' of mod-rewrite.
If this was a full regular expression site, I would continue, but you should have an idea of how regular expressions work, so, time to move on...
Things to Remember About Mod-Rewrite URL Redirection
1. If you are using a condition(s) they always relate to the rule(s) that immediately follow them.
2. Mod-Rewrite will always try to match a URL to a rule before it checks the conditions, so if no rules match, the conditions are never checked.
3. After a URL request matches a rule, and changes are applied, the request is sent back to the main configuration file and treated like it is a new URL request.
(This is the cause of an infinite loop, and with a regular expression and variables, it is sometimes easy to miss. The following examples show very simply how it happens... there are cases where two or three or more rules write to each other and have the same effect.)
Pretend someone wanted to go to your site and a visit a page called 'letters.html', but you wanted to redirect them somewhere else like 'numbers.html':
They request the URL:
http://yoursite.com/letters.html
Your rule catches their URL request:
RewriteRule ^([a-z]+)\.html$ /numbers.html [R,NC,L]
Your rule then rewrites their request to the URL:
http://yoursite.com/numbers.html
Their request starts over like it is a new request for the URL:
http://yoursite.com/numbers.html
Your rule catches their URL request, because you are using a regular expression that catches all letters:
RewriteRule ^([a-z]+)\.html$ /numbers.html [R,NC,L]
Your rule then rewrites their request to the URL:
http://yoursite.com/numbers.html
Their URL request starts over like it is a new request for:
http://yoursite.com/numbers.html
Eventually they see 500-server error, or maximum redirects exceeded, or your server melts and your hosting company calls you and wants to know, why? (Kidding about the server melting, but rewrite rules can be placed in the server set-up and force a restart, to break an infinite loop.)
Keep in mind, this did not happen, because your rule didn't work... It worked too well, over and over and over...
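One common way to break this particular loop (a sketch based on the example above) is to add a condition that skips the rewrite when the target page itself is being requested:
RewriteCond %{REQUEST_URI} !^/numbers\.html$
RewriteRule ^([a-z]+)\.html$ /numbers.html [R,NC,L]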
Tuesday, June 16, 2009
USBConnect Mercury
- Update the file /usr/share/hal/fdi/information/10freedesktop/10-modem.fdi by adding the following lines at the top of the section that matches the device, as shown below:
- Before:
IS-707-A
- After (add the two GSM command-set lines):
GSM-07.07
GSM-07.05
IS-707-A
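In a HAL .fdi file these entries are normally written as appends to the modem.command_sets key; a sketch of what the "after" state typically looks like (the surrounding match element for the Sierra device is assumed and not shown here):
<append key="modem.command_sets" type="strlist">GSM-07.07</append>
<append key="modem.command_sets" type="strlist">GSM-07.05</append>
<append key="modem.command_sets" type="strlist">IS-707-A</append>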
- Restart hal using your distribution's init system (for example, /etc/init.d/hal restart on Debian/Ubuntu or service haldaemon restart on Fedora).
- Insert the Sierra Wireless USBConnect Mercury modem into a USB port of the machine. NetworkManager should prompt you for modem configuration in a few seconds.
AT&T USBConnect QuickSilver
Monday, December 1, 2008
Now to Get 3G Wireless Working (AT&T USBconnect Quicksilver)
If you have Ubuntu running on a PC, here are the steps to successfully get the AT&T USBconnect Quicksilver working. I have to give a HUGE thanks out to Paul at PHARscape for helping me get this working. While Ubuntu 8.10 has a connection manager capable of configuring a wireless WAN connection, it does not support all of the latest USB devices at this time. Have no fear, it is really rather simple to get working now that there are files you can easily install via .deb. So to get started you will want to install a repo to enable access to rezero. You need this because this USB modem has the drivers built into the device. When you insert the USB modem without the use of rezero it will register as a CD and not as a modem. Rezero will switch the device's mode from storage device to modem, which is what you want. Option has made available an updated version of this software called "ozerocdoff", but I could not figure out how to install it and rezero worked just fine for me.
Install Rezero:
Paste this into your list of third-party repositories in synaptic. Located in System -->Administration -->Synaptic.
deb http://ppa.launchpad.net/martijn/ubuntu intrepid main
Remember to click Reload. Then search for rezero and install it. Now if you plug in your USB modem it will no longer be seen as a storage device.
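If you prefer the command line, the same repository and package can be added roughly like this (the repo line is the one given above; the package name is assumed to be the lowercase "rezero"):
echo "deb http://ppa.launchpad.net/martijn/ubuntu intrepid main" | sudo tee -a /etc/apt/sources.list
sudo apt-get update
sudo apt-get install rezero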
Tuesday, June 2, 2009
Debt Avalanche
The Correct Way to Pay Off Personal Debt: The Debt Avalanche
by Flexo
When it comes to mathematics, certain facts are universally agreed-upon. For example, regardless of your culture or educational system, you must agree that one plus one equals two unless you mistakenly fall for an invalid proof. When dealing with money, why are people inclined to believe that one plus one does not equal two?
If you have a certain amount of money available to pay off a portion of your debt each month, even if that certain amount changes, there is a mathematically correct way of paying off that debt. You can call this approach the Debt Avalanche. It is similar to Dave Ramsey’s popular “debt snowball” method, with one small but important detail: With the Debt Avalanche you will pay off your debt faster and pay less total interest to banks and lenders.
The simple calculation for the Debt Avalanche requires only the interest rates for each debt account. This assumes that all debt accounts have the same tax liability, but if that’s not the case, determine your interest rate after taxes for this calculation.
Step 1. Order your debts from highest interest rate to lowest. You may find credit cards at the top of the list. It’s typical to see interest rates from 10% to 20% or more. Credit cards offered by stores often have the highest interest rates, so you might find these at the very top. Watch out for promotional rates ending, which they may do on the date promised when you enrolled, or earlier. Card issuers also re-evaluate their customers every so often, and will not think twice about raising your rates midstream. Note that if your credit improves, they will not magically lower your rates. While lenders will notify you if they intend to raise your rates, you may have missed the notice.
Your mortgage and home equity loan may be the next debts in line. It’s important for your list to capture every debt for which you make a monthly payment. Student loans may be the last on the list, particularly if you qualify for tax credits. The Debt Avalanche formula won’t work properly if it covers only a portion of your debt, so consider all accounts.
Order your list from the highest interest rate (after tax) to the lowest. You may have noticed we didn’t factor in your account balances in the above formula. That is because your individual account balances are irrelevant. The issue solved by the Debt Avalanche is the best way to pay off your total debt with all available funds.
Step 2. Pay the minimum to all debts every month. If you’re writing down your list, or using a spreadsheet like Excel, add a column next to each debt to list its minimum monthly payment. This is the amount you will pay towards each debt, except for the one account listed at the top of the list.
Another column should list the payment due date if it is relatively static from month to month. For example, my credit card payment is due on the last date of almost every month, so I would write “30.” This would indicate to me the last date of every month. Your payments should always arrive before the due date. In fact, in some cases, you can reduce your total interest paid by paying weeks in advance of your due date.
Step 3. To your debt with the highest interest, send all extra available cash. If you have an emergency fund, this step is simple. Since it’s unlikely that you can earn more in savings than you can “earn” (reclaim) by paying off your debt, all your unused income after paying expenses (necessary and discretionary as you see fit) should be dedicated towards the debt account with the highest interest rate.
Step 4. Repeat every month. You cover all your bases by ensuring every creditor receives the minimum payment, but you hone in on only your debt with the highest interest. Once a debt account has been eliminated — and it may not be the account at the top of the list if other balances are smaller — remove it from the list and re-order if interest rates have changed.
It’s that simple. This is mathematically the best method for paying off your personal debt. No other method will get you out of debt faster and save you as much money.
Despite the facts, many people disagree. The primary argument from detractors, i.e. supporters of the "debt snowball" method, is that Dave Ramsey's method will help you pay off your smaller debts faster, providing you with "early success" and possibly the motivation to continue along the path of debt reduction. The Debt Avalanche will also provide early success, but if you need special motivation to continue your monthly payments, consider this: by choosing the Debt Avalanche method, you will pay off your total debt faster, you will pay less interest, and you are mathematically efficient.
That is motivation enough. Or is it?
Dave Ramsey believes his “debt snowball” method, in which debts are paid off in the order of balance from lowest to highest, has shown better results than any other method thanks to “quick wins.” If he were to ask his followers if they want to carry their debt longer and pay more interest throughout before offering the “debt snowball” method, they would choose the faster, cheaper, better option of the Debt Avalanche.
One of the many reasons people can fall into debt is the difficulty of separating emotional thinking from rational thinking. The Debt Avalanche helps separate these two methods of thinking, as the best financial decisions are almost always the rational decisions. But it helps to pay attention to some of the psychology involved, as well.
The possible motivation due to the “early success” aspect of the debt snowball method is cited by many followers to be its strongest point, encouraging debt reducers to continue down the path. Followers of the mathematically and financially superior Debt Avalanche, if they need this sort of motivation, can achieve the same effect by defining milestones.
Rather than “celebrating” when your first full credit card or other debt account is paid off, take note and reward yourself when you’ve paid off your first $1,000 (or $500 or $10,000, whatever is applicable to you). Setting and achieving these short term goals influences the same area of the brain (the mesolimbic system) as the act of paying off the first credit card and are similar enough to provide the same motivational results.
Quick wins may help to motivate debt reducers to continue along the path, but the real win comes in knowing you’ve made the smarter choice.
Thursday, May 7, 2009
Lance Armstrong maneuvering to take over Astana
Lance Armstrong maneuvering to take over Astana
By ANDREW DAMPF
AP Sports Writer
VENICE, Italy — Lance Armstrong says he has "high interest" from sponsors if he takes over control of the crisis-hit Astana team.
"Considering the economy and considering global sports sponsorships, if it's the title sponsor on Tiger's bag, or it's stadium rights. It's a tough climate for all that stuff. We've had high interest," Armstrong told The Associated Press on Thursday as he prepared for the start of his first Giro d'Italia.
Armstrong indicated the sponsor would come from a U.S.-based multinational company.
"You're not going to find one in a week and say, 'by the way we need 10 million bucks, please come on.' They don't jump that quick," he added.
Now accepting guesses for "U.S.-based multinational company":
- any tech companies?
Friday, March 20, 2009
Strongbolt for Cobalt RAQs
http://www.osoffice.co.uk/products/strongbolt.html
Strongbolt
The Strongbolt server is built on the popular CentOS operating system. Coupled with this, the ease of use of a Cobalt RaQ is provided by the Open Source version of the cobalt control panel: BlueQuartz. This combination of security, reliability and 'ease of use' provides a 'class A' web hosting solution.
The Strongbolt server is a ROM-patched, highly tuned operating system inside the sleek blue box that has proven to be reliable and robust for many years. Hosting basic HTML web pages does not require 2.8GHz Pentium 64-bit state-of-the-art hardware; you can run a reliable web server on a Pentium 1 and you would probably not notice the difference in performance!
The appliance integrates the hardware, software, database and development tools needed to deploy applications quickly without any prior server experience. All of these integrated functions translate into an extremely simple system to deploy, use, and manage, all for a total cost well below conventional servers.
The Strongbolt server appliance offers a true plug and play web hosting platform for Internet Service Providers. Millions of web sites are hosted on Sun Cobalt RaQ server appliances worldwide. Furthermore, with an enhanced modular software architecture and a full suite of development tools, the Strongbolt server appliance offers the perfect environment for the developer community to customize and rapidly deploy their applications.