Republished: Managing Change in an IT Environment

Originally published on Benscomputer.no-ip.org Aug 2009.

Whether it's a full network migration, or simply a change of server software, it is essential that any change is properly managed. Some changes occur after the IT department suggests them as the best course of action, whilst some are the result of managerial decisions. Whether it's to cut costs, to streamline the workflow or simply to achieve a more stable system, incorrect management of the change can be disastrous.

When preparing for the change, many people make the mistake of only planning for the change itself. You need to go further than that: plan the entire migration period. Create contingency plans, and ensure that your disaster recovery plans are suitable for use with the new system.

This article assumes that, as part of the change, you are installing a new managed network. If that's not what you are doing, the planning steps are still relevant; it's just that a number of the checks and possible deviations probably aren't.

 


There are a lot of decisions, and questions, involved in planning any change. To show just a tiny fraction of them, written in a basic format, look at the paragraphs below;

If you are putting new workstations onto desks, allot some time to make sure everyone clears space on their desk. Have you got enough power and network sockets to run two workstations side by side? If not, how long will the rollout take? Are you going to be sending workers home? Keep the end-user informed. Create an FAQ for users to read, detailing relevant bits of the new system. Do you really want users coming to find you on rollout day, just so that they can complain that the new PCs have Vista and they're used to XP?

What will you do if it all goes wrong? What could potentially go wrong, and how will you deal with it? There are new workstations being installed but they've not all arrived; which ones are a priority?
One of the network points is faulty: are you going to halt the rollout whilst the socket is fixed, or do you have enough points in the room to use another, at least until the original is fixed?

Data needs to be migrated; users need to cease using their old systems and begin using the new. How are you going to implement this change? If you do it without notifying users, they may update a document whilst you are transferring it. The end result: they lose the changes, and come to you to fix it. Some users may not like the new system and may continue using the old. How are you going to prevent this, at least until the old workstations have been removed?

The block above contains a lot of questions, with no answers. It also contains a tiny fraction of the questions that you would need to ask for any real change. Do you really want your thoughts to look anything like this on the day? Endless questions, with no time even to consider an answer. No, it's far better to do a little planning.

Realistically, there is absolutely no way you can accurately plan a change without some form of paperwork being raised. Bureaucratic though it is, planning your change in writing has a number of benefits;

 

  1. You don't have to keep every little detail in your head
  2. If you are unavailable, someone else can see the plan
  3. If things do go south, you can prove that you exercised due diligence
  4. The company is able to keep a record of how the change was supposed to happen
  5. It reduces stress on rollout day, by reducing each step to a checkbox


But you do need to write the plan properly, and include as many likely deviations as possible. A large part of the plan's benefit comes down to correct formatting. Your plan could be many pages long, and you do not want to have to read the entire thing every time you need reminding of an item on roll-out day.

So we use a logical format, not that different to the plans you made in science class all those years ago.


Title: My Less Stressful Network migration Plan
Document Date: Today
Document Author: Me



Aim: To migrate our Token Ring network, consisting of 80 PCs, to a new Gigabit network with 80 replacement PCs.

To Complete by: 30 Days time.




So, we've now defined what the plan is for, and set a deadline. Obviously the dates should be written as actual dates. Note the very short deadline on this plan: we have 30 days to plan the migration, and to actually do it. This may not seem nearly long enough, but keep in mind that we are not planning the implementation, simply how it will be implemented. If you are at the point where you have a date when replacement PCs will be put in, there should already have been discussions about what suitable replacements would be, where they should go, etc.

So we are assuming that you have laid all the groundwork, know where each new PC will go, and that the new network cabling has been installed. All this plan focuses on is how to get PCs into their locations, and users using the new PCs.

The 30 days includes your roll-out day, so it's wise to add a 'Key Dates' section;


Section 1 - Key Dates

  1. 15 Days Time - New Network Hardware Activated
  2. 20 Days Time - New PCs Delivered
  3. 21 Days Time - Contractor arriving to install PCs
  4. 28 Days Time - Contractor arriving to securely destroy old hardware
  5. 30 Days Time - Contractor arriving to remove Token Ring Hardware

 



This means you can, at a glance, confirm when things are happening. It's worth also including a calendar-type table in this section, so that you can handwrite new milestones into the plan. These need to be entered into your electronic copy as soon as possible, but at least if you have one place to write them, they shouldn't be forgotten.


Once you know when key events are happening, you can start planning for the run-up, the actual date, and any actions arising after the event;

Section 2 - Planning

Date 1 - 15 Days time


Preparation

  1. Ensure that all network points are correctly patched through
  2. Contact Contractor and confirm that there are no delays expected



On the Day

  1. Make contact with Contractor and confirm that all hardware is active, and has been tested



After the event

  1. Check network points are not faulty



Possible Deviations

Deviation 1 - Preparation
  Deviation: Not all points patched through correctly
  Consequences: Not all new PCs will be able to connect to the network from their designated TAP
  Preventative Action: Check patching as planned
  Corrective Action: Re-patch erroneous ports

Deviation 2 - Preparation
  Deviation: Delays expected
  Consequences: Hardware will not be activated on the planned date; if delays are too great, the network will not be active in time for the new PCs and data migration
  Preventative Action: Confirm timescales with contractors; make them aware of the full timeline to highlight the importance of minimising delays
  Corrective Action: If the delay can be rectified in house, do so. If not, contact the relevant parties to try and minimise delays

Deviation 3 - On the Day
  Deviation: Not all hardware active, due to a fault
  Consequences: If the fault is not corrected, substantial sections of the new network may not be active on roll-out day
  Preventative Action: None
  Corrective Action: Liaise with contractors to arrange replacement hardware/repairs to hardware

Deviation 4 - After the Event
  Deviation: Some network points not working
  Consequences: Some PCs will be unable to connect to the network
  Preventative Action: Use a CAT-5 tester to check TAPs as part of the Preparation stage
  Corrective Action: Re-wire/replace network sockets




Create a section like this for each of your key dates, and try to include every likely deviation. It's impossible to include everything, and there's probably very little justification for adding a row explaining what to do if the Contractor is hit by lightning whilst carrying your new PCs. If anything, this would be covered by a row explaining what to do if a terminal is faulty. Let Health and Safety worry about the lightning itself!

In the example we are using, there is one deviation that everyone dreads. This appears on Key Date 3, and you should have an entry in the relevant table something like the one below;


Deviation 1 - On the Day
  Deviation: Management wish to change the planned location of one or more PCs
  Consequences: The initial plan no longer reflects the end result. Contracts raised may specify where terminals are to be installed, so the change may not be possible at this point. A change at a later date could incur a cost
  Preventative Action: Confirm terminal locations with Management at the earliest possible point. Ascertain whether changes in location are possible under the contract. Ascertain the cost of moving a terminal at a later date. Provide Management with both of the above details
  Corrective Action: If no movement is possible, explain to Management that locations have been set in the contract, but that they can move terminal locations at a later date - for a cost. If movement is possible, speak to the contractor to arrange new locations. Keep a note of the changes and enter them into your Project Closure report



There may be a very good reason why the management want a terminal moved (of course the reason may also not be that good in the light of day), so whilst it can feel a little irritating on the day, work with it. You should have planned for the eventuality, so you should already know how to respond. Odds are high that you are going to receive this kind of request, so make sure you plan for it.

These sections are more or less all you need to create a successful plan; you just need to use some careful thought to envisage as many of the possible deviations as possible. Keep in mind that you will need to plan for errors by the contractor, as well as by yourself and others within the company. Keeping your users well informed of the benefits and issues they will face (don't descend into techy speak!) is well advised. Publishing an FAQ on the intranet may seem like a waste of time, but if you make users aware of it, they are less likely to waste both your and your colleagues' time by asking mundane questions.

No-one wants to be answering questions on why they've chosen a specific version of Windows whilst they are trying to install new terminals, or worse, work out what impact managerial changes will have.

Similarly, keep the Management informed: the earlier they can propose changes, the less drastic the consequences are likely to be for your project. Pass them a copy of your planning document; it shows that you are taking the project seriously, and may also answer some of the questions they were wanting to ask.

Unfortunately, not every project can go smoothly, but a lot of problems boil down to bad planning or bad communication. There will always be the risk of a deviation that you could never even imagine happening, but these should be very rare; as a rule, planning can only benefit your roll-out.

Despite how it can feel whilst creating the document, project planning is far from being a waste of time!

New Bug Reporting System online

This content was originally published to benscomputer.no-ip.org

The original system was CGI based and actually looked quite nice. The bastardisation of the interface seen in today's static mirror came much later, after it was rewritten in PHP


Although there have not been many updates posted on the site recently, there has been a lot of work going on in the background. Although there are a number of projects available for download on the site, to date there's been no easy way to report bugs and errors.

This has now been rectified in the form of BUGGER. The functionality is limited at the moment - you need to know the Bug ID in order to load information on it - but improvements are planned in the very near future. The system is currently taking precedence over Claims_DB, as there is obviously very little point in further development until we can receive some feedback on the system.

The name is simple to understand, it's BUGGER as in - The systems broken! Have you BUGGERed it?

If anyone wants to take a look at the various interfaces, there is a bug currently reported in the system with Bug ID 1. You can also access it via this link. BUGGER will have the Captcha Project integrated into its interface very soon, but for the time being there is a basic Captcha-style test.

Republished: Basic Malware Detector For Linux

This was originally published on benscomputer.no-ip.org in Jun 2009

OK, if this is of use to anyone then fantastic!

It's a simple script that will generate MD5/SHA1/SHA256 sums of all files within your PATH. The directory list is based on the PATH variable on my machine at the time of writing; in fact it also checks the sums of my backups (you'll probably want to remove the /mnt/exthd line).



It's simple to use: all you need to do is burn the generated disc image to a CD for use when you check your system. It is based on the idea that you trust the security of your system at the time of generation, and there are a few caveats:


  1. Must be run as root (you can run it as a normal user, but you will get a lot of 'Permission denied' errors)
  2. Won't notice if new executables appear (to be changed at a later date, maybe!)
  3. You must burn the disc image (if you leave it on the system, and it's compromised, the attacker could regenerate your image)


Preparation

There are a couple of steps before you can get the script working. You'll need nothing more than a text editor!

  1. You need to specify the checksum program to use (default is sha256sum)
  2. You may want to change the directories that are checksummed
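
The checksum-generation step can be sketched as follows. This is a minimal illustration of the idea rather than the actual sha_archive.sh; the directory list, checksum program and output path here are placeholders you would adjust:

```shell
#!/bin/bash
# Minimal sketch of the checksum-generation idea behind sha_archive.sh
# (not the script itself - directories and output path are examples).
CHECK_DIRS="/bin /usr/bin"        # directories to fingerprint
CHECKSUM_PROG="sha256sum"         # swappable, as with the real script
OUTFILE="/tmp/sums.txt"

: > "$OUTFILE"
for dir in $CHECK_DIRS; do
    [ -d "$dir" ] || continue
    # Record a sum for every regular file under the directory
    find "$dir" -type f -exec "$CHECKSUM_PROG" {} \; >> "$OUTFILE" 2>/dev/null
done
echo "Recorded $(wc -l < "$OUTFILE") checksums in $OUTFILE"
```

The resulting sums file is what gets packed into the disc image alongside the verification script.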

Usage

Calling just the script, or using --help will display usage options. Despite what is shown, all that is currently supported is

sha_archive.sh --full
sha_archive.sh --help

Using the first will generate a checksum of every file stored within the directories specified within the script, which will then be stored in an ISO image along with the verification script. This should be burnt to a CD immediately.

Upon mounting the CD (to run your check), cd into the mounted directory and run

./Verify_sigs.sh

which will then check all files stored within its database. It will provide you with a prompt before it goes away; read it carefully and then press Enter.
Should any discrepancies be found, they will be piped through less, but the file will remain in /tmp.
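
The verification pass can be illustrated like this - a hedged sketch of the idea rather than Verify_sigs.sh itself (the file names are invented for the demo): check the current files against the stored sums with `sha256sum -c` and keep any discrepancies in /tmp, as described above.

```shell
#!/bin/bash
# Sketch of the verification idea: compare files against a stored sum
# database and keep any discrepancies in /tmp. File names are demo-only.
SUMFILE="/tmp/sums_demo.txt"
REPORT="/tmp/discrepancies.txt"

# Demo setup: checksum a known file so there is something to verify
echo "demo content" > /tmp/demo_file
sha256sum /tmp/demo_file > "$SUMFILE"

# Keep only the lines that are NOT ': OK' - i.e. the discrepancies
sha256sum -c "$SUMFILE" 2>/dev/null | grep -v ': OK$' > "$REPORT"
if [ -s "$REPORT" ]; then
    echo "Discrepancies found - see $REPORT"
else
    echo "All files verified OK"
fi
```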

Bugs

Probably quite a few


Releases

V0.1
MD5 Sum

713f63b9323cfa7453d5aeb279de9b83 sha_archive.sh

Republished: Phorm launches the InPhorm Newsletter

Originally published on Benscomputer.no-ip.org 29 June 2009

In a casual spare moment I clicked onto Phorm's website; once I got past the vomit-evoking mess that is the Webwise Discover advert page, I noticed that there has been a bit of a shake-up since I last visited.

I certainly don't remember them having a newsletter called InPhorm (Will the play on words ever wear thin?) so I figured I'd give it a browse. Once again, Phorm are seemingly desperate to be viewed as no different to Google. Love them or hate them, there's no denying that Google is something completely different. Google isn't talking about putting kit into my ISP in order to analyse almost every aspect of my datastream, and Google is reasonably easy to avoid if you wish.

Read the following tidbit from the newsletter

Undoubtedly the most important development is the unveiling of Webwise Discover. For over a year, Phorm has been in the rather unusual position of being evaluated more on our revenue model than our consumer proposition. It's similar to talking about Google showing you advertising, while leaving out the search part. But no more.

The fact is our technology will allow ISPs to partner with websites to create a unique consumer experience; and Webwise Discover is a perfect example of this. The result of several years development, we have launched a technology that, rather than simply being a better website, has the ability to make all websites better.

Webwise Discover is the ultimate recommendation engine. All you'll need to do to get personalised content automatically is to browse the internet - no boxes to check, no forms to fill in. Just show up at any participating site and it will show you stuff that is right for you. It's the simple way to get a more interesting and useful browsing experience.

One of the major concerns that everyone has with the technology is detailed in that second paragraph. All you have to do to send Phorm your datastream is browse the internet, no horrible checkboxes to tick. Unless, of course, you don't want Phorm knowing which sites you visit and when. Claims of anonymity are fine, but where's the proof? It's all dependent on trust; there's nothing to stop BT taking a peek at everything I visit, but the point is I (just about) trust them not to do it.

I can't say the same for 121Media (let's call a turd a turd, shall we?). How can I trust a company that previously created malware (they claim adware, but it's still a turd) with something so precious as my privacy? Are you willing to allow a company that once allowed its software to be installed onto an unwitting user's computer to analyse your traffic? All in order to serve you 'more relevant' ads? Because let's face it, that is what they are about; anti-phishing comes as standard in most modern browsers, and Discover appears to be something of a short-lived novelty.

So it'll provide you with links to websites that it believes are relevant to you, but aren't you smart enough to find most of them anyway? So far, I've yet to see anything to suggest that the websites it will offer will include any not using the OIX advertising network. So, to put it another way, they will provide a nice polished turd to point you towards sites containing more of their 'more relevant' adverts.

Perhaps I'm wrong, maybe Discover will list websites not affiliated with the OIX network (I expect to see a heavily edited quote on StopPhoulPlay if I am), but it doesn't change the underlying realities. This is a company determined to make money, and their past behaviour suggests that they don't have too many scruples about how they do it. Advertising is their game, and the advertising world is a tough game, it could be just too tempting to remove the anonymity filter from the system. Who would know? Not BT and not us, or at least not until far too late.

Now a fantastically optimistic paragraph says the following

Consumers in the market research we conducted responded with a level of enthusiasm which leaves no doubt as to the reception Webwise Discover will have as it is deployed. As part of our launch activities, we also held an evening reception with many of the top websites in the UK. The response was virtually the same everywhere: we like this and see how it creates value for us.

Which is absolutely great, until you take this into account. That's right, Phorm didn't exactly mention DPI in the survey. Perhaps because we wouldn't understand it, poor cretins that we are. The problem for Phorm is that those who have the basics explained to them don't like the idea. Those who understand DPI on a deeper level hate the idea.

I mentioned recently that I hadn't spoken to anyone who liked the idea of Phorm once it was explained to them; that has now changed. I mention it because honesty is an important part of a balanced argument. This person was not against it, but also wasn't for it. The view was that as the internet was not used very much, she didn't think there were any privacy implications for her. Frankly, that's about the best that Phorm can hope for!

Although it isn't mentioned in the newsletter, Phorm do have a link leading to this article by the ISBA - 'The Voice of British Advertisers' - which calls the EU's actions over Phorm a bit premature (the exact words are 'the EU is getting ahead of itself'). I think this is supposed to be taken as support for Phorm's position, which it is, but it's hardly surprising: an advertising body in support of a company who believe they can increase advertising revenue. Hmmmmm... did not see that coming!

ISBA, the voice of British advertisers, says concerns about the new technology - which can help refine and personalise the advertising content received by online consumers - "can and should be addressed by the UK's successful system of advertising self-regulation."

I'm no expert in the advertising field, but I'd say the issues raised have bugger all to do with the advertising itself. In fact, I'd say the whole debate centres on the underlying technology, the adverts are just the end results. Most people tolerate adverts on the net, they are an unfortunate necessity, but that's not the same as saying it's OK to track our every move on the net.

I'm assuming that the "UK's successful system of advertising self-regulation" refers to the Advertising Standards Authority. How they factor into the debate is unclear; OK, they do deal with the placement of adverts as well as the content, but I doubt their remit extends to the current debate. In fact, I'd go further than that. Being an agency funded by the very people they regulate, I'd say they have absolutely no business making any form of decision about what does and does not get placed into a telephone exchange. I doubt that anyone at the ASA is qualified to understand the technology, and I doubt anyone could believe that there would be no bias in their decision.

The technology has already been given the green light by the Information Commissioner's Office, the UK's data watchdog. And earlier this year, as an example of the strength of the self-regulatory system, the Internet Advertising Bureau, in consultation with industry bodies including ISBA, published its Good Practice Principles for behavioural targeting. Ten businesses have initially committed themselves to the principles, including Google, Microsoft, Platform A, Yahoo! and Phorm.

The fact that the ICO cleared such an unpopular technology is one of the reasons that the EU is involved. You cannot use the cause as a defence against the cause itself; that would lead to a paradoxical world. Whilst the ISBA may have published its Good Practice Principles, it still fails to address the underlying issue. And for those who have forgotten, it is this;

Phorm want to read every page you read, and then make a note of anything of interest therein. They promise not to record that it was you that read it, simply that your number viewed that category.

A world where adverts are worth more money: I don't think anyone can claim that the ISBA doesn't have a vested interest in this one.

I'll post any more of Phorm's astounding tidbits as and when I find them.

ClaimsDB Listener: V0.2

This content was originally published on benscomputer.no-ip.org

To coincide with the new functionality of Claims_DB a new version of the Claims_DB_listener script is available. You can grab it from here.

There are of course some new METHODs available to serve your requests, but the previous functionality remains unchanged. On the TODO list is seeing whether certain transactions can be sped up by means of caching. This is probably quite a way off, given the work involved in making sure that the script only ever uses a cached version when the database response will not have changed.
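
As a rough illustration of the caching problem just described (none of this is in the listener yet, and the file names are invented for the demo), a cached response is only safe to reuse while the table it was generated from is unchanged - which could be detected by checksumming the table file:

```shell
#!/bin/bash
# Illustrative sketch: reuse a cached query response only while the
# underlying table file is unchanged, detected via its checksum.
# All paths here are demo placeholders, not Claims_DB's real layout.
TABLE="/tmp/demo_table.csv"
CACHE="/tmp/demo_cache.out"
STAMP="/tmp/demo_cache.sum"
rm -f "$CACHE" "$STAMP"

echo '"1","Seatpost"' > "$TABLE"

run_query() {
    # Stand-in for the real database read
    grep "Seatpost" "$TABLE"
}

CURRENT_SUM=$(md5sum "$TABLE" | awk '{print $1}')
if [ -f "$CACHE" ] && [ "$(cat "$STAMP" 2>/dev/null)" = "$CURRENT_SUM" ]; then
    cat "$CACHE"               # table unchanged - safe to reuse
else
    run_query | tee "$CACHE"   # run for real and refresh the cache
    echo "$CURRENT_SUM" > "$STAMP"
fi
```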

In order to learn about the new METHODs you'll probably want to read the updated Protocol Docs. You can also view the changelog.

ClaimsDB: v0.2

This content was originally published on benscomputer.no-ip.org

As promised, Claims_DB V0.2 is officially released, there have been a few changes in the latest development. These include;

  • Implementation of new functions
  • Prevention of data clashes in multi-user systems
  • Activation of record deletion function

The system still uses temporary files, but uses its PID to try and make the filename unique. It's still not fantastic at tidying up after itself, but it's getting there. If you want a faster response, it may be worth configuring the system to use a RAM disk as its temporary location.

The temporary file location is now defined in /etc/claims_db.conf, allowing for quick configuration throughout the system.
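
For example, a RAM disk could be mounted with tmpfs and the path in /etc/claims_db.conf pointed at it. The mount point and size below are assumptions for illustration, not values from Claims_DB itself:

```shell
# Sketch only: create a small RAM disk and use it as the temporary
# location. The mount point and size are assumptions.
sudo mkdir -p /mnt/claims_tmp
sudo mount -t tmpfs -o size=64m tmpfs /mnt/claims_tmp
# Then edit /etc/claims_db.conf so the temporary file path reads
# /mnt/claims_tmp instead of /tmp
```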

There are also some minor bugfixes, see the changelog for more details.

This is an incremental release, so please replace the components with the ones listed here. If you have not yet installed the system, please grab the V0.1 tarball and replace its program files with these.

A full release will be made at the next release stage!

As ever, any bug reports are welcome. Please leave them on the Contact Me Page

Files

CLAIMSROOT/bin/read_records_claims.sh
CLAIMSROOT/bin/insert_record_claims.sh
/etc/claims_db.conf
CLAIMSROOT/Docs/Protocol_Documentation


ClaimsDB Listener: V0.1

This content was originally published to benscomputer.no-ip.org


As promised previously, Claims_DB has become net aware. Technically it's still not net aware, but I've created a 'listener' script for it. The basic impact of this is that you will be able to host your database on a separate system to the front-end.

The listener script is actually a CGI script, so you will need an HTTP daemon running on the database server, but with the likes of LightHTTPD using so little memory, it felt a bit pointless writing an actual daemon. There is an argument that running an HTTP daemon on the server unnecessarily provides another attack vector and increases overhead, but it's apples and pears.

I lack the time needed to ensure that any listening network daemon would be secure, so it seems better to let the end-user choose which service they want to run. As long as the HTTP server supports CGIs and you have BASH on the system, this script should work for you.

You can grab V0.1 of the script here.

To understand how to serve the system requests, take a look at the Protocol Documentation

It's being run as a separate project so that I can focus on getting the functionality of Claims_DB up to scratch. It's likely that the listener script will lag behind the main project a little most of the time (although it does currently support functionality that I haven't finished implementing yet!)

Writing a Front End for Claims_DB Part 6 - Closing Notes

This content was originally posted to benscomputer.no-ip.org

Writing a Front End for Claims_DB - Part 5 - gen_report.sh

In this tutorial we have inserted a Database into the Claims_DB system and created a simple front end, users can add records and run a pre-defined query.

There are however a few caveats within this tutorial, and within the Claims_DB system itself.

Because of the use of temporary files within the processing functions of this tutorial, only one user can access your system at a time. Two simultaneous connections could lead to a data clash; this is currently also a problem within Claims_DB itself, and one that is being given a high priority for the next release.

If you are writing front ends for yourself, it is advisable to devise a way to avoid a clash of temporary files within the front-end. This could be implemented by appending the remote hostname to the filename of the temporary file, or simply by loading everything into variables instead.
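
As a sketch of that suggestion (illustrative only, not part of the tutorial's scripts), the temporary filename can be made unique per request by folding in the script's PID and, where the webserver sets it, the remote address:

```shell
#!/bin/bash
# Illustrative sketch: make the temp filename unique per request by
# using the script's PID ($$) and, if the webserver provides it, the
# REMOTE_ADDR CGI variable. Falls back to "local" outside a webserver.
TMPFILE="/tmp/frontend_${REMOTE_ADDR:-local}_$$.tmp"

echo "working data" > "$TMPFILE"
# ... use $TMPFILE here instead of a fixed name ...
cat "$TMPFILE"

# Tidy up behind ourselves
rm -f "$TMPFILE"
```

Two simultaneous requests then write to different files, so neither can trample the other's working data.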

If you can code around this issue, then when the next release of the database engine is made you should have a functioning multi-user system.

You will also note that we did not provide a front-end page for the removal or editing of records. This is because the engine doesn't yet support that functionality very well, although it is improving. It is possible to DROP a record, or edit it, from a front-end, but it is dangerous functionality.

If you require this functionality try something along the lines of the following

#!/bin/bash
#

# Deletes a record from the table - DANGEROUS
#
# Copyright Ben Tasker 2009
# Released under the GNU GPL
# See http://benscomputer.no-ip.org/LICENSE for details

DBNAME="Sample_DB"

# Get your QueryString and store it in the variable COL1
# Store the Column that your QueryString appears in in COLNUMBER
# Best to only use Primary Keys for this code

DBROOT="$DBNAME" TABLE="Catalogue" "$CLAIMS_PROGS"/bin/read_records_claims.sh -query-line "$COL1" $COLNUMBER > /tmp/LINENO.tmp
LINE_NUMBER=$( cat /tmp/LINENO.tmp )
READ_LINE=$( sed -n "${LINE_NUMBER}p" "$CLAIMS_PROGS"/db/"$DBNAME"/Catalogue.csv )

# Match the whole line as a fixed string so only the intended record goes
grep -vFx "$READ_LINE" "$CLAIMS_PROGS"/db/"$DBNAME"/Catalogue.csv > /tmp/CLAIMSTMP.db
cat /tmp/CLAIMSTMP.db > "$CLAIMS_PROGS"/db/"$DBNAME"/Catalogue.csv
rm -f /tmp/LINENO.tmp /tmp/CLAIMSTMP.db

Take good note of the comments, and test any implementation very carefully! It would be very easy to hose the entire table this way. If you were using this method to edit a record, you would then call Claims_DB as if you were adding the record for the first time.

There is a great deal more that you can do; the query-line call above is very useful if you wish to create a form or report to find a record by its Primary Key.

You should also note that the GET method of submitting forms is only really useful for small forms; both browsers and webservers can only accept URLs of a limited length, so large forms, or forms with a lot of input, will not work. Internet Explorer has a particularly short limit, so if you implement using the GET method and the submit button doesn't seem to work, try re-submitting from Firefox or Opera.
For long forms use the POST method, though you will probably find it easier to parse POST data using Perl or PHP.
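
For what it's worth, POST data can be read in Bash too. This is a hedged sketch, not part of the tutorial's scripts: with POST, the webserver delivers the form body on the CGI script's stdin and its size in the CONTENT_LENGTH variable, so URL length limits no longer apply. The simulation at the bottom stands in for what the webserver would do:

```shell
#!/bin/bash
# Sketch: reading a POSTed form body in a bash CGI. The webserver puts
# the body on stdin and its byte count in CONTENT_LENGTH.
read_post_body() {
    head -c "${CONTENT_LENGTH:-0}"
}

# Simulate the webserver environment for illustration only:
export CONTENT_LENGTH=17
BODY=$(echo -n "CRITERIA=Seatpost" | read_post_body)
echo "POST body: $BODY"
```

The body still needs URL-decoding afterwards, exactly as with the GET query string.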

Be aware that long reports can take quite some time to generate; if anyone can find a faster method than the one shown on the previous page, be sure to let me know!

To make these scripts 'live', place them into your webserver's CGI folder, and ensure that they are executable by whatever user your server daemon runs as.

A tarball containing copies of all the scripts we've written today can be obtained from here.


Writing a Front End for Claims_DB Part 5 - gen_report.sh

This content was originally published to Benscomputer.no-ip.org


Tutorial Part 4 - add_process.sh


So far in this tutorial we have created a landing page, and a means for users to add records to the system. What we haven't done yet is create a way for them to retrieve data; worse, we've teased them by putting an option on the landing page that currently does nothing.

Well, now we are going to create the script that is supposed to sit behind that button. So without further ado, let's create our headers:

nano gen_report.sh

#!/bin/bash
#
# gen_report.sh
#
# Part of the Claims_DB Sample Database Frontend
# Generates a report of all stock for a given location on the bike
#
# Copyright Ben Tasker 2009
#
# Released under the GNU GPL
# See http://benscomputer.no-ip.org/LICENSE


# Set yer variables here

CLAIMS_PROGS="/home/ben/programs/Claims_DB/"
DBNAME="Sample_DB"

# Tell the browser our MIME Type
echo "Content-type: text/html"
echo ""

Now we want to grab the Request URI to find out which query criteria the user selected.

# Grab the Query Criteria from the Request URI
REQUEST_URI=$( /bin/env | grep "REQUEST_URI" )

# Separate out the Criteria
QUERYSTRINGA=$( echo "$REQUEST_URI" | sed -n 's/^.*CRITERIA=\([^&]*\).*$/\1/p' )

# Remove URL Encoding
QUERYSTRING=$( echo "$QUERYSTRINGA" | sed "s/%0D%0A/<br>/g" | sed "s/+/ /g" | sed "s/%21/\!/g" | sed "s/%40/@/g" | sed "s/%20/ /g" | sed "s/%26/\&/g" | sed "s/%3B/[SEMICOLON]/g" | sed "s/%22/[DOUBLEQUOTES]/g" | sed "s/%2F/\//g" | sed "s/%5C/\\\/g" | sed "s/%24/$/g" | sed "s/%A3/£/g" | sed "s/%27/\'/g" | sed "s/%23/\#/g" | sed "s/%3A/\:/g" | sed "s/%5B/\[/g" | sed "s/%40/@/g" | sed "s/%5D/\]/g" | sed "s/%25/\%/g" | sed "s/%5E/\^/g" | sed "s/%28/(/g" | sed "s/%29/)/g" | sed "s/%2B/+/g" | sed "s/%2C/,/g" )

Now that we know what the query string was, we can make the query and retrieve the data needed to generate our report.

# Get the Column Headers

DBROOT="$DBNAME" TABLE="Catalogue" "$CLAIMS_PROGS"/bin/read_records_claims.sh -headers > /tmp/REPHEAD.tmp

# Run the Query

DBROOT="$DBNAME" TABLE="Claims" "$CLAIMS_PROGS"/bin/read_records_claims.sh -query "$QUERYSTRING" 2 > /tmp/REBBOD.tmp

# Generate the Title Row

COLT1=$( awk -F\, '{print $1}' /tmp/REPHEAD.tmp | sed 's/\"//g' )
COLT2=$( awk -F\, '{print $2}' /tmp/REPHEAD.tmp | sed 's/\"//g' )
COLT3=$( awk -F\, '{print $3}' /tmp/REPHEAD.tmp | sed 's/\"//g' )
COLT4=$( awk -F\, '{print $4}' /tmp/REPHEAD.tmp | sed 's/\"//g' )
COLT5=$( awk -F\, '{print $5}' /tmp/REPHEAD.tmp | sed 's/\"//g' )
COLT6=$( awk -F\, '{print $6}' /tmp/REPHEAD.tmp | sed 's/\"//g' )
COLT7=$( awk -F\, '{print $7}' /tmp/REPHEAD.tmp | sed 's/\"//g' )
COLT8=$( awk -F\, '{print $8}' /tmp/REPHEAD.tmp | sed 's/\"//g' )
COLT9=$( awk -F\, '{print $9}' /tmp/REPHEAD.tmp | sed 's/\"//g' )
COLT10=$( awk -F\, '{print $10}' /tmp/REPHEAD.tmp | sed 's/\"//g' )
COLT11=$( awk -F\, '{print $11}' /tmp/REPHEAD.tmp | sed 's/\"//g' )
COLT12=$( awk -F\, '{print $12}' /tmp/REPHEAD.tmp | sed 's/\"//g' )
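
As an aside, those twelve near-identical extractions could be collapsed into a single read into a bash array; a sketch (the file path here is a stand-in for the real temp file):

```shell
#!/bin/bash
# Sketch: read the quoted CSV header row into an array in one pass
printf '"SN","Description","Part Location"\n' > /tmp/REPHEAD.demo
IFS=',' read -r -a COLT <<< "$( sed 's/\"//g' /tmp/REPHEAD.demo )"
echo "${COLT[1]}"    # prints: Description
rm -f /tmp/REPHEAD.demo
```

Note that the array is zero-indexed, unlike the COLT1 to COLT12 names above. (Incidentally, only COLT1 through COLT10 are used by the table that follows.)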

We can now start creating the HTML, ready to begin inserting data. You'll notice a reference to a CSS file on this server; it contains formatting that tries to ensure that, if the report is printed, it will not expand over more than one page in width.

# Start Generating the HTML

DATESTAMP=$( date +'%A %d %B %Y' )

/bin/cat << EOM


<html><head>
<title>Parts for location $QUERYSTRING</title>
<link rel="StyleSheet" href="/images/benscomputer_no-ip_org_Archive/stylesheets/print_report_land.css" type="text/css" 
media="print">
</head>
<body bgcolor="white">
<font size="+1"><font size="+12"><b>Parts for location $QUERYSTRING</b></font></font><br>
<br><br>
<font size="+1">
$DATESTAMP</font>
<br>
<hr>

<table style="width: 100%; text-align: left;" border="0" cellpadding="2"
cellspacing="2">
<tbody><tr>
<td style="vertical-align: top; text-align: center;">
<b>$COLT1</b>
</td>

<td style="vertical-align: top; text-align: center;">
<b>$COLT2</b>
</td>

<td style="vertical-align: top; text-align: center;">
<b>$COLT3</b>
</td>

<td style="vertical-align: top; text-align: center;">
<b>$COLT4</b>
</td>


<td style="vertical-align: top; text-align: center;">
<b>$COLT5</b>
</td>


<td style="vertical-align: top; text-align: center;">
<b>$COLT6</b>
</td>


<td style="vertical-align: top; text-align: center;">
<b>$COLT7</b>
</td>


<td style="vertical-align: top; text-align: center;">
<b>$COLT8</b>
</td>


<td style="vertical-align: top; text-align: center;">
<b>$COLT9</b>
</td>


<td style="vertical-align: top; text-align: center;">
<b>$COLT10</b>

</td></tr>
<tr><td colspan="10"><hr></td></tr>
EOM

What we have at this point looks a little something like this;

Parts for Location (whatever the user queried)

Saturday 20 June 2009

SN | Description | Part Location | Part Number | Number Type | Stockist | Qty in Pack | Qty on bike | Rating | Bike

The data will then be placed below this row by row. So in order to achieve this we need to do the following;

# Populate the Table with data, but sort it first

cat /tmp/REBBOD.tmp | awk -F"," '{print $1"," $2"," $3"," $4"," $5"," $6"," $7"," $8"," $9"," $10","}' | sort -t"," -n -k 1 > /tmp/REBBOD.sorted

# Remove the old unsorted temp file
rm -f /tmp/REBBOD.tmp


# Go through Row by Row
while read -r a
do
#Extract each column

COLT1=$( echo "$a" | awk -F\, '{print $1}' | sed 's/\"//g' )
COLT2=$( echo "$a" | awk -F\, '{print $2}' | sed 's/\"//g' )
COLT3=$( echo "$a" | awk -F\, '{print $3}' | sed 's/\"//g' )
COLT4=$( echo "$a" | awk -F\, '{print $4}' | sed 's/\"//g' )
COLT5=$( echo "$a" | awk -F\, '{print $5}' | sed 's/\"//g' )
COLT6=$( echo "$a" | awk -F\, '{print $6}' | sed 's/\"//g' )
COLT7=$( echo "$a" | awk -F\, '{print $7}' | sed 's/\"//g' )
COLT8=$( echo "$a" | awk -F\, '{print $8}' | sed 's/\"//g' )
COLT9=$( echo "$a" | awk -F\, '{print $9}' | sed 's/\"//g' )
COLT10=$( echo "$a" | awk -F\, '{print $10}' | sed 's/\"//g' )

# Generate the HTML

/bin/cat << EOM

<tr>
<td style="vertical-align: top; text-align: center;">
$COLT1
</td>

<td style="vertical-align: top; text-align: center;">
$COLT2
</td>

<td style="vertical-align: top; text-align: center;">
$COLT3
</td>

<td style="vertical-align: top; text-align: center;">
$COLT4
</td>


<td style="vertical-align: top; text-align: center;">
$COLT5
</td>


<td style="vertical-align: top; text-align: center;">
$COLT6
</td>


<td style="vertical-align: top; text-align: center;">
$COLT7
</td>


<td style="vertical-align: top; text-align: center;">
$COLT8
</td>


<td style="vertical-align: top; text-align: center;">
$COLT9
</td>


<td style="vertical-align: top; text-align: center;">
$COLT10

</td>

</tr>
EOM


done < /tmp/REBBOD.sorted

rm -f /tmp/REBBOD.sorted 2> /dev/null
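
One design note: the script uses fixed file names under /tmp, so two users running reports at the same time would clobber each other's files. If that is a concern, mktemp gives each run a unique file; a sketch:

```shell
#!/bin/bash
# Sketch: per-run temporary files via mktemp instead of fixed /tmp names
REPBODY=$( mktemp /tmp/REBBOD.XXXXXX )
echo "demo row" > "$REPBODY"
sort -n "$REPBODY" > "${REPBODY}.sorted"
cat "${REPBODY}.sorted"    # prints: demo row
rm -f "$REPBODY" "${REPBODY}.sorted"
```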

And then finish off

# generate the closing HTML
/bin/cat << EOM
</tbody></table><br>
<hr>
</body>
</html>
EOM

# We're finished!
exit

So the finished result should look something like this;


Parts for Location (whatever the user queried)

Saturday 20 June 2009

SN | Description   | Part Location | Part Number | Number Type | Stockist | Qty in Pack | Qty on bike | Rating | Bike
1  | Somepart      | Location      | 12345       | OEM         | N/A      | 1           | 2           | N/A    | SV650-S
5  | Someotherpart | Location      | 632458      | Stockist    | FavShop  | 2           | 1           | 10W    | SV650-S
14 | Anotherpart   | Location      | 98541652    | OEM         | N/A      | 1           | 4           | N/A    | SV650-S



And so long as the user's browser supports CSS, when they try to print the report it should fit the width of a landscape A4 sheet. They may have to set the orientation in Page Setup though, as some browsers don't seem to support that setting.
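
For context, a print stylesheet along these lines typically relies on the @page rule and a reduced font size. This is a sketch only; the actual print_report_land.css on the server may differ:

```css
/* Sketch: keep the report one landscape page wide when printed */
@page { size: A4 landscape; margin: 1cm; }
table { width: 100%; font-size: 9pt; }
```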

The source for this section can be downloaded from here.




Writing a Front End for Claims_DB Part 4 - add_process.sh

This content was originally published to benscomputer.no-ip.org


In the previous section of this tutorial we created a page allowing a user to insert a record into the database. Now we need to create the script that actually processes the input and inserts the record into the database.

This script will be the first time we have called insert_record_claims.sh.

As ever we need to start by generating our script headers

nano add_process.sh

#!/bin/bash
#
# add_process.sh
#
# Part of the Claims_DB Sample DB Front End
#
# Copyright Ben Tasker 2009
# Released under the GNU GPL
# See http://benscomputer.no-ip.org/LICENSE
#

# Define your variables here

# Where did you install Claims_DB?
CLAIMS_PROGS="/home/ben/programs/Claims_DB/"

# The name of the Database we are using
DBNAME="Sample_DB"



# Tell the browser our MIME Type
echo "Content-type: text/html"
echo ""


This time we need to process the details sent to us. As the form method in the previous page was set to use GET we can grab this from the Request URI.

# Run /bin/env to get environment details
REQUEST_URI=$( /bin/env | grep "REQUEST_URI" )

# Now we need to separate out each of the elements.

# Primary Key
RECORDID=$( echo "$REQUEST_URI" | sed -n 's/^.*RecordID=\([^&]*\).*$/\1/p' )

# Description
DESCRIPTIONA=$( echo "$REQUEST_URI" | sed -n 's/^.*Desc=\([^&]*\).*$/\1/p' )

# Part Location
LOCATIONA=$( echo "$REQUEST_URI" | sed -n 's/^.*Locat=\([^&]*\).*$/\1/p' )

# Part Number
PARTNOA=$( echo "$REQUEST_URI" | sed -n 's/^.*Partno=\([^&]*\).*$/\1/p' )

# Number Type
NUMBERTYPE=$( echo "$REQUEST_URI" | sed -n 's/^.*NoType=\([^&]*\).*$/\1/p' )

# STOCKIST
STOCKISTA=$( echo "$REQUEST_URI" | sed -n 's/^.*Stkis=\([^&]*\).*$/\1/p' )

# Quantity in pack
PPQ=$( echo "$REQUEST_URI" | sed -n 's/^.*PPQ=\([^&]*\).*$/\1/p' )

# Quantity on bike
QOB=$( echo "$REQUEST_URI" | sed -n 's/^.*Need=\([^&]*\).*$/\1/p' )

# Rating
RATINGA=$( echo "$REQUEST_URI" | sed -n 's/^.*Rate=\([^&]*\).*$/\1/p' )

# Bike
BIKEA=$( echo "$REQUEST_URI" | sed -n 's/^.*bike=\([^&]*\).*$/\1/p' )
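
To see the capture pattern in action, here's a worked example with an invented request URI:

```shell
#!/bin/bash
# Worked example of the sed capture used above (the URI is invented)
REQUEST_URI='REQUEST_URI=/add_process.sh?RecordID=7&Desc=Chain+Guard&Locat=Rear'
DESC=$( echo "$REQUEST_URI" | sed -n 's/^.*Desc=\([^&]*\).*$/\1/p' )
echo "$DESC"    # prints: Chain+Guard (still URL encoded at this stage)
```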

Now although we have all the information we need, we probably don't want to insert it into the database yet. The information is currently URL Encoded, and Number type still uses the coding that we gave it on the last form.

We probably could insert it into the database like this and then remove the encoding when we retrieve the information at a later date, but it does mean that should we open the table in a spreadsheet program it's going to be more or less useless to us. Let's be tidy and do the processing now, starting with the number type.

# Translate NUMBERTYPE into the value to be inserted into the database

if [ "$NUMBERTYPE" == "0" ]
then
NUMBERTYPE="OEM"
else
NUMBERTYPE="Stockist"
fi
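
An if/else is fine for two values; if more number types are ever added to the form, a case statement (sketched below) scales more cleanly:

```shell
#!/bin/bash
# Sketch: mapping the form's numeric codes with a case statement instead
CODE="0"
case "$CODE" in
    0) NUMBERTYPE="OEM" ;;
    1) NUMBERTYPE="Stockist" ;;
    *) NUMBERTYPE="Unknown" ;;
esac
echo "$NUMBERTYPE"    # prints: OEM
```
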
Next we'll remove the URL encoding from the variables that are likely to contain it. NUMBERTYPE won't, as we have just set it; PPQ and QOB should both be numeric, so they don't need changing. Everything else could contain spaces or special characters, so let's run each through a list of all the likely possibilities. (Note: each of the following commands should occupy a single line.)

DESCRIPTION=$( echo "$DESCRIPTIONA" | sed "s/%0D%0A/<br>/g" | sed "s/+/ /g" | sed "s/%21/\!/g" | sed "s/%40/@/g" | sed "s/%20/ /g" | sed "s/%26/\&/g" | sed "s/%3B/[SEMICOLON]/g" | sed "s/%22/[DOUBLEQUOTES]/g" | sed "s/%2F/\//g" | sed "s/%5C/\\\/g" | sed "s/%24/$/g" | sed "s/%A3/?/g" | sed "s/%27/\'/g" | sed "s/%23/\#/g" | sed "s/%3A/\:/g" | sed "s/%5B/\[/g" | sed "s/%40/@/g" | sed "s/%5D/\]/g" | sed "s/%25/\%/g" | sed "s/%5E/\^/g" | sed "s/%28/(/g" | sed "s/%29/)/g" | sed "s/%2B/+/g" | sed "s/%2C/,/g" )
LOCATION=$( echo "$LOCATIONA" | sed "s/%0D%0A/<br>/g" | sed "s/+/ /g" | sed "s/%21/\!/g" | sed "s/%40/@/g" | sed "s/%20/ /g" | sed "s/%26/\&/g" | sed "s/%3B/[SEMICOLON]/g" | sed "s/%22/[DOUBLEQUOTES]/g" | sed "s/%2F/\//g" | sed "s/%5C/\\\/g" | sed "s/%24/$/g" | sed "s/%A3/?/g" | sed "s/%27/\'/g" | sed "s/%23/\#/g" | sed "s/%3A/\:/g" | sed "s/%5B/\[/g" | sed "s/%40/@/g" | sed "s/%5D/\]/g" | sed "s/%25/\%/g" | sed "s/%5E/\^/g" | sed "s/%28/(/g" | sed "s/%29/)/g" | sed "s/%2B/+/g" | sed "s/%2C/,/g" )
PARTNO=$( echo "$PARTNOA" | sed "s/%0D%0A/<br>/g" | sed "s/+/ /g" | sed "s/%21/\!/g" | sed "s/%40/@/g" | sed "s/%20/ /g" | sed "s/%26/\&/g" | sed "s/%3B/[SEMICOLON]/g" | sed "s/%22/[DOUBLEQUOTES]/g" | sed "s/%2F/\//g" | sed "s/%5C/\\\/g" | sed "s/%24/$/g" | sed "s/%A3/?/g" | sed "s/%27/\'/g" | sed "s/%23/\#/g" | sed "s/%3A/\:/g" | sed "s/%5B/\[/g" | sed "s/%40/@/g" | sed "s/%5D/\]/g" | sed "s/%25/\%/g" | sed "s/%5E/\^/g" | sed "s/%28/(/g" | sed "s/%29/)/g" | sed "s/%2B/+/g" | sed "s/%2C/,/g" )
STOCKIST=$( echo "$STOCKISTA" | sed "s/%0D%0A/<br>/g" | sed "s/+/ /g" | sed "s/%21/\!/g" | sed "s/%40/@/g" | sed "s/%20/ /g" | sed "s/%26/\&/g" | sed "s/%3B/[SEMICOLON]/g" | sed "s/%22/[DOUBLEQUOTES]/g" | sed "s/%2F/\//g" | sed "s/%5C/\\\/g" | sed "s/%24/$/g" | sed "s/%A3/?/g" | sed "s/%27/\'/g" | sed "s/%23/\#/g" | sed "s/%3A/\:/g" | sed "s/%5B/\[/g" | sed "s/%40/@/g" | sed "s/%5D/\]/g" | sed "s/%25/\%/g" | sed "s/%5E/\^/g" | sed "s/%28/(/g" | sed "s/%29/)/g" | sed "s/%2B/+/g" | sed "s/%2C/,/g" )
RATING=$( echo "$RATINGA" | sed "s/%0D%0A/<br>/g" | sed "s/+/ /g" | sed "s/%21/\!/g" | sed "s/%40/@/g" | sed "s/%20/ /g" | sed "s/%26/\&/g" | sed "s/%3B/[SEMICOLON]/g" | sed "s/%22/[DOUBLEQUOTES]/g" | sed "s/%2F/\//g" | sed "s/%5C/\\\/g" | sed "s/%24/$/g" | sed "s/%A3/?/g" | sed "s/%27/\'/g" | sed "s/%23/\#/g" | sed "s/%3A/\:/g" | sed "s/%5B/\[/g" | sed "s/%40/@/g" | sed "s/%5D/\]/g" | sed "s/%25/\%/g" | sed "s/%5E/\^/g" | sed "s/%28/(/g" | sed "s/%29/)/g" | sed "s/%2B/+/g" | sed "s/%2C/,/g" )
BIKE=$( echo "$BIKEA" | sed "s/%0D%0A/<br>/g" | sed "s/+/ /g" | sed "s/%21/\!/g" | sed "s/%40/@/g" | sed "s/%20/ /g" | sed "s/%26/\&/g" | sed "s/%3B/[SEMICOLON]/g" | sed "s/%22/[DOUBLEQUOTES]/g" | sed "s/%2F/\//g" | sed "s/%5C/\\\/g" | sed "s/%24/$/g" | sed "s/%A3/?/g" | sed "s/%27/\'/g" | sed "s/%23/\#/g" | sed "s/%3A/\:/g" | sed "s/%5B/\[/g" | sed "s/%40/@/g" | sed "s/%5D/\]/g" | sed "s/%25/\%/g" | sed "s/%5E/\^/g" | sed "s/%28/(/g" | sed "s/%29/)/g" | sed "s/%2B/+/g" | sed "s/%2C/,/g" )
All our variables should now be ready for insertion into the database, so let's put them in.

# Insert the record into the Database
COL1="$RECORDID" COL2="$DESCRIPTION" COL3="$LOCATION" COL4="$PARTNO" COL5="$NUMBERTYPE" COL6="$STOCKIST" COL7="$PPQ" COL8="$QOB" COL9="$RATING" COL10="$BIKE" DBROOT="$DBNAME" TABLE="Catalogue" "$CLAIMS_PROGS"/bin/insert_record_claims.sh -insert > /tmp/STATUS.tmp

STATCODE=$( cat /tmp/STATUS.tmp )
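
It's worth pausing on the calling convention here: Claims_DB takes its parameters as environment variables set on the same command line, scoped to that one invocation. A toy stand-in script shows the mechanics (demo_insert.sh is invented purely for illustration):

```shell
#!/bin/bash
# Demonstrates one-shot environment variable assignments with a stand-in script
cat > /tmp/demo_insert.sh << 'EOF'
#!/bin/bash
echo "inserting $COL1 into $TABLE"
EOF
chmod +x /tmp/demo_insert.sh

COL1="42" TABLE="Catalogue" /tmp/demo_insert.sh    # prints: inserting 42 into Catalogue
rm -f /tmp/demo_insert.sh
```

Because the assignments precede the command, they are visible to that process only and don't pollute the calling script's environment.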

Now we are going to generate some HTML to tell the user whether we were successful or not. To save a few lines of code, we'll generate the HTML header and then process the event.

# Generate the HTML header
/bin/cat << EOM
<html><head><title>Claims_DB Sample Database Front End</title></head>
<body bgcolor="white"><center>Claims DB Sample Database Front End - Add Record Stage 2</center><br><br>
EOM

Now let's check whether the insertion was successful. If it was, Claims_DB will have returned the status "SUCCESS"; if it returns anything else, something went wrong.

# Check whether it worked
if [ "$STATCODE" != "SUCCESS" ]
then
# Something went wrong
/bin/cat << EOM
Your record was <b><font color="red">NOT</font></b> inserted into the Database due to an 
error. The status code returned by the database was<br>
$STATCODE<br><br>
Please report this error to your Database Administrator.<br><br>
Sorry for the inconvenience.<br><br>
<a href="/Index.sh">Return to the Index Page</a>
EOM
else
# It worked!
/bin/cat << EOM

Your record <b><font color="green">WAS</font></b> inserted into the database.<br><br>
<a href="/Index.sh">Return to the Index Page</a>
<br><br>
EOM
fi

# Generate the HTML footer
/bin/cat << EOM
</body></html>
EOM

# Tidy up and exit - make sure no errors are sent to the browser
rm -f /tmp/STATUS.tmp 2> /dev/null
exit

With that we have a script that processes the input sent to it from our add_record form. Again, it's not aesthetically the greatest work in the world, but the necessary functionality is there for you to expand upon.

You can download a copy of this source here.

