Simple Automated Twitter Updates

There are several Twitter accounts that offer a regular stream of quotes on a variety of topics; examples are @iwisenet and @tweetsayings. Using a few tools available on my Linux distribution, I’ve created a very simple automated setup for regularly sending tweets from a Twitter account. There are likely a myriad of software products available for doing this a bit more professionally, but I thought some of you might be interested in something you can do from your own PC, assuming you have the appropriate tools and like playing around.

There are only three requirements:

  • A means of storing and retrieving the tweets
  • A means of automated processing
  • A means of posting the tweets to your Twitter account

I used the following tools, which are available in most Linux distributions, but other tools could be used instead.

  • For storing the tweets – MySQL
  • For automated processing – cron and the Bash shell
  • For posting the tweets – the Twitter command-line API and curl

Step One – Set Up a MySQL Table

CREATE TABLE texts (
lastused timestamp NOT NULL default '0000-00-00 00:00:00' on update CURRENT_TIMESTAMP,
usagecount int(11) NOT NULL default '0',
saying varchar(140) collate utf8_unicode_ci NOT NULL COMMENT 'Twitter Status Text',
KEY usagecount (usagecount),
KEY lastused (lastused)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

This table stores the tweets. The column ‘lastused’ is automatically updated by the DB whenever the row is updated. Together with the ‘usagecount’ counter, this gives you the ability to cycle through the tweets when they are retrieved with the appropriate query (see below).
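
To illustrate, here is a minimal sketch of how the table might be filled and how the cycling query works; the sample sayings are placeholders of my own invention:

-- Load a few sample sayings; 'lastused' and 'usagecount' take their defaults.
INSERT INTO texts (saying) VALUES
('The early bird catches the worm.'),
('A journey of a thousand miles begins with a single step.');

-- Fetch the saying that has gone unused for the longest time.
SELECT saying FROM texts ORDER BY lastused, usagecount LIMIT 1;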

Step Two – The Cron Job

30 14 * * * /twitterupdate.sh

This job updates the Twitter status once per day, at 14:30. Vary the frequency by adjusting the crontab parameters.
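
For example, assuming the script lives at /twitterupdate.sh as above, these variations would tweet twice a day or once a week instead:

# Twice a day, at 09:00 and 18:00
0 9,18 * * * /twitterupdate.sh
# Once a week, on Mondays at 14:30
30 14 * * 1 /twitterupdate.sh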

Step Three – The Shell Script

#!/bin/bash
##
## Bash cron job for updating a Twitter status using curl, the Twitter API and a MySQL table
##
## Requirements:
## - cron daemon
## - bash shell
## - curl
## - MySQL
##
## (Note: Twitter Status refers to the actual 'tweet' that is displayed for the account)
##
## Operational Description: The MySQL table contains a listing of status updates for the Twitter
## account. The frequency of the updates is determined by the settings for the cron job, which
## must call this shell script. This script performs the following tasks:
## - Read one status update text from the DB. The status text which has not been used
##   for the longest time is selected.
## - Perform an update on the DB for this record, incrementing the usage count; the 'lastused'
##   timestamp is refreshed automatically by the DB.
## - Send the text to the Twitter API using curl to perform an update of the account status.
##
## ------------------------------------------------------------------------------------------------
##
## Variables
twitterUsername="<twitter user name>"
twitterPassword="<twitter password>"
mysqlDbName="<MySQL DB name>"
mysqlUsername="<MySQL user>"
mysqlPassword="<MySQL password>"
selectQuery="select saying from texts order by lastused, usagecount limit 1;"
updateQuery="update texts set usagecount = usagecount + 1 order by lastused, usagecount limit 1;"
##
## Retrieve Next Text from the DB
## ------------------------------
echo "$selectQuery" > query.sql
statusText=$(/usr/bin/mysql -s --user="$mysqlUsername" --password="$mysqlPassword" "$mysqlDbName" < query.sql)
rm query.sql
##
## Update Text usage counter
## -------------------------
echo "$updateQuery" > query.sql
/usr/bin/mysql -s --user="$mysqlUsername" --password="$mysqlPassword" "$mysqlDbName" < query.sql
rm query.sql
##
## Update Twitter Status (--data-urlencode ensures spaces and special characters survive the POST)
## ---------------------
/usr/bin/curl --basic --user "$twitterUsername:$twitterPassword" --data-urlencode status="$statusText" http://twitter.com/statuses/update.xml >/dev/null 2>&1

This script performs all the steps necessary, and with a little tweaking other tools could be used. It would certainly be possible to do this with a plain text file and eliminate the need for MySQL (see the sketch below), though that would make the script a bit more involved. Another command-line URL utility besides curl is also conceivable, but curl is what Twitter recommended and I already had it, so I didn’t bother looking for alternatives.
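
For the curious, here is a rough sketch of the plain-text-file variant; the file name tweets.txt and the rotation scheme (move the used line to the end of the file) are my own assumptions, not part of the setup above:

#!/bin/bash
## Hypothetical sketch: cycle tweets through a plain text file instead of MySQL.
## Assumes one tweet per line in tweets.txt; the used tweet moves to the end of the file.
twitterUsername="<twitter user name>"
twitterPassword="<twitter password>"
tweetFile="tweets.txt"
## Take the first line, i.e. the least recently used tweet.
statusText=$(head -n 1 "$tweetFile")
## Rotate the file: drop the first line and append it at the bottom.
{ tail -n +2 "$tweetFile"; echo "$statusText"; } > "$tweetFile.tmp"
mv "$tweetFile.tmp" "$tweetFile"
## Post the status exactly as in the MySQL version.
/usr/bin/curl --basic --user "$twitterUsername:$twitterPassword" --data-urlencode status="$statusText" http://twitter.com/statuses/update.xml >/dev/null 2>&1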

Lots of fun …


PHP Installation with IIS

Recently, I had problems installing PHP with the Windows IIS server. I was installing PHP 5.2.1 on a Windows XP machine running IIS 5.1. The PHP website offers the option of using a Windows installer or doing a manual installation using a ZIP file. My recommendation is to use the ZIP file. I had installed PHP using the Windows installer, and indeed everything was comfortable, went quickly and seemed to be OK. Yet when I tried to run a PHP program I had many problems. Many times it worked, but many times I got an HTTP error 500 in addition to many other errors. At times the PHP program would work and then suddenly fail. It was one of those nasty bugs which is not directly reproducible but occurs somewhat randomly. After much googling, I installed PHP using the ZIP file with the php.ini-recommended file as a starting point. And it was like magic: no problems whatsoever. So again: avoid the Windows installer and do the installation manually.
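
For reference, the manual installation boils down to a few steps; the target directory C:\php and the ISAPI mapping are examples of my own choosing, not requirements:

rem Extract the contents of the PHP ZIP file to a directory, e.g. C:\php
rem Create php.ini from the recommended template:
copy C:\php\php.ini-recommended C:\php\php.ini
rem Then point IIS at PHP, e.g. by mapping the .php extension
rem to C:\php\php5isapi.dll in the IIS script mappings.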


The Ajax Caching Trap

A problem that frequently occurs when using JavaScript to perform an Ajax call is that the data returned to the client from the server is not current but cached. A symptom of this effect is that the returned data does not change from call to call but remains constant, in spite of the fact that the data on the server side has indeed changed. This problem occurs when the URL string for subsequent Ajax calls is constant. Some browsers cache the returned contents of an Ajax call; if the URL is the same as in the previous call, an actual request to the server is not performed and the browser returns the cached contents from the previous call.

Another symptom is that this problem persists even when one adjusts the header values returned by the Ajax call. For instance, I had the following header values, which were returned by a PHP Ajax server routine.

header('Content-Type: text/xml');
echo $xml_string;

After much googling, I changed the PHP code so that it returned headers which should have informed the client that the returned data must not be cached but should be re-read every time. The code looked like this.

header('Content-Type: text/xml');
$now = gmdate('D, d M Y H:i:s') . ' GMT';
header("Expires: $now");
header("Last-Modified: $now");
header("Cache-Control: no-cache, must-revalidate");
header("Pragma: no-cache");
echo $xml_string;

Unfortunately, this had no effect; the problem persisted. After spending many hours banging my head against the wall trying to determine what was going on, I came across a solution that uses a clever trick. Perhaps this small tip can spare some of you the agony that I endured. A typical Ajax call in JavaScript to a server routine looks something like this:

xmlhttp.open("GET", "ajaxservice.php?paramtervalue=2", true);
xmlhttp.onreadystatechange=ajaxhandlerfunction;
xmlhttp.send(null);

Even if the data on the server side changes from call to call, some browsers will not return the changed data from the server but rather the cached data stored by the browser on the client. The solution is to add an extra dummy parameter to the URL in the xmlhttp.open call which makes each URL differ from that of the previous call. This makes the browser think that the URL has changed and forces it to actually perform a call to the server and retrieve the updated data. A widely used solution is to append a random number to the URL as a dummy parameter. This has no effect on the server side, since the parameter value is ignored. An example would be the following:

xmlhttp.open("GET", "ajaxservice.php?paramtervalue=2&dummy="+Math.random(), true);
xmlhttp.onreadystatechange=ajaxhandlerfunction;
xmlhttp.send(null);

This solution is not only relevant for the URL in the Ajax call but can also be used in later references to data on the server. In one situation, I had a setup in which the call to the Ajax server changed an image file on the server, so that the file had different contents although the name stayed the same. The client referenced this binary file directly using the following code.

document.getElementById("dynamicimage").src="ajaxbild.jpg";

This approach suffered from the same caching phenomenon as the previous examples of the Ajax call using xmlhttp.open. It was solved with the same technique, i.e. using a dummy parameter to trick the browser into thinking that it must perform a call to the server to obtain the data. The solution is similar.

document.getElementById("dynamicimage").src="ajaxbild.jpg?"+Math.random();

Computers Never Get Faster

Back in the digital Mesolithic era (i.e. the 1980s), I can remember very frustrating times waiting for the computer to do its thing: painful things like performing calculations, reading floppies and, most of all, compiling programs. Many times I would hit the return key and go off for a coffee break. Who hasn’t done this?

One of the funniest software advertisements I have ever seen was a picture of a skeleton covered in cobwebs sitting in front of his computer monitor. On the monitor the message read "compiling, please wait ..."

But wait a minute, that was back in the 80s. Those were the times when 8 MHz processors and 10 MB hard drives were top of the line. The PCs of the current generation outperform those old museum exhibits a thousand-fold, don’t they?

My answer is NO!!! I repeat NO!! In fact I assert that in the last twenty years the speed of the average PC has changed very little.

To show that I have not flipped my lid, I’d like to explain why this is so. The key is in how one defines “speed”. In the virtual (i.e. non-real) world, computer speeds have been constantly increasing, and this trend will presumably hold for the foreseeable future. At the same time, back in the real world, the average user sits and waits just as long for his/her computer to do its thing as he/she did twenty years ago. Back in Mesolithic times I would turn on my PC in the morning and then go get a coffee and a piece of toast. By the time I got back upstairs, the computer was finishing up its booting process and I could get to work. Today I DO THE SAME THING. The computer is faster, it can do more and it does do more, but in terms of real time, that is actual minutes and seconds, there has been little increase in speed.

Why is this so? The technical answer is that the computers of today are doing much, much more than their counterparts from the past. Back then, ASCII displays were the standard: printing a line on the monitor amounted to moving a string of bytes into a specific area of memory. Nowadays things are much more complicated. The human language has to be determined. A font has to be found. The size and weight of the font must be calculated, the resulting image must be transferred to the graphic display, and so on and so forth. The computer of today has much more to do than the computer of old. Fortunately, the computer of today is powerful enough to do the extra work without too much trouble. However, in the end, the net effect is zero: the amount of time the user waits for the computer is roughly the same. Back then I waited for the floppy disk to be read; now I wait for the Internet to respond. Back then I hit page down and the next screen showed up in the blink of an eye; now I hit page down and I can see the computer redrawing all the graphical elements on the screen.

“But wait a minute,” you say, “you are comparing apples with oranges.” Am I really? Of course, if the computers of today were restricted to the tasks that computers performed twenty years ago, they would be faster almost beyond comparison. But that’s not my point. The real answer to my assertion lies in the psychology of mankind, or, to put it another way: “How much slowness can the average user tolerate?” When a computer is “doing its thing”, at what point does the user throw up his hands and say “I’ve had enough”? I assert that this measure of human patience has not changed in the last twenty years. When the user hits the return key or clicks the mouse, the internal human clock starts ticking, and at a certain point the human gets restless. This marks the boundary of acceptable computer speeds. If the computer finishes “its thing” beforehand, everything is OK. If it requires more time, then it is slow, and people will use it reluctantly.

Software engineers intuitively know where this boundary lies. After all, software engineers are also human [at least most of them ;)]. The development of hardware and software over the last twenty years follows a general pattern. As hardware speeds increase, software engineers recognize that they can now do more things before the user gets frustrated, so they pack more processing into the software. At some point they do too much and the computers are once again too slow. So the hardware people make a faster chip, or enable more memory, or make a faster hard drive, etc. This increases the speed of the computers once again. And once again the software people get greedy and pack more processing into the software. The whole process keeps repeating itself.

The speed of computers is not measured in MHz or GHz but in minutes and seconds. It is measured by the level of human patience: how much time is the average user willing to wait for the computer to do its average thing? Whatever this value may be is not important. The point is that it has not changed in the last twenty years. And because this value has not changed, the speed of computers performing an average task has also remained stable.
