Privacy.

Today there have been attention-grabbing headlines in a number of news outlets. One of these, from the Independent, was “WhatsApp and iMessage could be banned under new surveillance plans”. The article outlined the possibility that technologies and applications such as WhatsApp would be banned because they allow users to send messages which are encrypted end-to-end. This falls in line with the new legislation that was rushed through during 2014, and the continuing loss of privacy that we have online.

One quote the article put heavy emphasis on, and which has in turn been picked up by several other news outlets, was as follows:

In our country, do we want to allow a means of communication between people which[…]we cannot read?

My initial urge was to get angry at how patently wrong it is to connect encryption and privacy to terrorism and violence. But then I decided to listen to the full comment from Cameron, rather than the paraphrased version. The full quote is as follows:

In our country, do we want to allow a means of communication between people which, even in extremis with a signed warrant from the home secretary personally, we cannot read?

It’s not much better, but it’s also not as bad as the original quote sounds. The issue is this: I can’t say that I want terrorists to be able to plot attacks on innocent people, but I don’t believe this is the way to stop them. The key issue is the assumed link between terrorism “winning” and the state not having access to the content of every single communication made anywhere in the UK. It’s simply a fallacy, and letting the PM say it unopposed would mean accepting it as truth and allowing the erosion of our online privacy to accelerate.

Heavy Handed

Taking away everyone’s ability to access a completely private form of communication is a heavy-handed tactic which, as I’ve said before regarding government views on online freedom and privacy, won’t actually work. It is not possible to stop anyone from encrypting the communications that they send. It may be possible to stop a company from profiting by offering this type of service, thereby taking it away from the common user, but it is not possible to stop people from encrypting their own messages.

The types of people that really want communication which is encrypted end-to-end will be able to access it regardless of the law. Included in that user base are those that want to discuss illegal activities. It’s not difficult to find out how to set up a method of encryption such as PGP, and the active online community will no doubt offer a great deal of help to anyone that’s stuck.

The Glowing Record of Piracy Laws

Further, piracy laws are always a hot topic and probably a good example to learn from. They’re now failing so spectacularly that the list of “most pirated shows of the year” is reported and celebrated. This year Game of Thrones hit the top of the list for a third year in a row after being illegally downloaded at least 8.1 million times. Guess who lost out and weren’t able to enjoy their favourite TV show with everyone else: paying customers in both the UK and the US. Now guess who were able to enjoy it ad-free, only minutes after it finished its first airing in the US: those pirating the episode from around the world.

In the same way, a law stopping completely encrypted, backdoor free communication would simply make the majority of online users more vulnerable to having their personal communications leaked to the public. 2013 and 2014 have been years where, more than ever, it’s clear that we don’t need to increase the likelihood of it happening.

Back to Work

To wrap up my rambling (and procrastination), I will simply conclude that, while I know that giving up our privacy isn’t the right way to help the authorities deal with terrorism, I’m not entirely sure what is. I’d imagine the best solution will involve far more general knowledge of technology and computer security in the UK government. The hackers and cyber criminals of the world are using social engineering, vulnerabilities in code and brute-force attacks to get what they want, and it’s working. Maybe trying something that works as well as the criminals’ methods would be a good place to start.

Lessons learnt today: Double quotes, redirection and rubbish routers

I haven’t posted here in a long while, and no doubt that will continue, unfortunately. However, tonight I’ve learnt (or in some cases re-learnt) a few simple lessons, and it felt sensible to note them down so I can remember them in the future.

Double Quotes vs. Single Quotes in PHP

This was a massive rookie error and a sign that I haven’t worked in PHP much over the past year.

While tightening my database security, I ended up patching some dodgy database-related PHP code in a particularly old project. I spent nearly half an hour trying to work out why my password-protected database user was being denied access to the database.

After a bit of debugging, I noticed the password was being cut off in the middle. After further debugging and tracing the string back, I noticed that my randomly generated, double-quoted password string happened to have a ‘$’ character in it.

PHP (among other languages) tries to resolve variables within double-quoted strings, meaning “abc123$efg456” resolves to “abc123” if the variable $efg456 doesn’t exist in your script. The solution was simply to exchange the double quotes for single quotes.
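
As a minimal sketch of the problem (the variable name here is hypothetical, $efg456 is never defined):

<?php
$pass1 = "abc123$efg456"; // Double quotes: PHP interpolates the undefined
                          // $efg456 as an empty string (with a notice),
                          // leaving just "abc123".
$pass2 = 'abc123$efg456'; // Single quotes: no interpolation, the literal
                          // string "abc123$efg456" is preserved.
echo $pass1; // abc123
echo $pass2; // abc123$efg456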

Lesson: If you’re working in a language which treats double and single quoted strings differently, check you’re using the right ones!

.htaccess redirection

.htaccess always ends up leeching away my time. This time I was trying to set up some redirects to treat a sub-directory as the root directory, but only if the file or directory didn’t exist in the root directory and did exist in the sub-directory.

This is simple enough if you know what the .htaccess variables mean, but in using examples and making assumptions I tripped myself up. So here’s the bit I learnt:

%{REQUEST_FILENAME} – This isn’t just the filename that was requested, but the full local filesystem path to the file matching the request.
%{REQUEST_URI} – This is the path from the request itself, relative to the site root (with a leading slash).
%{DOCUMENT_ROOT} – This is usually the filesystem path up to the root directory of your site (though I’m quite sure this is not always the case).

So given the path “/a/file/path/to/a/website/index.html”:

%{REQUEST_FILENAME} = /a/file/path/to/a/website/index.html
%{REQUEST_URI} = /index.html
%{DOCUMENT_ROOT} = /a/file/path/to/a/website

Simple when you know, but confusing otherwise! In any case, here’s the resulting rule I cobbled together:

RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{DOCUMENT_ROOT}/other%{REQUEST_URI} -f
RewriteRule ^(.*)$ /other/$1 [L,QSA]

That won’t suffice if you need directories to work as expected, and it will only apply to files, but it’s as much as I need for now.
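
If directories did need to work too, a variant like the following should be a starting point. This is untested, just a sketch extending the rule above with an extra -d condition (and directory requests may still need trailing-slash handling):

RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{DOCUMENT_ROOT}/other%{REQUEST_URI} -f [OR]
RewriteCond %{DOCUMENT_ROOT}/other%{REQUEST_URI} -d
RewriteRule ^(.*)$ /other/$1 [L,QSA]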

Lesson: Don’t assume things, especially when dealing with something as powerful as .htaccess files. The more you know and use it, the less of a pain it will be.

NAT Loopback and remote IPs not working locally

Having acquired a new domain name today, I decided to put it to work as a domain for my home server (with help from no-ip). Having set it all up, I came across a peculiar scenario: I was able to access the machine remotely with the domain (the outward-facing IP), and I was able to access the machine locally with the local IP address, but I was unable to access the machine locally with the public IP or domain name.

In a few minutes I realised that this was not so peculiar at all. Network Address Translation (NAT) rules decide where inbound requests should go when they hit the router, and I have my router set up to allow certain connections to forward through to my server. However, these rules don’t apply to requests which pass through the router on the way out. I’d only be guessing, but I’d imagine this is because responses to requests across the Internet would otherwise have these rules applied to them as well, completely breaking the network.

To solve this issue, many routers offer NAT loopback, a feature (or further rule) which correctly resolves requests made from inside the network to the router’s public IP. It is commonly turned off due to security concerns, or may simply not be available on some routers.

Unfortunately, my Huawei HG533 router falls into the latter group, with no obvious plans of an upgrade which would fix this.
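
In the meantime, one workaround I’m aware of is a hosts-file override on each local machine, pointing the domain straight at the server’s local IP so local requests never need to loop through the router (the name and addresses below are hypothetical):

# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on Windows)
# Send the public domain to the server's LAN address when on the home network.
192.168.1.50    myhomeserver.example.com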

Lesson: If you want to use one address to access a machine locally and remotely, ensure NAT Loopback is set up.

All simple stuff, but it’s been interesting learning about it all. Hopefully I can continue documenting the small things like this over the next year, my final year of university should be a steep learning curve!

A “filtered” Internet

The recent news has featured two main stories, both of which have already been discussed to death. There were the futile efforts at keeping the royal baby story going while Kate had the baby, recovered and walked outside with him in her arms, and there was the huge trumpeting of Internet safety from David Cameron. The latter, which the Prime Minister set out as a way of stopping child abuse and guarding children from viewing adult content in one fell swoop, irritated me quite a lot. There were a number of reasons for my irritation…

TL;DR

  • Blacklisting search terms doesn’t stop any target audience from finding the adult or illegal material.
  • Parental controls at the ISP level are good, but the way they’re implemented should be regulated and the way they’re used should be determined by the account holder.
  • Parents don’t want their children to see this damaging content, so tell them how to guard against it.
  • Mainstream media is just as damaging as hard-to-find, extreme material online.
  • Aiming the blame at tech companies is lazy, and doesn’t attack the root cause.

Paedophiles don’t Google “child abuse”

Mr Cameron also called for some “horrific” internet search terms to be “blacklisted”, meaning they would automatically bring up no results on websites such as Google or Bing.

– BBC Technology News

This call for search companies to maintain a blacklist is actually popular with a lot of people, on the basis that “if you can’t Google it, it won’t be there”. The problems with this method are pretty easy to see, while its effect is pretty minimal:

  1. Those who actually want to find the horrific parts of the Internet certainly won’t be using Google in the first place, and even if they were, they wouldn’t be using any of the terms on that blacklist.
  2. Kids aren’t just going to accidentally stumble upon the keywords and innocently type them in; it’s not something that grabs their curiosity.
  3. Google and the other search engines already have filtering and guarding against sites with adult themes.
  4. Because neither of the target audiences will be affected by the change, the only loss is to researchers or those with genuinely innocent searches which somehow fall into the blacklist.
  5. Finally, blacklists are easy to get around: by mis-spelling words, or using acronyms or replacement words, sites would simply target keywords which couldn’t be blacklisted.

The “accidentally stumbling” scenario which is often painted is also well guarded against nowadays, with further, deliberate steps being required to access content which is deemed inappropriate for some audiences. That is already provided for: many companies have whole departments devoted to ridding their services of this type of content, or at least flagging it as such to the user.

ISPs are not your babysitter

He told the BBC he expected a “row” with service providers who, he said in his speech, were “not doing enough to take responsibility” despite having a “moral duty” to do so.

– BBC Technology News

The blame was also passed on to ISPs because they weren’t doing enough, with Mr Cameron insisting that they filter the Internet service which they provide. This is where many who care about freedom and worry about censorship begin to get panicky because:

  1. What organisation is maintaining the filter, deciding what is adult content and what is not? Is it down to the ISPs? Is it government enforced? My bet is that it won’t be a free, open body that decides these things, unlike the rest of the Internet.
  2. There is a worry that these filters will be leaned on too much, rather than parents learning about the service they have and how to control it so that it is child-friendly in their opinion. Why is it the ISP’s job? You won’t find Freeview users being asked whether they’d like adult content on their TVs; there are instead tools which each end-user sets up to filter the service at their end.
  3. For many, including myself, there is a sense that, despite not viewing any of the adult content which would be blocked, the restriction on my service is unwelcome and quickly makes it feel like a policed, monitored service is being provided, instead of the fully-fledged, free Internet that we currently use.

On speaking to my family about this point, it was quickly mentioned that “It’s just like any other law, you wouldn’t complain about not being allowed to drink and drive”. This argument is immediately flawed, because I still have the ability to have a few too many beers, walk out to my car and drive it; there isn’t a breathalyzer test which I must take before driving, or a guard who won’t allow me to get behind the wheel when I’m over the limit. I still have the ability to commit the crime.

To make this point worse, after the announcement from the Prime Minister, many ISPs came out and described the parental tools already supplied with their service, alluding to the fact that David Cameron’s jubilance was simply taking credit for work that had already been done.

For the record, I’m actually all for parental controls at the service level: they provide service-wide coverage for the filtering of websites, which means most children will be unable to get past the filters (just as most mobile networks impose filters, as do all schools and colleges), especially if the filtering includes proxy servers and websites. However, it certainly doesn’t stop anyone who is over 18 from removing the filters and accessing illegal content online. The idea is also to apply the filters automatically, going over the parents’ heads and assuming their computer illiteracy, which leads me on to…

Work with parents, not for them

Most parents are probably quite concerned at the thought of their child being exposed to adult content online. Many will actually ensure that a child’s Internet use is supervised at an early age, and any filtering which the parents should be applying to their Internet connection is not absent through choice. Technology companies and the government (as they’re so interested) should be working with parents, providing support, training and user-friendly tools, rather than just applying a blanket filter to all their account holders and patronising them. Clearly this is happening to some extent already, but the level of involvement could certainly do with being raised.

As much as many people protest, the parents do have a huge responsibility when they allow their children to use the Internet, just as they do in not allowing them to watch films with an age rating which is too old for them or watch TV channels which are specifically intended for adults. They should be provided with the necessary information to fulfil this responsibility, but otherwise it should be up to them.

Realise that mainstream media isn’t squeaky clean

The other point that has annoyed many people is the fact that much of the mainstream media now has strong sexual, violent or addiction promoting themes, which have been entirely overlooked in this review.

Examples are too numerous to count, but obvious ones can be found in most tabloid newspapers, gossip magazines and adverts (printed, televised or digital). These slightly softer images are just as damaging as the more extreme images on the Internet, because they sit in publications or media channels alongside everyday information, normalising them and sending similarly bad messages to children. If the Internet is going to be filtered, the filters already in place in other forms of media should be reviewed as well.

The tech companies don’t produce the illegal content

The final, and most concerning, thing is that the government seem to be aiming their efforts at the wrong end of the whole issue.

  1. The ISPs provide a service, plain and simple: they serve up whatever is available on the Internet – that’s what they’re meant to do.
  2. The search engines are tools which find relevant information on the Internet in relation to a selection of keywords entered by a user – that’s what they’re meant to do.

They do not produce illegal material, nor do they promote it. Asking them to do something about that material is already too far along the line, because by then the illegal act has occurred, been filmed or in some way documented, uploaded publicly, and is then accessible to anyone that knows where and how to get the content.

It’s like those weed killer adverts: the competitor product always just kills the weed at the surface, not at the root. The material can easily be hidden from view, but that doesn’t stop the root cause.

What really needs to be done is further investigation, with more of a focus on finding those that cause harm to others for personal gain. Unfortunately, Mr Cameron’s budget cuts are actually doing the complete opposite – that’s what makes me quite so irritated.

 

The whole issue is huge, and calls into question the privacy of Internet users and the freedom of the Internet itself, as well as the degree to which the government are misguided whenever the word “technology” is mentioned. What David Cameron has suggested will be to the detriment of normal Internet users, will bastardise the Internet, and will completely fail to achieve most of the aims he has set out.

Stub: 2013 Week 11 & 12

A stub is a short article which rounds up little bits of information that I’ve found throughout the week. These may be web or computer related, or they may be more general things. It’s more a personal log than an actual article, reminding me of things that I may’ve forgotten, but some of it may be of help to someone else!

Looks like I’ve fallen behind with my posts! The year is certainly starting to become busy! So I’d better catch up on the past two weeks!

This er…fortnight:

Stub: 2013 Week 10

A stub is a short article which rounds up little bits of information that I’ve found throughout the week. These may be web or computer related, or they may be more general things. It’s more a personal log than an actual article, reminding me of things that I may’ve forgotten, but some of it may be of help to someone else!

As is always the way, this week has been far busier than the week before it!

This week:

Stub: 2013 Week 9

A stub is a short article which rounds up little bits of information that I’ve found throughout the week. These may be web or computer related, or they may be more general things. It’s more a personal log than an actual article, reminding me of things that I may’ve forgotten, but some of it may be of help to someone else!

A slightly late Stub article for the week! March is here! It’s been a busy weekend, and a brighter one – Spring is on its way!

This week:

Stub: 2013 Week 8

A stub is a short article which rounds up little bits of information that I’ve found throughout the week. These may be web or computer related, or they may be more general things. It’s more a personal log than an actual article, reminding me of things that I may’ve forgotten, but some of it may be of help to someone else!

This week has been a particularly productive one, but the fact that it’s already the last week in February is a little startling!

This week:

Stub: 2013 Week 7

A stub is a short article which rounds up little bits of information that I’ve found throughout the week. These may be web or computer related, or they may be more general things. It’s more a personal log than an actual article, reminding me of things that I may’ve forgotten, but some of it may be of help to someone else!

Not sure where the week has gone! Valentine’s has come and gone already!

This week:

Refactoring Code in to an Object-Oriented Paradigm #6: Compatibility

This is article number 6 in a series of articles which documents my refactoring of some code I originally wrote (pretty poorly) last year. To get the gist of what’s going on, check the original post – Refactoring Code in to an Object-Oriented Paradigm.

Making Things Work for Everyone

We’re nearly finished with our refactor: we’ve added some great functionality and made things a lot cleaner and more efficient… that is, for the browsers that can run our code. The problem is that, while cross-browser compatibility has got a lot easier, it’s still an issue. It’s only made worse by the legacy versions of IE which are still clinging on, and until the majority of IE users are on IE10, we’re going to keep seeing large differences or deficiencies for those users.

Because our code is quite simple, there aren’t a huge number of compatibility issues, but there are a few. These issues are also quite common when using Javascript, so we’ll sort them out, explaining the whats and whys of each solution.

Event Handlers

I mentioned in the last article how I’d “continue side-stepping the compatibility issues which exist” regarding native event handling. The issue with native event handling is that there are two flavours that are definitely required and another that should be used to be safe. That doesn’t make for nice, quick code. But I wrote the code in the last article with extensibility in mind, so we can simply add compatibility for other browsers without affecting the rest of the code. Here’s our original code…

/** Set a native event listener (eventually regardless of the browser you're using).
*
*   @param eventElm The element to attach the listener to.
*   @param eventType The type of event to listen for.
*   @param callback The function to call when the event happens.
*/
AutoScrobbler.prototype.setNativeListener = function(eventElm, eventType, callback) {

    eventElm.addEventListener(eventType, callback, false);

}

Conveniently, the MDN page for this has a bare-minimum shim documented, and details older methods of event handling as well. So we’ll use this as a guide on how to add compatibility to this function.

/** Set a native event listener regardless of the browser you're using.
*
*   @param eventElm The element to attach the listener to.
*   @param eventType The type of event to listen for.
*   @param callback The function to call when the event happens.
*   @return Undefined if successful, otherwise an exception will be thrown.
*/
AutoScrobbler.prototype.setNativeListener = function(eventElm, eventType, callback) {

    if (eventElm.addEventListener)
        eventElm.addEventListener(eventType, callback, false);
    else if (eventElm.attachEvent)
        eventElm.attachEvent('on' + eventType, callback);
    else {
        //Fall back to the very old element.onevent style of handler.
        eventElm['on' + eventType] = callback;
        if (typeof eventElm['on' + eventType] == "undefined")
            throw "EventHandler Error: The event could not be set";
    }

}

This version includes far more browser support, and goes a step further in that it will even support the very old style of event handler. However, it does currently assume that every event recognised by addEventListener is simply an unprefixed version of the events recognised by both attachEvent and the older element.onevent style handler. This is certainly not true in some cases; for instance, the “textInput” event isn’t supported at all by attachEvent – the closest you can get to this event is “onpropertychange”. If we wanted this level of support, we would need an exceptions list which changed these values based on which event handler was to be used. However, it works for simple ‘click’ and ‘mouseover’ events.
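
As a rough sketch of that exceptions-list idea (the mapping below is just illustrative, based on the example above):

//Hypothetical exceptions list mapping standard event names to the nearest
//equivalent understood by the older handler models.
var eventExceptions = {
    "textInput": "propertychange"
};

function legacyEventType(eventType) {
    //attachEvent and element.onevent handlers expect the "on" prefix.
    return 'on' + (eventExceptions[eventType] || eventType);
}

We can also add a complementing removeNativeListener function too.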

/** Remove a native event listener regardless of the browser you're using.
*
*   @param eventElm The element that had been listening.
*   @param eventType The type of event that was being listened for.
*   @param callback The function that was set to trigger.
*   @return Undefined if successful, otherwise an exception will be thrown.
*/
AutoScrobbler.prototype.removeNativeListener = function(eventElm, eventType, callback) {

    if (eventElm.removeEventListener)
        eventElm.removeEventListener(eventType, callback, false);
    else if (eventElm.detachEvent)
        eventElm.detachEvent('on' + eventType, callback);
    else {
        //Clear the old element.onevent style of handler.
        eventElm['on' + eventType] = undefined;
        if (typeof eventElm['on' + eventType] != "undefined")
            throw "EventHandler Error: The event could not be removed";
    }

}

This should now provide us with the support we need for any event handling in the code.
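
For example, attaching and then removing a simple click handler would look something like this (a sketch assuming the autoScrobbler instance and element id from the full code later in the post):

var startBtn = document.getElementById("autoScrobblerStart");
var onStart = function() { autoScrobbler.start(); };
autoScrobbler.setNativeListener(startBtn, 'click', onStart);
//...and later, when it's no longer needed:
autoScrobbler.removeNativeListener(startBtn, 'click', onStart);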

Custom Event Handling

I originally implemented my own method of custom event handling in my article on making code extensible. I chose to do this due to the plethora of compatibility issues which come with custom event handling using the native provisions. Support for the new method was only fully implemented across browsers when IE9 came along; if we were to use it, the code would only be available for use in “modern” browsers (so not IE8 or below). For the basic uses we want it for (namely making our code more extensible and future-proof), we don’t want to lose all that support.

There are libraries and shims out there which will make all event handling a one-liner. Large libraries such as jQuery and MooTools were practically made for solving cross-browser issues like this, but you certainly wouldn’t want to use one just for this one piece of functionality, as it adds nearly 100kb to your download unnecessarily! There are very small libraries (known as micro libraries) which focus on just one piece of functionality. For example, Events.js provides a cross-browser events system which would solve the problem. The 140medley library actually solves a few problems* besides event handling, but it’s tiny (we’re talking bytes rather than kilobytes), so it may be worth using anyway.

However, my code is an addition to someone else’s code, and through this process we’ve actually grown the code drastically (though this has added further functionality). We’ve even added to the number of file requests made by separating out the styles (we could probably sort this out with even more code). While I originally wanted to keep things lean, I’ve expanded the code, and adding another library on top would prove even more weighty. For this reason I am going to keep my custom event handler set up as it is (though I’ve just added a remove and an initiate function to make it a full implementation). I will make this available on GitHub when I finish this series of articles so other people can use this basic implementation, and if you’d prefer to use someone else’s code instead, Nicholas Zakas has a great implementation too.
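
To give an idea of how the custom implementation is used (the API names come from the full code at the end of this article; the event name here is hypothetical):

//Register an event, listen for it, then fire it.
autoScrobbler.evtInit("myEvent");
autoScrobbler.listen("myEvent", function() { console.log("myEvent fired"); });
autoScrobbler.trigger("myEvent"); //Calls every listener set for "myEvent".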

querySelector

The event handling issues are definitely the worst parts of the code to deal with where compatibility is concerned, but a much more common compatibility issue is the use of element.querySelector(). This function is brilliant for targeting any element on the page using CSS-style selectors. Without it we are limited to functions such as document.getElementById(), which only allow the targeting of one element (or one kind of element) at a time. As above, jQuery and MooTools have had this selector functionality built in for years. But again, adding a near-100kb file to your load just for this functionality is overkill; I would instead go back to recommending 140medley*, which will sort this functionality out for you in less than a kilobyte.

I won’t be using that library here because, again, it would add more to our code, and I’d have to include a copyright licence to legally use the code. Because the operations are simple, I’m instead going to fall back to the lowest common denominator. I know that all the browsers that I need compatibility with have functions such as document.getElementById(), document.getElementsByClassName() and element.getElementsByTagName(). So I will simply rewrite my code to use these.

...
AutoScrobbler.prototype.addLatest = function() {

var tracks = document.querySelectorAll(".userSong");
//Can instead be written as
var tracks = document.getElementsByClassName("userSong");

... tracks[1].querySelector("input").value ...;
//Can instead be written as
... tracks[1].getElementsByTagName("input")[0].value ...;

...

... item.querySelector("input").value ...
//Can instead be written as
... item.getElementsByTagName("input")[0].value ...

...
... tracks[0].querySelector("input").value;
//Can instead be written as
... tracks[0].getElementsByTagName("input")[0].value;

...

item.querySelector("input").checked = true;
//Can instead be written as
item.getElementsByTagName("input")[0].checked = true;

...

}

I was using querySelector here “because I could” in most cases, rather than “because it was essential”; I wasn’t targeting anything very specific. Even if I had a couple of selectors in each querySelector, it would’ve been easy to break these down into two steps. I have made things far more compatible simply by taking the time to rethink my code a little. This is the best type of compatibility refactoring, as it requires no extra code, just rewritten code to get further browser support.
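
As an illustration of that “two steps” point (the compound selector here is hypothetical):

//Modern, compound selector:
var firstInput = document.querySelector(".userSong input");
//The same lookup broken into two compatible steps:
var firstTrack = document.getElementsByClassName("userSong")[0];
var firstInputCompat = firstTrack.getElementsByTagName("input")[0];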

We’ve made a few changes here, so I’ll post the full code below, just for reference.

/** AutoScrobbler is a bookmarklet/plugin which extends the Universal Scrobbler
*   web application, allowing automatic scrobbling of frequently updating track
*   lists such as radio stations.
*
*   This is the constructor, injecting the user controls and starting the first
*   scrobble.
*/
function AutoScrobbler() {

    var userControls = "<div id=\"autoScrobbler\" class=\"auto-scrob-cont\">\n" +
                       "<input id=\"autoScrobblerStart\" class=\"start\" type=\"button\" value=\"Start auto-scrobbling\" /> | <input id=\"autoScrobblerStop\" class=\"stop\" type=\"button\" value=\"Stop auto-scrobbling\" />\n" +
                       "<p class=\"status-report\"><span id=\"autoScrobblerScrobbleCount\">0</span> tracks scrobbled</p>\n" +
                       "</div>\n";
    this.stylesUrl = "http://www.andrewhbridge.co.uk/bookmarklets/auto-scrobbler.css";
    this.injectHTML(userControls, "#mainBody");
    this.injectStyles();
    this.startElm = document.getElementById("autoScrobblerStart");
    this.stopElm = document.getElementById("autoScrobblerStop");
    this.loopUID = -1;
    this.lastTrackUID = undefined;
    this.scrobbled = 0;
    this.countReport = document.getElementById("autoScrobblerScrobbleCount");
    this.evtInit(["addLatest", "loadThenAdd", "start", "stop"]);
    //Wrap the handlers so 'this' refers to the instance when they fire.
    this.listen("addLatest", function() { autoScrobbler.reportScrobble(); });
    this.setNativeListener(this.startElm, 'click', function() { autoScrobbler.start(); });
    this.setNativeListener(this.stopElm, 'click', function() { autoScrobbler.stop(); });
    this.start();

}

/** Inject the css stylesheet into the <head> of the page.
*/
AutoScrobbler.prototype.injectStyles = function() {

    //Stylesheets are loaded with a <link> element, not a <script> element.
    var styles = document.createElement('LINK');
    styles.rel = 'stylesheet';
    styles.type = 'text/css';
    styles.href = this.stylesUrl;
    document.getElementsByTagName('head')[0].appendChild(styles);

}

/** Hashing function for event listener naming. Similar implementation to
*   Java’s hashCode function. Hash collisions are possible.
*
*   @param toHash The entity to hash (the function will attempt to convert
*                 any variable type to a string before hashing)
*   @return A number up to 11 digits long identifying the entity.
*/
AutoScrobbler.prototype.hasher = function(toHash) {

    var hash = 0;
    toHash = "" + toHash;
    for (var i = 0; i < toHash.length; i++)
        hash = ((hash << 5) - hash) + toHash.charCodeAt(i);
    return hash;

}

/** Custom event initiator for events in AutoScrobbler.
*
*   @param eventName The name of the event. This may be an array of names.
*/
AutoScrobbler.prototype.evtInit = function(eventName) {

    //Initialise the evtLstnrs object and the event register if it doesn't exist.
    if (typeof this.evtLstnrs == "undefined")
        this.evtLstnrs = {"_EVTLST_reg": {}};

    if (typeof eventName == "object") {
        for (var i = 0; i < eventName.length; i++) {
            var event = eventName[i];
            this.evtLstnrs[""+event] = [];
        }
    } else
        this.evtLstnrs[""+eventName] = [];

}

/** Custom event listener for events in AutoScrobbler.
*
*   @param toWhat A string specifying which event to listen to.
*   @param fcn A function to call when the event happens.
*   @return A boolean value, true if the listener was successfully set. False
*           otherwise.
*/
AutoScrobbler.prototype.listen = function(toWhat, fcn) {

    //Initialise the function register if not done already
    if (typeof this.evtLstnrs._EVTLST_reg == "undefined")
        this.evtLstnrs._EVTLST_reg = {};

    if (this.evtLstnrs.hasOwnProperty(toWhat)) {

        //Naming the function so we can remove it if required. Uses hasher.
        var fcnName = this.hasher(fcn);

        //Add the function to the list and record its position in the register.
        var event = this.evtLstnrs[toWhat];
        event[event.length] = fcn;
        this.evtLstnrs._EVTLST_reg[toWhat+"->"+fcnName] = event.length - 1;
        return true;

    } else
        return false;

}

/** Custom event listener trigger for events in AutoScrobbler
*
*   @param what Which event has happened.
*/
AutoScrobbler.prototype.trigger = function (what) {

    if (this.evtLstnrs.hasOwnProperty(what)) {
        var event = this.evtLstnrs[what];
        for (var i = 0; i < event.length; i++)
            //Skip any slots voided by unlisten.
            if (typeof event[i] == "function")
                event[i]();
    }

}

/** Custom event listener removal for events in AutoScrobbler
*
*   @param toWhat A string to specify which event to stop listening to.
*   @param fcn The function which should no longer be called.
*   @return A boolean value, true if removal was successful, false otherwise.
*/
AutoScrobbler.prototype.unlisten = function(toWhat, fcn) {

    var fcnName = this.hasher(fcn);
    if (this.evtLstnrs._EVTLST_reg.hasOwnProperty(toWhat+"->"+fcnName)) {

        var event = this.evtLstnrs[toWhat];
        var fcnPos = this.evtLstnrs._EVTLST_reg[toWhat+"->"+fcnName];
        event[fcnPos] = void(0);
        delete this.evtLstnrs._EVTLST_reg[toWhat+"->"+fcnName];

        return true;

    }

    return false;

}

/** Set a native event listener regardless of the browser you're using.
*
*   @param eventElm The element to attach the listener to.
*   @param eventType The type of event to listen for.
*   @param callback The function to call when the event happens.
*   @return Undefined if successful, otherwise an exception will be thrown.
*/
AutoScrobbler.prototype.setNativeListener = function(eventElm, eventType, callback) {

    if (eventElm.addEventListener)
        eventElm.addEventListener(eventType, callback, false);
    else if (eventElm.attachEvent)
        eventElm.attachEvent('on' + eventType, callback);
    else {
        //Fall back to the very old element.onevent style of handler.
        eventElm['on' + eventType] = callback;
        if (typeof eventElm['on' + eventType] == "undefined")
            throw "EventHandler Error: The event could not be set";
    }

}

/** Remove a native event listener regardless of the browser you're using.
*
*   @param eventElm The element that had been listening.
*   @param eventType The type of event that was being listened for.
*   @param callback The function that was set to trigger.
*   @return Undefined if successful, otherwise an exception will be thrown.
*/
AutoScrobbler.prototype.removeNativeListener = function(eventElm, eventType, callback) {

    if (eventElm.removeEventListener)
        eventElm.removeEventListener(eventType, callback, false);
    else if (eventElm.detachEvent)
        eventElm.detachEvent('on' + eventType, callback);
    else {
        //Clear the old element.onevent style of handler.
        eventElm['on' + eventType] = undefined;
        if (typeof eventElm['on' + eventType] != "undefined")
            throw "EventHandler Error: The event could not be removed";
    }

}

/** A function which will inject a piece of HTML wrapped in a
*   <div> within any node on the page.
*
*   @param code The HTML code to inject.
*   @param where The node to inject it within.
*   @param extraParams An object which allows optional parameters
*   @param extraParams.outerDivId The id to be given to the wrapping <div>
*   @param extraParams.outerDivClass The class to be given to the wrapping <div>
*   @param extraParams.insertBeforeElm An element within the element given
*                                      in where, to insert the code before.
*/
AutoScrobbler.prototype.injectHTML = function(code, where, extraParams) {

    var divId, divClass;
    var insBefElm = null;

    if (typeof extraParams != "undefined") {

        if (extraParams.hasOwnProperty("outerDivId"))
            divId = extraParams.outerDivId;

        if (extraParams.hasOwnProperty("outerDivClass"))
            divClass = extraParams.outerDivClass;

        if (extraParams.hasOwnProperty("insertBeforeElm"))
            insBefElm = extraParams.insertBeforeElm;

    }

    var node = document.querySelector(where);
    var elm = document.createElement('DIV');

    if (divId)
        elm.id = divId;

    if (divClass)
        elm.className = divClass;

    elm.innerHTML = code;
    node.insertBefore(elm, insBefElm);

}

/** Starts the auto-scrobbler, scrobbles immediately and schedules an update
*   every 5 minutes.
*/
AutoScrobbler.prototype.start = function() {

    this.loadThenAdd();
    //Wrapped so 'this' is the instance when the interval fires.
    this.loopUID = setInterval(function() { autoScrobbler.loadThenAdd(); }, 300000);
    this.startElm.disabled = true;
    this.stopElm.disabled = false;

}

/** Stops the auto-scrobbler, ends the recurring update and zeros the required
*   variables.
*/
AutoScrobbler.prototype.stop = function() {

    clearInterval(this.loopUID);
    this.lastTrackUID = undefined;
    this.loopUID = -1;
    this.stopElm.disabled = true;
    this.startElm.disabled = false;

}

/** Loads the new track list using Universal Scrobbler and schedules a scrobble
*   of the latest tracks 30 seconds afterwards.
*/
AutoScrobbler.prototype.loadThenAdd = function() {

    doRadioSearch();
    //Wrapped so 'this' is the instance when the timeout fires.
    setTimeout(function() { autoScrobbler.addLatest(); }, 30000);

}

/** Selects all the tracks which have not been seen before and scrobbles them
*   using Universal Scrobbler.
*/
AutoScrobbler.prototype.addLatest = function() {

    var tracks = document.getElementsByClassName("userSong");
    this.lastTrackUID = (typeof this.lastTrackUID == "undefined") ? tracks[1].getElementsByTagName("input")[0].value : this.lastTrackUID;

    //Check every checkbox until the last seen track is recognised.
    for (var i = 0; i < tracks.length; i++) {

        var item = tracks[i];
        if (item.getElementsByTagName("input")[0].value == this.lastTrackUID) {

            i = tracks.length;
            this.lastTrackUID = tracks[0].getElementsByTagName("input")[0].value;

        } else {

            item.getElementsByTagName("input")[0].checked = true;
            this.scrobbled++;

        }

    }
    doUserScrobble();
    this.trigger("addLatest");

}

/** Updates the user interfaces to reflect new scrobbles.
*/
AutoScrobbler.prototype.reportScrobble = function() {

    this.countReport.innerHTML = this.scrobbled;

}

// Create a new instance of the AutoScrobbler.
autoScrobbler = new AutoScrobbler();

This concludes my series of articles on refactoring code. What we’re left with is a much higher quality of code, which is cleaner, more efficient and easier to add to in the future. I’ll round this series off with a final “teardown” article to review and add some helpful tips which you may want to follow as you refactor your own code.

* It should be noted that, while 140medley does a brilliant job and will definitely solve these compatibility problems, it won’t check to see if the feature is already natively in the browser. This means modern browsers will be using a workaround needlessly.

Stub: 2013 Week 6

A stub is a short article which rounds up little bits of information that I’ve found throughout the week. These may be web or computer related, or they may be more general things. It’s more a personal log than an actual article, reminding me of things that I may’ve forgotten, but some of it may be of help to someone else!

February rolls on (and my workload picks up!) and another week of news has passed by!

This week: