Today there have been attention-grabbing headlines in a number of news outlets. One of them, from the Independent, read “WhatsApp and iMessage could be banned under new surveillance plans”. The article outlined the possibility that technologies and applications such as WhatsApp would be banned because they allow users to send messages which are encrypted end-to-end. This falls in line with the legislation that was rushed through during 2014, and the continuing loss of privacy that we have online.

One quote the article put heavy emphasis on, and which has in turn been picked up by several other news outlets, was as follows:

In our country, do we want to allow a means of communication between people which[…]we cannot read?

My initial urge was to get angry at how patently wrong it is to connect encryption and privacy to terrorism and violence. But then I decided to listen to the full comment from Cameron, rather than the paraphrased version. The full quote is as follows:

In our country, do we want to allow a means of communication between people which, even in extremis with a signed warrant from the home secretary personally, we cannot read?

It’s not much better, but it’s also not as bad as the original quote sounds. The issue is that I can’t say I want terrorists to be able to plot attacks on innocent people, but I don’t believe this is the way to prevent it. The key problem is the fundamental link being drawn between the authorities not having access to the content of every single communication made anywhere in the UK, and terrorism “winning”. It’s simply a fallacy, and letting the PM say it unopposed would mean accepting it as truth and allowing the erosion of our online privacy to accelerate.

Heavy Handed

Taking away everyone’s ability to access a completely private form of communication is a heavy-handed tactic which, as I’ve said before regarding government views and ideas on online freedom and privacy, won’t actually work. It is not possible to stop anyone from encrypting the communications that they send. It may be possible to stop a company from profiting by offering this type of service, thereby taking it away from the common user, but it is not possible to stop people from encrypting their messages themselves.

The types of people that really want communication which is encrypted end-to-end will be able to access it regardless of the law. Included in that user base are those that want to discuss illegal activities. It’s not difficult to find out how to set up a method of encryption such as PGP, and the active online community will no doubt offer a great deal of help to anyone that’s stuck.

The Glowing Record of Piracy Laws

Further, piracy laws are always a hot topic and probably a good example to learn from. They’re now failing so badly that the list of “most pirated shows of the year” is reported and celebrated. This year Game of Thrones hit the top of the list for the third year in a row, after being illegally downloaded at least 8.1 million times. Guess who lost out and weren’t able to enjoy their favourite TV show with everyone else: paying customers in both the UK and the US. Now guess who were able to enjoy it ad-free, only minutes after it finished its first airing in the US: those pirating the episode from around the world.

In the same way, a law banning completely encrypted, backdoor-free communication would simply make the majority of online users more vulnerable to having their personal communications leaked to the public. If 2013 and 2014 made anything clear, it’s that we don’t need to increase the likelihood of that happening.

Back to Work

To wrap up my rambling (and procrastination), I will simply conclude that, while I know that giving up our privacy isn’t the right way to help authorities deal with terrorism, I’m not entirely sure what is. I’d imagine that whatever solution is best will involve far more general knowledge of technology and computer security in UK government. The hackers and cyber criminals of the world are using social engineering, vulnerabilities in code and brute force attacks to get what they want, and it’s working. Maybe trying something that works as well as the criminals’ methods would be a good place to start.

Lessons learnt today: Double quotes, redirection and rubbish routers

I haven’t posted here in a long while and, unfortunately, no doubt that will continue. However, tonight I’ve learnt (or in some cases re-learnt) a few, albeit simple, lessons, and it felt sensible to note them down so I can remember them in the future.

Double Quotes vs. Single Quotes in PHP

This was a massive rookie error and a sign that I haven’t worked in PHP much over the past year.

While tightening my database security, I ended up patching up some dodgy database-related PHP code in a particularly old project. I spent nearly half an hour trying to work out why my passworded database user was being denied access to the database.

After a bit of debugging, I noticed the password was being cut off in the middle, and after further debugging and tracing the string back, I noticed that my randomly generated, double-quoted password string happened to have a ‘$’ character in it.

PHP (among other languages) tries to resolve variables within double-quoted strings, meaning “abc123$efg456” resolves to “abc123” if the variable $efg456 doesn’t exist in your script, because undefined variables interpolate as empty strings. The solution was simply to exchange the double quotes for single quotes, which PHP treats literally.

Lesson: If you’re working in a language which treats double and single quoted strings differently, check you’re using the right ones!
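The shell treats double and single quotes the same way PHP does, which makes for a quick way to watch the truncation happen (the “abc123$efg456” string is just the illustration from above, not a real password):

```shell
# efg456 is unset, so inside double quotes the shell interpolates it
# as an empty string, exactly as PHP did with my password string.
unset efg456
double="abc123$efg456"
single='abc123$efg456'
echo "$double"    # prints: abc123
echo "$single"    # prints: abc123$efg456
```

Single quotes keep the ‘$’ literal in both languages, which is exactly why swapping the quote style fixed the bug.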

.htaccess redirection

.htaccess always ends up leeching away my time. This time I was trying to set up some redirects to treat a sub-directory as the root directory, but only if the file or directory didn’t exist in the root directory and did exist in the sub-directory.

This is simple enough if you know what the .htaccess variables mean, but in using examples and making assumptions I tripped myself up. So here’s the bit I learnt:

%{REQUEST_FILENAME} – This isn’t just the filename that was requested, but the full local filesystem path to the file or script matching the request.
%{REQUEST_URI} – The path component of the requested URI, relative to the document root and beginning with a slash.
%{DOCUMENT_ROOT} – This is usually the filesystem path up to the root directory of your site (though I’m quite sure this is not always the case).

So given the path “/a/file/path/to/a/website/index.html”:

%{REQUEST_FILENAME} = /a/file/path/to/a/website/index.html
%{REQUEST_URI} = /index.html
%{DOCUMENT_ROOT} = /a/file/path/to/a/website

Simple when you know, but confusing otherwise! In any case, here’s the resulting rule I cobbled together:

RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{DOCUMENT_ROOT}/other%{REQUEST_URI} -f
RewriteRule ^(.*)$ /other/$1 [L,QSA]

That won’t suffice if you need directories to work as expected, and it will only apply to files, but it’s as much as I need for now.
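If I ever do need directories, I’d guess the variant below would cover them (untested on my part, so treat it as a sketch rather than something I’ve cobbled together and verified like the rule above):

```apache
# Same idea, but matching directories that exist under /other
# and not in the root. Untested sketch.
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{DOCUMENT_ROOT}/other%{REQUEST_URI} -d
RewriteRule ^(.*)$ /other/$1 [L,QSA]
```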

Lesson: Don’t assume things, especially when dealing with something as powerful as .htaccess files. The more you know and use it, the less of a pain it will be.

NAT Loopback and remote IPs not working locally

Having acquired a new domain name today, I decided to put it to work as a domain for my home server (with help from no-ip). Having set it all up, I came across a peculiar scenario: I was able to access the machine remotely via the domain (the outward-facing IP), and locally via the local IP address, but I was unable to access it locally using the public IP or domain name.

In a few minutes I realised that this was not so peculiar at all. The Network Address Translation (NAT) rules decide where inbound requests should go when they hit the router, and I have my router set up to forward certain connections through to my server. However, these rules don’t apply to requests which originate inside the network and pass through the router on the way out. I’d only be guessing, but I’d imagine this is because responses to requests across the Internet would otherwise have these rules applied to them as well, completely breaking the network.

To solve this issue, many routers offer NAT loopback: a feature (or further rule) which correctly resolves requests made from inside the network to the router’s public IP. It is commonly turned off by default due to security concerns, and in some routers it simply isn’t available at all.

Unfortunately, my Huawei HG533 router falls into the latter group, with no obvious plans of an upgrade which would fix this.
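One common workaround when the router can’t do it is to point the domain straight at the server’s local address in the hosts file of each LAN machine (the IP and domain below are placeholders for your own setup):

```
# /etc/hosts on each LAN client
# (on Windows: C:\Windows\System32\drivers\etc\hosts)
192.168.1.10    myserver.example.com
```

Machines outside the network are unaffected, since they still resolve the domain through DNS as normal; the downside is you have to maintain the entry on every local machine.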

Lesson: If you want to use one address to access a machine locally and remotely, ensure NAT Loopback is set up.

All simple stuff, but it’s been interesting learning about it all. Hopefully I can continue documenting small things like this over the next year; my final year of university should be a steep learning curve!

HTML5: requestAnimationFrame made easy

I’ve been searching for a recent, clear article on how to start using requestAnimationFrame and it just doesn’t seem to exist.

requestAnimationFrame is an API for animating styling changes, canvas or WebGL. It removes the need to use setInterval or setTimeout to continuously call an animation function and, along with it, optimises the animation for both visual smoothness and resource usage, since the browser can sync frames to the display’s refresh rate and pause animations in background tabs.

This sounds great, but the API is still experimental and so is liable to change, which it has done over the past year. This means a lot of the demos of the API online are outdated, and those that aren’t make the API seem much harder to use than it actually is. This is what I’m hoping to remedy.
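To give a taste of the shape of it before the full walkthrough: the loop below animates for one second and stops. The raf fallback, the progressAt helper and the one-second duration are my own illustration, not part of the API, and the fallback just keeps the sketch runnable outside a browser:

```javascript
// Use the native API in a browser; fall back to setTimeout at ~60fps
// elsewhere, purely so this sketch can run anywhere.
var raf = (typeof window !== 'undefined' && window.requestAnimationFrame)
  ? window.requestAnimationFrame.bind(window)
  : function (cb) { return setTimeout(function () { cb(Date.now()); }, 1000 / 60); };

// Clamp elapsed time into a 0..1 progress value.
function progressAt(timestamp, start, duration) {
  return Math.min((timestamp - start) / duration, 1);
}

var start = null;
var DURATION = 1000; // animate over one second

function step(timestamp) {
  if (start === null) start = timestamp;
  var progress = progressAt(timestamp, start, DURATION);
  // Apply the frame here, e.g. box.style.left = (progress * 200) + 'px';
  if (progress < 1) raf(step); // schedule the next frame
}

raf(step);
```

The key idea is that you never set your own timer in the browser case: each frame asks for the next one, and the browser decides when (and whether) to fire it.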

Continue reading HTML5: requestAnimationFrame made easy