Saturday, 29 June 2013

PRISM, Metadata & Minority Report: Why you should be concerned

As the privacy debate continues to rage about PRISM, assurances are surfacing defending the US government agencies' PRISM approach, namely the covert monitoring of all internet traffic. The arguments put forward are that "we need PRISM to combat terrorism", "you have no need to worry if you're not a terrorist" and "don't worry, it's only metadata we keep".  How does PRISM combat terrorism? What is metadata? Should we really be concerned if we are not terrorists?


So what is metadata? The standard definition, "information about information", isn't very clear, so let me explain with an example. Take a phone call: the metadata is not the actual recording of the call, but the information about the call, such as who the call was made to, the length of the call, the date and time of day, and keywords spoken on the call (picked out via voice recognition). This is the sort of metadata most likely to be kept.
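To make the distinction concrete, here is a minimal sketch in Python of how a call metadata record might be structured. The field names are invented for illustration; the point is that the record describes the call while the audio itself is deliberately absent.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical shape of a call metadata record: it describes the call,
# but contains none of the call's actual audio content.
@dataclass
class CallMetadata:
    caller: str            # originating phone number
    callee: str            # number the call was made to
    started_at: datetime   # date and time of the call
    duration_secs: int     # length of the call
    keywords: list         # keywords flagged by voice recognition

record = CallMetadata(
    caller="+44 7700 900123",
    callee="+44 7700 900456",
    started_at=datetime(2013, 6, 29, 14, 5),
    duration_secs=312,
    keywords=["meeting", "airport"],
)
```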

In an email monitoring context, the metadata is the recipients of the email, the date and time it was sent, the approximate location of the sender (via IP address), and whether a defined selection of keywords are present within the text of the email and its attachments. This is the information the security services want to keep hold of en masse. Volumes of such metadata can then be automatically processed (data mined) to build a profile of an individual, or even groups of individuals.  It is the mining of this information which provides the desired result for the secret services, namely identifying potential terrorists; that is their argument, and who is to say it doesn't work.   So if you were a potential terrorist plotting an attack, and discussed bombs and the other typical terrorism keywords within your emails "too much", you would pass a threshold and your account/identity would be flagged for closer scrutiny by the security services.
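A crude version of that threshold logic is easy to sketch. The Python below is purely illustrative, the watchlist, weights and threshold are all invented; it just shows how keyword counts in email metadata could trip a flag:

```python
# Purely illustrative: a naive keyword-threshold check over email metadata.
# The watchlist weights and threshold below are invented for the example.
WATCHLIST = {"bomb": 5, "detonator": 8, "attack": 3}
FLAG_THRESHOLD = 20

def score_email(keyword_hits):
    """keyword_hits maps a watched keyword to how often it appeared in one email."""
    return sum(WATCHLIST.get(word, 0) * count for word, count in keyword_hits.items())

def flag_account(emails_keyword_hits):
    """Flag an account if the summed score across all its emails passes the threshold."""
    total = sum(score_email(hits) for hits in emails_keyword_hits)
    return total >= FLAG_THRESHOLD

# Example: keyword hits across two emails belonging to one account
account_emails = [{"bomb": 2, "attack": 1}, {"detonator": 1}]
print(flag_account(account_emails))  # True: 2*5 + 1*3 + 1*8 = 21
```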

The same would be true with analysis of the websites you visit: visit too many terror-related websites and you can expect to be flagged. In fact, I wouldn't be surprised if they married up the email and web traffic metadata.
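Correlating the two sources is trivial once both are keyed by the same identity. Again a hypothetical sketch, assuming each record has already been tied to an identifier such as an email address or subscriber ID:

```python
# Hypothetical correlation of email and web metadata keyed by identity.
email_flags = {"alice@example.com": 21, "bob@example.com": 4}   # keyword scores
web_flags = {"alice@example.com": 15, "carol@example.com": 9}   # visits to watched sites

# An identity scoring highly across both feeds gets priority for closer scrutiny.
combined = {
    identity: email_flags.get(identity, 0) + web_flags.get(identity, 0)
    for identity in set(email_flags) | set(web_flags)
}
for identity, score in sorted(combined.items(), key=lambda kv: kv[1], reverse=True):
    print(identity, score)
```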

In essence this is just like the movie Minority Report with Tom Cruise, except it is a reality: mining big data sets is used to predict future human behaviour, in this scenario to stop terrorists before they commit an attack.


This type of data mining to predict human behaviour is nothing new; Facebook, Google, Microsoft and Amazon all use similar techniques to direct advertisements at you.  Another example is the Tesco supermarket chain. Over the last 15 years Tesco has been extremely successful in growing its business, and that success has been partly due to Tesco mining its customers' shopping habits, gathered from their Tesco ClubCards over the years. You could even argue Tesco is just as secretive about its mining of big data as the US government security services.


Whether this type of monitoring of information is right or not all depends on which side of the privacy fence you sit. But Minority Report style prediction of human behaviour presents a new and interesting privacy angle on big data mining, especially when used to predict criminal behaviour as in the movie. Too far fetched, I hear you cry? Yet the LAPD are piloting such a system with great success: Predictive Policing: The Future of Law Enforcement? Where it will lead is the question...

Saturday, 15 June 2013

Man City Hack: When Information is worth more than money

The Manchester City scouting database hack is close to my heart on two counts: it highlights the corporate espionage side of information security, and it involves my other passion away from security, the beautiful game, football.


Funny but it's no laughing matter for MCFC

City Scouting Database Compromise is Clouded
What is clear is that Manchester City officials believe their confidential scouting database has been taken by a rival club employee, but how this data was compromised is cloudy.  The City scouting data was stored in a cloud-based (online) application called ProScout7. Scout7, a Midlands-based company, were quick to deny their system had been hacked, and suggested the fault lay with City's scouts' password management. In other words, either a City scout had not protected their username and password, or perhaps the PC the scout was using to access Scout7 had been compromised with a keylogger or trojan, passing the Scout7 account credentials on to a rival scout. The released details on the cause are sketchy, and it is quite possible the ProScout7 system was hacked, but we can only speculate at this point. One thing is for certain though: the scouting information is very important to Manchester City football club, and it is of value to their footballing competitors.
When Information is worth more than money
City's scouting knowledge has a direct cash value, in that rival teams may be alerted to and bid for the same players City are interested in, pushing up the transfer price. This could easily add millions to a transfer fee.  But there is another value beyond the transfer fee: City want to beat rivals like Manchester United, Chelsea and the other big European spenders to signing the best available players. Signing players ahead of rivals can make all the difference, and can decide who wins titles.  If Robin van Persie had been signed by City instead of United last season, I am sure most footballing pundits would agree City would have won the title.

Case in point: as soon as City found out their database had been compromised by a rival club, they immediately took action and signed two of their secret targets, Jesus Navas (£24m) and Fernandinho (£30m), before their rivals could muscle in.


 £24 Million Navas

£30 Million Fernandinho

In all, this is an interesting incident, as it highlights the real, high-stakes value of information, and the reality of corporate espionage in the UK. The incident also poses the usual set of security questions, starting with: when information is known to be a high-value business asset, is the business really doing enough to protect that asset?   For example:
  • Are the scouts using the scouting system adequately managed?
  • Are the scouts regularly receiving information security awareness training?
  • Does the scouting application sufficiently protect the scouting database? Especially with access control, ensuring scouts only have access to information on a need-to-know basis (see the sketch after this list).
  • Are the computers used by the scouts appropriately secured? i.e. anti-virus, patch management, and other endpoint security technologies.
  • Is the third party scouting company adequately vetted and managed by City?
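On the need-to-know point referenced above, here is a minimal sketch of the kind of per-scout access check a scouting application could enforce, so that one compromised account cannot dump the entire database. The roles, regions and data are hypothetical; this is not how Scout7 actually works, just an illustration of the principle.

```python
# Hypothetical need-to-know access check for a scouting application.
# Each scout is only assigned the regions they actually cover; the
# application refuses to return reports outside that assignment.
SCOUT_REGIONS = {
    "scout_alice": {"Spain", "Portugal"},
    "scout_bob": {"Brazil"},
}

SCOUT_REPORTS = [
    {"player": "Player A", "region": "Spain", "rating": 9},
    {"player": "Player B", "region": "Brazil", "rating": 8},
]

def reports_for(scout_id):
    """Return only the reports the scout is cleared to see."""
    allowed = SCOUT_REGIONS.get(scout_id, set())
    return [r for r in SCOUT_REPORTS if r["region"] in allowed]

print(reports_for("scout_alice"))  # Spain report only
print(reports_for("scout_bob"))    # Brazil report only
```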
Even if the ProScout7 online application were found to be at fault, Manchester City are still responsible for ensuring that Scout7, the third party City entrust with their holy-of-holies data, is able to protect their scouting information in line with City's valuation of it.


Friday, 14 June 2013

PRISM: How I would covertly monitor a Country's Internet Traffic

If I worked for a government intelligence agency and was tasked with devising a way to covertly monitor the public's Internet traffic, I would target the source of the Internet connectivity provision. That source resides within the telecommunications operators (telcos), e.g. BT, Virgin Media, AT&T. Many telcos double as ISPs, but it's the telcos who ultimately provide Internet access to the ISPs. An advantage of monitoring at the source is that I don't need to tell, or ask permission from, a series of private companies like Google, Facebook, Apple and Microsoft, as I can simply intercept and record all of the public's sent and received internet traffic en route to those private companies.


Typically telcos provide fast Internet connectivity to their clients (ISPs) over fibre optic cables. If I were to split the light signals sent over these fibre optic cables, I could allow traffic to continue on its merry way completely uninterrupted, while at the same time copying the light signal down another cable to a secret data centre, where I would simply record the traffic, reassemble the data and then analyse it. Could this splitting of fibre optic light communications be the origin of the name PRISM?
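Once you have a mirrored copy of the traffic, extracting connection metadata from it is straightforward. Here is a rough sketch using the Python scapy library against a local capture interface; the interface name "tap0" and the fields recorded are illustrative assumptions, not any known agency design:

```python
# Rough sketch: extract connection metadata from a mirrored traffic feed.
# Assumes the mirrored fibre tap is delivered to a local capture interface.
from scapy.all import sniff, IP, TCP

def record_metadata(packet):
    # Keep only metadata: who talked to whom, when, on which port, how much.
    if IP in packet:
        entry = {
            "time": packet.time,
            "src": packet[IP].src,
            "dst": packet[IP].dst,
            "dport": packet[TCP].dport if TCP in packet else None,
            "size": len(packet),
        }
        print(entry)  # in practice this would be written to bulk storage

# "tap0" is an assumed name for the interface fed by the optical tap.
sniff(iface="tap0", prn=record_metadata, store=False)
```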

For a government it would be fairly simple to have your telco operators sign up under secrecy laws; indeed many telcos in the West were originally operated by governments, and continue to be licensed by their government, so they remain easy to lean on. This approach means the likes of Google, Microsoft, Apple and Facebook would never officially have to be asked, and therefore would never officially know about the monitoring, hence their official denials about PRISM.

The only surprise I have with the PRISM media storm is that people were actually surprised that this type of monitoring is conducted by their elected governments.  I am not a privacy nut, but it's fairly obvious that most governments in the world monitor their citizens' online usage.  The lure of big data monitoring of citizens was always going to be too good for government secret services to resist.

Wednesday, 12 June 2013

New OWASP Top Ten 2013 released, actually it's gone to a Top 11

Today, OWASP officially released their updated list of the Top 10 Web Application (website) risks.

The Open Web Application Security Project (OWASP) is an open community dedicated to enabling organisations to develop, purchase, and maintain applications that can be trusted. The Top 10 list identifies some of the most critical risks facing organisations in web application security, and is a trusted resource often referred to within the information security industry as the best practice to adhere to in application security.

OWASP update their Top 10 list every three years; this latest OWASP Top 10 list was released today, 12 June 2013.

OWASP Top 10 2013
A1 Injection
A2 Broken Authentication and Session Management
A3 Cross-Site Scripting (XSS)
A4 Insecure Direct Object References
A5 Security Misconfiguration
A6 Sensitive Data Exposure
A7 Missing Function Level Access Control
A8 Cross-Site Request Forgery (CSRF)
A9 Using Known Vulnerable Components
A10 Unvalidated Redirects and Forwards
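
To make the list a little more concrete, here is a minimal Python sketch of the top entry, A1 Injection, showing a vulnerable SQL query built by string concatenation alongside the parameterised version that defeats it. The table and data are invented for the example:

```python
# Minimal A1 Injection illustration using an invented in-memory users table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice-secret'), ('bob', 'bob-secret')")

user_input = "alice' OR '1'='1"  # attacker-controlled value

# Vulnerable: the input is concatenated straight into the SQL, so the
# OR clause is interpreted as SQL and every row comes back.
vulnerable = conn.execute(
    "SELECT * FROM users WHERE username = '" + user_input + "'"
).fetchall()
print(vulnerable)  # both users' secrets leak

# Safe: a parameterised query treats the input purely as data.
safe = conn.execute(
    "SELECT * FROM users WHERE username = ?", (user_input,)
).fetchall()
print(safe)  # no rows match the literal string
```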

What's Changed?  A Top 11?
In comparison to the last release in 2010, it's actually a Top 11, as "A9 Using Known Vulnerable Components" has been added to the list. This highlights the risk of developers using third party plugins, which, if unvetted, may carry or introduce vulnerabilities, and may even act as malicious trojans, introducing covert data theft and backdoors. This is a risk often associated with website Content Management Systems (CMS) like Joomla and Drupal, where active communities freely provide thousands of third party modules which developers can snap into their websites. Even though most modern CMS systems do a decent job of protecting themselves from such third party modules, they still present a risk which needs to be addressed by developers.
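Managing this risk starts with simply knowing which third party components and versions you are running, then checking them against published advisories. The sketch below is hypothetical, the component inventory and advisory list are made up; in practice a tool such as OWASP Dependency-Check automates this kind of comparison:

```python
# Hypothetical check of installed third party components against
# a list of known-vulnerable versions (both lists are invented).
installed = {
    "gallery-plugin": "2.1.0",
    "contact-form": "1.4.2",
}

known_vulnerable = {
    ("gallery-plugin", "2.1.0"): "example advisory: remote file upload",
}

for name, version in installed.items():
    advisory = known_vulnerable.get((name, version))
    if advisory:
        print("UPGRADE {} {}: {}".format(name, version, advisory))
    else:
        print("OK {} {}".format(name, version))
```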

To accommodate this addition, the 2010 list's "A7 Insecure Cryptographic Storage" and "A9 Insufficient Transport Layer Protection" entries have been merged into a single "A6 Sensitive Data Exposure" entry. So technically speaking nothing has been removed from the list and there is one addition, hence the Top 11 comment.

Finally, the Top 10 list is just that, the 10 most prominent application security risks; other risks exist, so see the OWASP website for further details.