
The dangers of pre-installing Java on corporate laptops

For the last couple of weeks, all the IT security news headlines have revolved around the OpenSSL Heartbleed vulnerability (CVE-2014-0160), so we’ve decided to write about something completely different to keep things interesting.

For years now, we have been advising clients that it is good practice to maintain secure laptop, desktop and server builds across their estates. This provides a baseline for each Operating System that gets deployed, and also means that your network and system administrators have a high level of control over what software is installed across the estate. More time will need to be spent upfront making sure that these Operating System builds are both functional and secure for end users, but the reward is a considerably reduced overall risk to your organization. When using an Active Directory domain, it is also highly recommended to lock down the Operating System builds even further using Group Policy Objects (GPOs).

Our team were recently asked to conduct a penetration test for one of our clients, the scope being a web-based problem ticketing system. We were issued a corporate laptop to allow us to familiarize ourselves with the application and to read the internal documentation and the relevant processes and procedures around it. This allowed the team to perform a complete end-to-end security review for the client. The laptop was also configured with VPN access, allowing the team to work remotely from our offices.

The laptop was running Windows 7 and had been locked down, preventing us from installing any standard programs onto it (such as various penetration testing tools). The laptop also did not come with Local Administrative rights configured (thankfully), and we were only given standard user privileges. To add to the layers of protection around the client’s internal infrastructure, all connections to the Internet had to go through the internal corporate proxy server.

One thing our team discovered was that the corporate laptop included Java in the build. The internal corporate proxy server was configured to block downloads of Windows executable files and installers (.exe and .msi files); however, downloads of Java applications (.jar files) were allowed. Thanks to this, the team were able to run a web application proxy on the laptop. This in turn led to our team being able to perform the full penetration test of the application, all from the client’s corporate laptop.
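To illustrate why extension-based blocking is so brittle, here is a minimal Python sketch of that kind of filter. This is purely illustrative (the blocklist contents and URLs are assumptions, not the client’s actual proxy configuration): a blocklist only stops what it names, so a .jar file sails straight through, and any machine with Java pre-installed will happily run it via `java -jar`.

```python
# Illustrative sketch of an extension blocklist like the one described.
# Only what is explicitly named gets blocked; .jar is never considered.

BLOCKED_EXTENSIONS = {".exe", ".msi"}  # assumed blocklist, for illustration

def download_allowed(url: str) -> bool:
    """Return True if this hypothetical proxy would let the download through."""
    path = url.rsplit("?", 1)[0].lower()  # ignore any query string
    return not any(path.endswith(ext) for ext in BLOCKED_EXTENSIONS)

print(download_allowed("http://example.com/tool.exe"))        # False: blocked
print(download_allowed("http://example.com/burpsuite.jar"))   # True: allowed
```

An allowlist of approved software (or removing the Java runtime from the build entirely) avoids this failure mode, because the filter no longer has to enumerate every dangerous file type.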

During the penetration test, the team discovered that the application was vulnerable to numerous high-risk web application vulnerabilities; however, this really is the least of our client’s worries at this point. We never increased the scope of the penetration test to perform this testing, and we never violated the client’s security policy either, as we didn’t install anything on the laptop; we just downloaded and ran software. Technically, Java ran the application; we just downloaded it.

There are no security controls in place to stop internal users from performing the same steps we did to run Burp Suite, or any other Java application, on these corporate builds. This poses a huge risk to this client, their internal applications and, ultimately, their overall reputation. Yes, there is monitoring software and anti-virus software installed on these PCs, and attacks could be traced back to IP addresses and user names; however, host monitoring logs would only be reviewed after an attack, making them a reactive countermeasure, not an active protection measure.

Our team were tasked with performing a penetration test of just one of the internal corporate web applications, but, as with other large corporate clients, there are many more internal web applications in use within this organization. This means that an internal attacker could easily target any of the client’s numerous other web applications, all from their own corporate PC. With over 100 internal users, the risk to all internal web applications, and to the organization as a whole, increases dramatically.

This vulnerability would also allow internal users to download these tools and attack external applications outside the client’s infrastructure or realm of control. If an internal user were to compromise (or attempt to compromise) a web application hosted by the client’s competition, the attack would appear to come from the client’s network. This, in turn, could lead to all sorts of legal issues and difficult conversations.

We highly recommend that all corporate Operating System builds be penetration tested before being deployed so that vulnerabilities such as this can be mitigated before the builds are issued to internal users. At ITSG we take the security of our clients’ networks very seriously and this is why we always aim to point out as many vulnerabilities as possible during all our penetration testing engagements.

If you currently have a corporate Operating System build that hasn’t been through a thorough penetration test, then please consider doing this as a matter of urgency. ITSG will gladly assist you with developing secure Operating System builds or performing penetration testing of these builds if you already have them in place. Please contact us for more information.

Follow us on Twitter and/or Facebook to see more updates.

When you’re your own worst enemy.

Data loss and system downtime are a problem. For most people reading this, that is a self-evident truth. Yet what I find on the ground is a wholly different and unsavoury matter altogether.

I was recently socialising off-the-clock with a few folks in the telecommunications industry and a story was relayed to me. A story that prompted me to clock in as a security professional to offer advice and guidance, as I could not in good conscience let it pass without at least pointing a few things out.

During the course of the week starting 19/03/12, an operating company of a very large mobile operator experienced a system outage.

The system outage impacted the operator’s soft revenue generation mechanism (call completion to the terminating subscriber when that subscriber is IMSI-detached) for a significant portion of that region’s subscriber base. It happens. Hardware does not last forever; in particular, hard drives have a limited lifespan. How to prevent the widespread impact of drive losses is well known: effective storage system design, a hardline approach to backups, competent intervention teams (vendor and operator), and system re-engineering as a result of effective problem management.

By the time I heard about the problem, it had been ongoing for days and had been attributed to a “Known Error” which had occurred many times in the past on this vendor’s product line.

The vendor field intervention team reacted to the client report and reverted to their global support center. A JBOD had failed.

The recommended intervention, which would not have resolved the outage, cemented the problem by corrupting the root filesystem. This further aggravated the outage by introducing additional system failures. This “Known Error” had clearly not been effectively documented in the vendor’s knowledge base, nor had it been addressed with effective problem management despite its prior occurrences globally. Nor was the senior support person assisting the incident response team sufficiently qualified to deliver support.

The last backup had been performed two years prior, and when it was restored, it did not work – I won’t go into why, but I will say that the 2010 backup was not sane, and was therefore useless; essentially, not a backup at all.

Not only was there no backup policy in place for a production system – in itself a violation of the operator’s mandate for systems under the control of Vendor Managed Services – but for years the field teams had not followed the internal Field Change procedures laid out, i.e. backups before and after the implementation of Field Change Orders. Several Change Orders had been implemented in the past two years.
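A backup that has never been restored is not a backup. As a minimal sketch (my own illustration, not the vendor’s tooling), here is the kind of sanity check the field teams could have run after every change: write the archive, then immediately restore it into a scratch directory and verify that the restored files match the originals byte for byte.

```python
# Sketch of a backup-and-verify step: the backup is only considered good
# once a trial restore has been checked against the source checksums.
import hashlib
import tarfile
import tempfile
from pathlib import Path

def sha256(path: Path) -> str:
    """Checksum of a single file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def backup_and_verify(src_dir: Path, archive: Path) -> bool:
    # Record checksums of every file before archiving.
    originals = {p.relative_to(src_dir): sha256(p)
                 for p in src_dir.rglob("*") if p.is_file()}
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(src_dir, arcname=".")
    # Trial restore into a throwaway directory, then compare checksums.
    with tempfile.TemporaryDirectory() as scratch:
        with tarfile.open(archive) as tar:
            tar.extractall(scratch)
        restored = Path(scratch)
        return all(sha256(restored / rel) == digest
                   for rel, digest in originals.items())
```

The important design point is that verification happens at backup time, while the source data still exists to compare against – not two years later, during an outage, when a failed restore is unrecoverable.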

A sad state of affairs. Fortunately, this problem could have been resolved with a few more hours of work and a sound understanding of Unix. On a system built around four Unix boxes, a number of Linux machines and a few instances of a real-time OS, one would expect incident response teams to be knowledgeable, or at least to have the means to solve these problems within the team. They could not, and subsequently a senior vendor resource had to be flown into the region unnecessarily, at great expense (long-haul flight, hotel, per diem), to perform the rebuild. A rebuild that could have been procured on a day rate from skilled resources in the region, at a fraction of the overall cost and with a quicker turn-around time.

So, we have a loss of operator revenue, data and reputation, and a severe impact on vendor operational expenditure and reputation.

All of this is simply a compounded effect of:

  • Poor procedure
  • Lack of stakeholder oversight
  • Poor Service Level Agreement management
  • Poor risk analysis and impact assessments associated with Managed Service contracts
  • Low cost and arguably inadequate resources tied to Managed Services
  • Absence of training and development of aforementioned resources
  • Lack of regular, independent assessment and testing of Managed Services capabilities
  • Lack of regular, independent assessment and testing of Vendor Support mechanisms

The requirement for a well-defined penetration testing program is about more than just testing your estate against cyber attack. It’s about identifying all the vulnerabilities in your operation, be they physical, technical or human in nature. Penetration testing needs to be an ongoing, full-scope exercise covering all aspects of your business, no matter what.



Ever changing faces of vulnerabilities.

McAfee’s latest proof-of-concept demonstrated the ability to seriously injure someone with a direct cyber attack. This involved an attack on an implanted insulin pump, which could be induced to fully discharge its contents. In a diabetic, this will cause hypoglycæmia, which may result in brain damage or death if prolonged.

We’d like to congratulate Barnaby Jack for bringing attention to this.

We at IT Security Geeks have long held the belief that this sort of thing is possible. While our focus has largely been directed at conventional, commonplace vulnerabilities, we spend a fair amount of time working with obscure appliances and identifying issues with those products.

In my own area of influence, I’ve exploited rectifiers in controlled circumstances to effect power failures and fires (easier than Jack’s Medtronic exploit, as you really don’t need much knowledge, and most rectifiers are deployed with factory-default PIN codes for administrator access). Don’t believe me? One of the world’s foremost suppliers of DC power systems has a default password for the user “Admin”: it is “1”.

Yes, the number 1.

I have personally accessed production power systems in operators where this default was discovered, and upon making my recommendations for immediate action was told, “No. We keep it like that to make it easier for our technicians.” or “We don’t perceive a problem with that.”

No perceived problem? How so? The manual for this particular device tells me:

The User has full access to all menus, including updating the OS application and modifying, adding, and deleting Users.

Via the WEB Interface, a User (with proper access level) can:

  • View real-time operating information (rectifiers, converters, AC, DC, Batteries, etc.)
  • View and download information recorded in logs
  • Send control commands
  • Set programmable parameters
  • Download and upload configuration files
  • Download firmware to the Controller


Curious, is it not?
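This sort of exposure is also trivial to audit for. A minimal Python sketch (the credential list below is illustrative and hypothetical, except for Admin/1, which is the default described above): given the accounts configured on a power controller, flag any that still match known factory defaults.

```python
# Flag configured accounts that still use factory-default credentials.
# The defaults list is illustrative; only ("admin", "1") comes from the post.
KNOWN_DEFAULTS = {("admin", "1"), ("admin", "admin"), ("root", "root")}

def flag_default_credentials(accounts):
    """Return the (user, password) pairs that match a known factory default."""
    return [(user, pw) for (user, pw) in accounts
            if (user.lower(), pw) in KNOWN_DEFAULTS]

print(flag_default_credentials([("Admin", "1"), ("ops", "S3cure!")]))
# [('Admin', '1')]
```

A check like this belongs in commissioning and handover procedures, so that a device never enters production with its administrator PIN unchanged.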


I can all but guarantee that if you are a telecommunications operator or co-location provider, you have this vendor’s product somewhere in your network. Call us to chat.


IT Security Geeks Partners with IronKey

We are proud to announce that IT Security Geeks has partnered with IronKey, the leader in secure USB flash drives. We will be selling these devices, and also doing some really exciting things with them.

Website updates will be coming soon, with all the info.

Filed under: Uncategorized


IT Security Geeks would like to congratulate Neil Fryer for discovering a stack overflow vulnerability in Apple’s OS X CFNetwork.

The below is taken from the Apple Security update site:


CVE-ID: CVE-2010-1752
Available for: Mac OS X v10.5.8, Mac OS X Server v10.5.8, Mac OS X v10.6 through v10.6.4, Mac OS X Server v10.6 through v10.6.4
Impact: Visiting a maliciously crafted website may lead to an unexpected application termination or arbitrary code execution
Description: A stack overflow exists in CFNetwork’s URL handling code. Visiting a maliciously crafted website may lead to an unexpected application termination or arbitrary code execution. This issue is addressed through improved bounds checking. Credit to Laurent OUDOT of TEHTRI-Security, and Neil Fryer of IT Security Geeks for reporting this issue.


Exciting times ahead!

IT Security Geeks is officially off the ground now, and the next couple of months are going to bring some really exciting times and changes to the website.

We are currently looking at partnering with quite a few security related organizations in the next couple of months, and we’re also looking into doing some really interesting training in the UK, and hopefully expanding this to cover Europe as well.

Please keep an eye on the website, as it is still undergoing development and it will be for a few months yet, as we have a lot of things to add.
