
Vodafone’s infosec balls-up a symptom of wider problems

Vodafone’s apparent information security breach, if it’s being described accurately, certainly suggests a botched approach. But corporate Australia’s blasé attitude to our personal identity information is as much to blame.

Fairfax’s Natalie O’Brien broke the story yesterday that anyone with a valid Vodafone dealer login could access every customer’s complete file — name, address, date of birth, driver’s licence number, credit card details, the PIN they use to operate their account and even the full history of their phone calls and text messages. It’s wrong to say this data was “publicly available on the internet”. You did need a valid login, after all. But the set-up seems deeply flawed, and valid logins are on the loose.

“Vodafone retailers have said each store has a user name and password for the system,” O’Brien wrote. “That access is shared by staff and every three months it is changed.” That, for a start, fails a fundamental principle of infosec auditing. With a single login shared by everyone within a store, it’s impossible to track who accessed which data or who leaked their login details to the bad guys. And if someone gets sacked, they could still be able to access the system for up to three months.

Being able to view a customer’s PIN is just plain wrong. Passwords — and a PIN is just a password — should never be stored in their unencrypted form. Standard security practice is to store only a scrambled, one-way encrypted version of the password (a hash). When someone needs to supply their password, the same scrambling process is applied to what they enter and, if the result matches the stored version, access is granted. But you can’t turn the stored version back into the password itself without deploying spook-agency-grade computing resources.
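By way of illustration only, here is a generic Python sketch of that hash-and-compare approach. It is not a description of Vodafone’s actual systems, and a real deployment would add salting and a deliberately slow hashing scheme:

    import hashlib

    def store_pin(pin):
        # Keep only a one-way SHA-256 digest of the PIN, never the PIN itself.
        # (A real system would also add a per-customer salt; this is the bare-bones idea.)
        return hashlib.sha256(pin.encode()).hexdigest()

    def verify_pin(entered_pin, stored_digest):
        # Hash whatever the customer supplies and compare digests.
        # The operator's screen only ever needs to see True or False.
        return hashlib.sha256(entered_pin.encode()).hexdigest() == stored_digest

The operator keys in whatever the customer says, the system answers yes or no, and nobody ever needs to see the PIN itself.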

Any system that can reveal the password or PIN itself is broken.

Any system that exposes customer credit card information to a shared login is also broken. The Payment Card Industry Data Security Standard (PCI DSS) requires that access to cardholder data be restricted on a “business need-to-know” basis, that a unique ID be assigned to every person with computer access, and that all access to network resources and cardholder data be tracked and monitored.
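For what “a unique ID for every person” and “track and monitor all access” mean in practice, here is a minimal, generic sketch. The staff and customer identifiers are invented for illustration and nothing here is drawn from the PCI DSS text itself:

    import datetime

    def log_access(audit_log, staff_id, customer_id, action):
        # Every lookup is recorded against an individual staff member,
        # not a shared store login, so a leak can later be traced to a person.
        audit_log.append({
            "when": datetime.datetime.utcnow().isoformat(),
            "who": staff_id,            # unique per person, e.g. "store042-jsmith"
            "customer": customer_id,
            "action": action,           # e.g. "viewed billing history"
        })

    audit_trail = []
    log_access(audit_trail, "store042-jsmith", "cust-991234", "viewed billing history")

With a single shared login, the “who” column is meaningless, which is exactly the problem.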

All this, if true, is sufficient reason to slap Vodafone, and slap them very hard indeed.

But another part of the problem is the insistence — not just by Vodafone, but by so many companies — on compiling databases of personal information that simply aren’t required to fulfil the business need.

The idea that companies need to photocopy or scan a driver’s licence before issuing a mobile phone SIM card, for example, is “absolute rot” and “outrageous”, according to Paul Ducklin, head of technology for the Asia-Pacific region for global information security firm Sophos.

“I try to make a point that if people wish me to identify myself, for example checking into a hotel or dealing with a shop, and so I need photo ID, then I will hold up my driving licence for them to look at so that they can satisfy themselves that I’m who I say I am. They can write down my name if they wish,” Ducklin told the Patch Monday podcast. “Frequently they then just reach out and expect to take that licence and do something with it, and my response is always, ‘I’m sorry, you can look but you can’t touch’.”

The system only needs to record the fact that valid ID was sighted, not the ID itself.

Similarly, at least one ticket agency asks for and stores each customer’s date of birth. Why? Some performances have age restrictions, either because of their content or because they’re taking place on licensed premises. But this misses the point on two grounds.

One, you don’t need to record the date of birth, just the fact that the customer is of the right age. And two, as Ducklin points out, the age test is being conducted at the wrong point in the process. “It doesn’t matter what you say when you’re doing the booking, whether you’re over 18 or not determines whether you’re allowed to actually go in through the gates of the concert,” he said.

Vodafone is currently saying that they’re only aware of a single breach. One interesting question is whether that will turn out to be the case once their internal investigations are done. Another, in the absence of US-style laws requiring companies to disclose data breaches, is whether we’ll ever know.

23 comments
  • 1
    ronin8317
    Posted Monday, 10 January 2011 at 1:48 pm | Permalink

    There is a mobile phone PIN which is used to reset the phone, so the operators need that for support calls. Account PINs are also accessible by the operator because you have to read the PIN out to them over the phone to confirm your identity.

    Because Vodafone don’t audit access, they are being entirely truthful by saying they’re only aware of one breach. I expect their IT department will get a lot of action in the next few weeks.

  • 2
    Posted Monday, 10 January 2011 at 1:58 pm | Permalink

    @ronin8317: My understanding is that the mobile phone’s PIN — actually the SIM’s PIN, I think — can be changed without the new version being known to the network operator, but that there’s a separate unlock code to break into that should the PIN be forgotten. Have I got that right?

    The account PIN does need to be read to the phone operator to confirm the customer’s identity, but the point is that the PIN itself does not need to be — and I say should not be — stored or visible. The customer should state their PIN, the operator should key it in, and then the system tells the operator whether the PIN was correct or not. Having the PIN visible means the operator can impersonate the customer at a later time.

    And yes indeed, Vodafone’s IT people will be extremely busy right now…

  • 3
    Grover Jones
    Posted Monday, 10 January 2011 at 2:25 pm | Permalink

    Having been through a PCI DSS audit in the exact same industry as Vodafone (i.e. a now-defunct mobile provider) I am shocked, shocked I say, that Visa has authorised them to accept Visa cards, and Mastercard the same. The requirements for a business of that size are very strict, and, at least when we went through it about 6 years ago, Visa are not satisfied until everything is perfect.
    If the situation is as you have described it, then surely they are in major breach of PCI DSS and should have the ability to accept CC payments revoked until they can pass an audit.
    Remind me never to work for Vodafone (one of my IT hats is in security) if their IT staff are allowing this sort of crap. I’m appalled, and will continue to ignore Vodafone as a potential supplier.

  • 4
    Douglas Gration
    Posted Monday, 10 January 2011 at 2:27 pm | Permalink

    Unfortunately Paul Ducklin’s comment that “the idea that companies need to photocopy or scan a driver’s licence before issuing a mobile phone SIM card, for example, is ‘absolute rot’ and ‘outrageous’” isn’t correct.

    Consumers might normally expect that National Privacy Principle number 8 - which states that “Wherever it is lawful and practicable, individuals must have the option of not identifying themselves when entering transactions with an organisation” - would apply. But it doesn’t in the case of mobile phone companies. The Telecommunications (Service Provider — Identity Checks for Pre-paid Public Mobile Telecommunications Services) Determination 2000 (and its post-paid equivalent) prohibits a mobile carrier from activating a mobile service until the customer has provided all sorts of identifying information - presumably for the benefit of law enforcement and other government agencies.

    So, it’s completely fair to have a go at Vodafone for its lax security practices, but the blame for collection of the customer data in the first place lies with the government, not Vodafone.

  • 5
    Chris Ailwood
    Posted Monday, 10 January 2011 at 2:46 pm | Permalink

    I’m with Doug Gration on his view about Paul Ducklin’s comments. Ducklin is clearly in IT, not compliance.

    It is absolutely essential that when an identity is verified by the production of a document, such as a passport or driver’s licence, a copy be made of the document and retained. This not only protects the staff member undertaking the identity verification from accusations of collusion with the customer/potential fraudster but also provides the organisation with evidence that the identity verification was undertaken in the event that it has to defend against a negligence claim.

    There is no point in having a fraud protection requirement if you cannot assure compliance with the requirement.

  • 6
    Posted Monday, 10 January 2011 at 2:55 pm | Permalink

    Thanx Stilgherrian for reporting this. I did not believe the radio report I heard this morning that Vodafone staff members shared logins and passwords. How common is this?

    Perhaps this is a good subject for a Parliamentary inquiry - a highly visible but low stakes and moderate cost way of promulgating good IT security practices.

  • 7
    Posted Monday, 10 January 2011 at 3:14 pm | Permalink

    Thanks for the pointers to the relevant rules about telcos being required to do identity checks. But there’s a difference between verifying an identity and keeping a database of the source documents that were used to verify that identity. If indeed the government is requiring telcos to create such a massive database of documents that could be used for identity theft, then they’re part of the problem.

    I do understand the point that keeping a copy of the document protects staff against allegations that they didn’t do the check. But what’s the risk assessment here? And are there other ways of verifying that they’re doing their job, such as spot checks?

    My question would be whether creating these vast databases of identity documents is more of a risk than the supposed horror of someone getting a mobile phone under a false name. And on a related note, how many staff and dealers could spot a fake driver’s licence or passport anyway?

    The snippet of Ducklin I quoted here may not show his comments in the best light - the “joys” of editing quickly. You can here the full interview and therefore the full context in the Patch Monday podcast.

  • 8
    Posted Monday, 10 January 2011 at 3:16 pm | Permalink

    Um, you can “hear” it. Sigh. Monday.

  • 9
    Posted Monday, 10 January 2011 at 3:46 pm | Permalink

    There was a story in the current Cosmopolitan magazine (please don’t ask how I know this) about spying on your boyfriend. It included a quote from someone who worked at a phone store, and used to browse her boyfriend’s call and SMS history — and then, if she saw a number she didn’t recognise, would browse the database to find out who it belonged to.

    That’s abhorrent behaviour from the Cosmo commenter, but also entirely predictable behaviour given this security. For what possible reason could front-line staff at a phone store require my call and SMS history?

  • 10
    db
    Posted Monday, 10 January 2011 at 4:14 pm | Permalink

    The comment in the article on how passwords are typically checked is oversimplified. The computer that works out whether you should be allowed in or not doesn’t actually know your password either.

    What it does know is that if it applies a few mathematical operations involving fairly large numbers to your password, the result should exactly match something the computer has stored - your scrambled password effectively (often called a password hash). Working backwards and unscrambling it is similar to trying to unscramble an egg. Even someone with full access to that stored password information would need to take a very long time to try to find out what it is unless it can be obviously guessed by another means (eg. name of pet). Of course an attacker can feed an entire dictionary of likely passwords in to see what matches, but if that doesn’t work it comes down to “brute force”, where every possible password is checked one at a time to see if it matches. With a lot of very fast computers and typical password encryption this may take years. That’s why the author mentioned “deploying spook-agency-grade computing resources”.

    A computer does not need to know your password at all so storing the passwords in plain text for anything important is an unforgivable act of laziness. Even pacemakers based around a CPU originally designed in 1976 (Zilog Z80) have enough processing power to handle such secure passwords.
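    To illustrate the dictionary-attack point above, here is a toy Python sketch. It assumes plain, unsalted SHA-256 digests and an invented word list; real attacks are far more industrialised:

        import hashlib

        def dictionary_attack(stolen_digest, wordlist):
            # Hash each likely password and compare; a match recovers the original.
            # This is why guessable passwords fall quickly, while a genuinely random
            # password forces the attacker back onto slow brute force.
            for guess in wordlist:
                if hashlib.sha256(guess.encode()).hexdigest() == stolen_digest:
                    return guess
            return None

        # Example: dictionary_attack(stolen, ["password", "123456", "fluffy2010"])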

  • 11
    Posted Monday, 10 January 2011 at 4:22 pm | Permalink

    Thanx db

    Now that you’ve explained how much trouble my employer’s IT department goes to in securing our passwords, I’m much more tolerant of its seemingly ‘inconvenient’ insistence that we change our passwords frequently and its ‘annoying’ rejection of passwords that it considers too weak. I’m even feeling guilty that my password only just scrapes a pass on the weakness test.

  • 12
    ronin8317
    Posted Monday, 10 January 2011 at 5:43 pm | Permalink

    It is the PUK (PIN Unblocking Key) that the operator has access to, which allows the PIN to be reset for the SIM card.

    Using something like ‘MD5’ is great in theory, however I am yet to talk to an operator over the phone who cannot see my password/PIN, whether it’s for mobile, internet account, or banking.

  • 13
    wbddrss
    Posted Monday, 10 January 2011 at 8:27 pm | Permalink

    The complexity around securing a password goes something like this.

    1) The password is stored on a server in a database in an encrypted form. The server would be protected by password access and that password would be protected. All audit logs to the database for access privileges would be encrypted.
    2) Before sending the encrypted password over an intranet or the internet it could be decrypted. However this would mean sending it in clear text. So this is not good, therefore send it encrypted. The encrypted password would need to be encrypted. So it is encrypted twice.
    3) It is not wise sending anything over an intranet or the internet unless the channel is itself encrypted. This is the third encryption. Especially if it was the internet you would never use anything else but an encrypted channel. For an intranet, if the password was highly protected or secret or top secret you would expect the intranet to be an encrypted channel. Like in DoD separate networks.

    4) Now between buildings & between cities you would expect optical fibre and again these would be encrypted. This would be 4th time it would be encrypted.

    5) Tell me I am wrong.

    6) If PKI was used I would expect a certification process based on a pyramid. This way a digital certificate, an encrypted text file on the computer, holds the public or private keys used in the identity verification process. Let’s face it, I am lost. What the hell was Vodafone thinking? I must be wrong. I really think the whole IT area needs one big risk assessment. I am just glad I am not programming and responsible for all those keys. In PKI, keys are transmitted as well. Encrypted.

    Tell me I don’t know what I am talking about, because that is a very expensive process that is highly automated. Really I think I have gone too far. In a risk assessment, do we need all this encryption? Peer review please.
    I am no expert and most of this I know from corridor talk. In practice an auditor would have to go further than interviews and do some serious IT checking & verification. Not easy & not cheap.

    wbddrss

  • 14
    Martin Barry
    Posted Monday, 10 January 2011 at 8:30 pm | Permalink

    In regards to identity documents, I think a good comparison is the 100 point ID check at a bank.

    It might have changed since I last went through it, but they sight the documents and record only the unique ID of each. No photocopy, no other details.

    In regards to the Vodafone issue, surely any database that can be logged into from anywhere on the Internet needs to be secured with two-factor authentication? Usernames and passwords can be lent out. Secure token key fobs are a little harder to lend.
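    For readers wondering what a token fob is actually doing, here is a minimal sketch of an RFC 6238-style time-based one-time password in Python. It is a generic illustration with an invented secret, not a claim about any particular vendor’s fob:

        import base64
        import hashlib
        import hmac
        import struct
        import time

        def totp(secret_b32, interval=30, digits=6):
            # Derive a short-lived code from a shared secret and the current time,
            # per RFC 4226/6238. The secret never leaves the fob or the server.
            key = base64.b32decode(secret_b32, casefold=True)
            counter = int(time.time()) // interval
            digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
            offset = digest[-1] & 0x0F
            code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
            return str(code).zfill(digits)

        # Example: totp("JBSWY3DPEHPK3PXP") changes every 30 seconds.

    Because the code is derived from a secret locked inside the device and changes every 30 seconds, a lent-out or written-down code is useless minutes later.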

  • 15
    GlenTurner1
    Posted Monday, 10 January 2011 at 9:22 pm | Permalink

    Passwords have had their day. Most programmers fail to realise this. Those programmers who do realise it find that Australia’s penalties for privacy breaches are so low that they cannot argue for the company to spend money on authentication devices to ameliorate a financial risk.

    What is really required here is a penalty so large as to make authentication devices look cheap. Part 13 of the Telco Act has criminal penalties for privacy breaches. As we’ve seen with the huge reduction in workplace deaths, nothing concentrates an executive’s mind more than the thought of jail. So, what’s ACMA doing?

  • 16
    FelineCyclist
    Posted Monday, 10 January 2011 at 11:13 pm | Permalink

    I agree with Ducklin - keeping a copy of ID is absolute rot and unnecessary. It also breaches National Privacy Principle 1 - personal information is not to be collected unless the information is necessary for one of the company’s functions or activities. VF’s function or activity in this instance is signing up new customers and/or complying with legislation regarding ID checks. It only needs to view the ID for this purpose, not retain a copy. Collecting the information is NOT necessary for keeping tabs on staff. As already pointed out above, this can be done with spot checks and audits.

    This issue highlights a real weakness with the Privacy Act generally. Without a strong enforcement mechanism (eg a right to sue for breach of privacy), the interpretation and application of the NPPs is largely left to the companies themselves. It will be interesting to see what the investigation of the Australian Privacy Commissioner finds and, more importantly, what actions are taken as a result of those findings.

  • 17
    Richard Schmidt
    Posted Monday, 10 January 2011 at 11:53 pm | Permalink

    Who said the password was stored as clear text?
    The Fairfax article doesn’t mention it.
    The only mention of passwords in the article was about the staff login & passwords, and it mentioned that those were a secure login process.
    I think Fairfax is also a bit disingenuous by referring to a Vodafone-specific internet site as the Internet, when, by common practice, any web portal that needs a valid internal company-based name & password is an Intranet.

  • 18
    ian
    Posted Tuesday, 11 January 2011 at 10:48 am | Permalink

    “Being able to view a customer’s PIN is just plain wrong. Passwords — and a PIN is just a password — should never be stored in their unencrypted form.”

    This is kind of ironic being written on crikey.com.au, which happily sends me back my password in a plain text email should I go and request it. I reported this ages ago, and just tried it again (try the “get my password” link on http://offers.crikey.com.au/christmas-offer-renewal/ for example).

  • 19
    Astro
    Posted Tuesday, 11 January 2011 at 11:39 am | Permalink

    To have a file like this available over the net and not on a secure internal portal is criminal. Their CFO and CIO should be terminated.

  • 20
    ronin8317
    Posted Tuesday, 11 January 2011 at 12:05 pm | Permalink

    I guess nobody here ever worked as phone support?

    The credit card details/passwords are stored in the database in an encrypted form. This is to prevent some rogue IT guy doing a data dump and selling them to some Russian hacker. However passwords/security phrases/etc always appear unencrypted because typing it in ‘character by character’ doesn’t work over the phone. Access is audited to make sure none of the operators are stealing personal details. Obviously, Vodafone didn’t have an auditing system, otherwise there would not be a story.

    World of Warcraft has a better online password authentication system than most of our banks. >_<

  • 21
    Malcolm Young
    Posted Tuesday, 11 January 2011 at 1:07 pm | Permalink

    @ronin8317:

    Unfortunately using MD5 wouldn’t be that great at all. MD5 is considered cryptographically broken much like the SHA1 algorithm.

    Someone posted the correct way to handle this in the comments above. Passwords should NEVER be stored, encrypted or unencrypted. Instead a digest (or hash) of the password, along with a random ‘salt’ value (so that identical passwords don’t produce identical digests and precomputed lookup tables are useless), should be stored. When a password needs to be validated, the salt value is applied and the result is hashed. If the value matches that which is stored then the password is OK.

    Nowadays, nothing less than SHA256 should be used for this. There are stronger algorithms such as SHA512 but they tend to be computationally expensive. Even SHA256 is likely to be vulnerable to realistic brute force attacks sometime in the near to mid future. This is why NIST is currently running the SHA3 competition to identify hash algorithms that are considered strong enough for future applications.
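    A minimal Python sketch of the salted-digest approach described above (illustrative only; the field names and salt length are assumptions, and a production system would use a deliberately slow scheme such as a modern key-derivation function):

        import hashlib
        import os

        def make_record(password):
            # Store a random salt alongside the SHA-256 digest of salt + password.
            salt = os.urandom(16)
            digest = hashlib.sha256(salt + password.encode()).hexdigest()
            return {"salt": salt.hex(), "digest": digest}

        def check_password(password, record):
            # Re-apply the stored salt and compare digests; the password itself
            # is never stored and never needs to be visible to an operator.
            salt = bytes.fromhex(record["salt"])
            return hashlib.sha256(salt + password.encode()).hexdigest() == record["digest"]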

    Cheers,

    mal

  • 22
    Shane Murphy
    Posted Tuesday, 11 January 2011 at 1:31 pm | Permalink

    I think that you are missing some points in this discussion:

    1. The PIN referred to is the customer account access PIN only - it is used for identifying a customer calling in so that they can make changes or request information from the account - so it has to be visible to the staff member/dealer for verification. PUKs are also visible and are only given out after verifying the PIN. Mobile PINs aren’t (or were not) visible or accessible from these systems.

    2. Fraud in the mobile industry is an ongoing concern. There are significant trust issues between dealers and the telco - so the telco will usually insist on documentary proof of ID to validate a commission claim payment. That currently appears to be the only way to minimise staff/dealer assisted fraud.

    3. There is always a fine balancing act between being able to serve customers and get services working, and following security protocols in a service environment. It already takes 30 to 60 minutes to sign up a standard mobile contract. This pushes everyone’s limit of patience and perhaps explains, if not excuses, security lapses - especially with highish staff turnover.

    So while VFA have potentially been lax (and I would be unsurprised if this sort of laxity is practised across the board in any organisation outside of a security firm), they are limited by legal requirements and sheer practicality in getting customers serviced as quickly as possible.

  • 23
    Malcolm Young
    Posted Tuesday, 11 January 2011 at 1:38 pm | Permalink

    @Shane Murphy:

    Hi Shane,

    I don’t know a single thing about the telco industry but I’m not sure I agree with you on your first point. As I stated in my comment, you don’t need to know a PIN in order to verify it - the customer tells it to you, you type it in, it’s hashed and compared against the hash in the database. If they match, it’s the right PIN - all done without having to store the actual PIN itself.

    As for your other points, I really can’t comment, however my experience is that there is very often tension between getting business and security requirements balanced and it is often much more difficult than it may first appear.

    Cheers,

    mal
