
Ethical No Brainer

We are in the security business and making money off of vulnerabilities is what we do.  We search for vulnerabilities, we patch vulnerabilities, we implement workarounds for those that have no patches and we work to protect our clients against known and even suspected vulnerabilities.  Vulnerabilities are what make us money.  Without vulnerabilities in software and hardware we would have a lot less work.  However, our goal is to protect our clients, not to exploit them for all that we can get, so when we find a vulnerability, the question of whether we report it to the manufacturer, or look to make as much money as we can off of it, is moot.  Not so for all security firms.

 

Gizmodo reporter Christina Warren called it ‘bats*** insane’ in her post when security firm MedSec decided to partner with an investment firm and sell a manufacturer's stock short before disclosing security vulnerabilities in the manufacturer's products. MedSec tested a variety of medical products by manufacturer St. Jude. MedSec found security failures, including a lack of encryption and the ability for other devices to communicate with pacemakers and defibrillators, which, MedSec claims, could allow anyone to tap into implanted devices and cause potentially fatal disruptions. Very bad. ‘Denial of Service’ takes on a new meaning when the service is your pacemaker. Instead of reporting the findings to St. Jude, the normal approach to vulnerability disclosure in medical technology, MedSec partnered with Muddy Waters Capital and shorted St. Jude stock before releasing the results of their findings to the public. MedSec's Chief Executive Officer Justine Bone said St. Jude's past record of ignoring warnings and the chance that it could sue MedSec to keep it quiet ‘precluded that approach’. MedSec and Muddy Waters cited a 2014 Homeland Security investigation into St. Jude and other device makers' cybersecurity, reported by Reuters, as a warning that St. Jude could have heeded but did not.

 

The CEO of MedSec defended her company's actions in an interview with Bloomberg, but didn't convince me one bit that their efforts to protect patient safety were genuine. Watch the interview yourself and you will see a lack of due diligence, how Bone (the CEO) skirts the issue of their lack of process, says “the public has a right to know…” yet did not report their findings to the public entity CERT, and then finally states the real reason for their tactic: “…we are looking to recover our costs…”.

 

There is a word for MedSec’s actions.

 

Shameful is the word I am thinking of. Unethical does not go far enough. Scurrilous might fit, bordering on despicable. MedSec didn't try very hard to contact the manufacturer or convince it of the seriousness of the findings, and then tried (and maybe succeeded) to turn a profit on the venture. St. Jude responded to specific findings, defending their products and noting that many of the test situations outlined by MedSec don't exist for devices in production. The MedSec findings report claimed that the pacemaker battery could be depleted remotely at a 50-foot range, but St. Jude responded that once the device is implanted into a patient, wireless communication has an approximate 7-foot range. To quote St. Jude, “In addition, in the described scenario it would require hundreds of hours of continuous and sustained “pings” within this distance. To put it plainly, a patient would need to remain immobile for days on end and the hacker would need to be within seven feet of the patient.”

 

In addition to its published response to the MedSec/Muddy Waters high-tech tactics, St. Jude also responded in a low-tech way: it filed suit. We'll see how this plays out in court.

Do You Trust Your Co-Worker?

He does look a bit shifty, but, more likely, he is clumsy rather than dishonest. A survey of 3,000+ employees and IT practitioners across the U.S. and Europe, conducted by the Ponemon Institute and sponsored by Varonis Systems, reported that three out of every four organizations have been hit by the loss or theft of important data over the past two years. This is an increase over 2014 and is due in large part to the compromise of insider accounts.

 

The survey reported that 76 percent of IT practitioners say their organization experienced the loss or theft of company data over the past two years, up from 67 percent in the 2014 version of the study. Respondents reported that insider negligence is more than twice as likely to cause the compromise of insider accounts as any other culprit, including external attackers, malicious employees or contractors. When a data breach occurs, 50 percent of IT respondents say negligent insiders are the most likely cause of the compromise.

 

Other things to worry about:

 

  • Outside attackers who compromise insider credentials worry 58 percent of IT respondents
  • 55 percent of respondents say insiders are negligent
  • 78 percent of IT respondents are extremely or very concerned about the threat of ransomware
  • 15 percent of the companies represented in this study have already experienced ransomware
  • 54 percent were able to detect an attack within 24 hours (good news!)

 

88 percent of respondents say their jobs require them to access and use proprietary information such as customer data, contact lists, employee records, financial reports, confidential business documents, software tools or other information assets. This is an increase from 76 percent of respondents in 2014. Apparently this is worrisome to those surveyed, as 62 percent of end users say they have too much access to confidential corporate data.

 

If you think that you have access to more data than you need, ask your sys admin to cut you off.  It’s like trimming your nails; keep pruning until it hurts a little, then stop.  We should also be warning our clients about giving too many permissions away to users who do not need them.  If we want to be truly proactive, we can run scripts that report which users are active and which ones have admin permissions, then review the results with the customer.
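If we want something concrete to start from, below is a minimal sketch of the kind of reporting script described above. It assumes an Active Directory environment reachable over LDAP and uses the third-party ldap3 Python library; the host name, base DN, admin group DN, credentials and the 90-day staleness threshold are all placeholders to adjust for the customer's environment.

```python
from datetime import datetime, timedelta, timezone

from ldap3 import ALL, SUBTREE, Connection, Server

# Placeholder values -- substitute the customer's real environment.
LDAP_HOST = "ldap://dc01.example.com"
BASE_DN = "DC=example,DC=com"
ADMIN_GROUP_DN = "CN=Domain Admins,CN=Users," + BASE_DN
STALE_AFTER = timedelta(days=90)


def filetime_to_datetime(filetime: int) -> datetime:
    """Convert an AD lastLogonTimestamp (100-ns ticks since 1601) to a datetime."""
    return datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(microseconds=filetime / 10)


def main() -> None:
    server = Server(LDAP_HOST, get_info=ALL)
    # Bind with a read-only audit account (credentials are placeholders).
    conn = Connection(server, user="EXAMPLE\\auditor", password="***", auto_bind=True)

    # Pull user accounts along with group membership and last-logon stamp.
    conn.search(
        search_base=BASE_DN,
        search_filter="(&(objectClass=user)(objectCategory=person))",
        search_scope=SUBTREE,
        attributes=["sAMAccountName", "memberOf", "lastLogonTimestamp"],
    )

    cutoff = datetime.now(timezone.utc) - STALE_AFTER
    for entry in conn.entries:
        attrs = entry.entry_attributes_as_dict
        name = (attrs.get("sAMAccountName") or ["<unknown>"])[0]
        groups = attrs.get("memberOf") or []
        stamps = attrs.get("lastLogonTimestamp") or []

        # ldap3 may return the timestamp already converted to a datetime; handle both.
        last_logon = None
        if stamps:
            raw = stamps[0]
            last_logon = raw if isinstance(raw, datetime) else filetime_to_datetime(int(raw))

        is_admin = ADMIN_GROUP_DN in groups
        is_stale = last_logon is None or last_logon < cutoff
        if is_admin or is_stale:
            print(f"{name:<20} admin={is_admin} last_logon={last_logon}")


if __name__ == "__main__":
    main()
```

Reviewing that output with the customer usually surfaces admin accounts nobody remembers granting.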

 

You can review the whole enchilada of survey results out in the wild here.

Every Other One

Eeny meeny miny moe, I'll steal your data then I'll go! A recent study by the Ponemon Institute reports that more than 50% of Small to Medium Businesses (SMBs), that's every other business, folks, were breached in the past year. The study was sponsored by password management provider Keeper Security and surveyed 600 IT leaders at businesses with between 100 and 1,000 employees. The study found the following:

 

  • Confidence in SMB security is low, with only 14% of the companies surveyed rating their ability to mitigate cyber-attacks as highly effective.
  • 50% of respondents reported that they had data breaches involving customer and employee information in the last 12 months.
  • Three out of four survey respondents reported that exploits have evaded their anti-virus solutions (social engineering?).
  • 59% of respondents say they have no visibility into employees’ password practices and hygiene.
  • 65% do not strictly enforce their documented password policies.
  • Insufficient personnel, budget and technologies are cited as the primary reasons for low confidence in cybersecurity.

 

The causes of most of the breaches were negligent employees and contractors, though for almost one third of the companies surveyed the root cause of their data breach was a mystery. What can we take away from this? Stress, re-stress, and repeat the basics to our clients:

 

  • Create and enforce a comprehensive employee decommissioning process so that credentials for former employees are disabled or removed as soon as the employee is no longer with the firm (see the sketch after this list).
  • Limit access for employees and contractors to only what they need and no more.  Stop handing out administrative rights left and right.  Right?
  • Set and enforce a comprehensive password policy that forces employees to change their passwords.  If forced to change passwords on a regular basis, employees will be less likely to use their Facebook password as their employee password.
  • Start and continually run a phishing awareness program.  The program has to be sustainable and continuous in order to be effective.
  • Separate company and guest wireless traffic.  Allowing guest devices onto the one and only wireless network is an invitation to share digital diseases with someone’s personal phone.
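To make the decommissioning point less abstract, here is a hedged sketch of one way to catch accounts that slip through the process: compare the HR department's list of current employees against the accounts that are still enabled in the directory. As with the earlier script, it assumes Active Directory and the third-party ldap3 library; the CSV file name, host, base DN and credentials are placeholders, and the actual disabling step is left as a printout so a human makes the final call.

```python
import csv

from ldap3 import ALL, SUBTREE, Connection, Server

# Placeholders -- point these at the customer's environment.
LDAP_HOST = "ldap://dc01.example.com"
BASE_DN = "DC=example,DC=com"
HR_ROSTER_CSV = "current_employees.csv"  # one column of sAMAccountNames, no header

# Standard AD filter for accounts that are NOT disabled
# (bitwise test on the ACCOUNTDISABLE flag, 0x2, of userAccountControl).
ENABLED_USERS_FILTER = (
    "(&(objectClass=user)(objectCategory=person)"
    "(!(userAccountControl:1.2.840.113556.1.4.803:=2)))"
)


def load_roster(path: str) -> set[str]:
    """Read the HR-provided list of current employee account names."""
    with open(path, newline="") as handle:
        return {row[0].strip().lower() for row in csv.reader(handle) if row}


def main() -> None:
    roster = load_roster(HR_ROSTER_CSV)

    server = Server(LDAP_HOST, get_info=ALL)
    conn = Connection(server, user="EXAMPLE\\auditor", password="***", auto_bind=True)
    conn.search(
        search_base=BASE_DN,
        search_filter=ENABLED_USERS_FILTER,
        search_scope=SUBTREE,
        attributes=["sAMAccountName", "distinguishedName"],
    )

    for entry in conn.entries:
        attrs = entry.entry_attributes_as_dict
        name = (attrs.get("sAMAccountName") or [""])[0]
        dn = (attrs.get("distinguishedName") or [""])[0]
        if name and name.lower() not in roster:
            # Candidate for decommissioning; disable or remove after human review.
            print(f"Still enabled but not on the roster: {name} ({dn})")


if __name__ == "__main__":
    main()
```

Tie a run of this comparison into the off-boarding workflow (or at least a recurring ticket) so it actually happens, rather than waiting for an audit to find the stragglers.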

 

More information on the Ponemon survey can be found here.

Cybercrime Overtakes Traditional Crime in the UK

The U.K.'s National Crime Agency (NCA) warned in its Cyber Crime Assessment 2016 that cybercrime is now more prevalent than other crime in the U.K. Cybercrime was reported to be the largest proportion of total crime in the U.K. in 2015, with “cyber enabled fraud” at 36% of all crime reported and “computer misuse” at 17% of all crime reported. One explanation for the growth of cybercrime is that tracking of cybercrime has improved: the U.K. Office for National Statistics started including cybercrime for the first time in 2015 in its annual Crime Survey for England and Wales.

 

Per the NCA’s report:

 

“The ONS estimated that there were 2.46 million cyber incidents and 2.11 million victims of cyber crime in the U.K. in 2015”

 

Another explanation for the growth of cyber fraud and computer misuse in the U.K. is convenience: much of the cybercrime comes out of Russia, which is only two time zones ahead of the U.K.

 

“Why would you want to stay up all night doing online fraud against banks in the U.S. when you'd rather be out drinking with your buddies?” says Avivah Litan, a fraud analyst with Gartner Inc.

 

Because Bad Actors like to have drinks with their friends too, I guess.  For more detailed information, please take a look at the write-up by the ever-prolific Brian Krebs here.

“Not OK, Google”

A group of researchers from Georgetown University and UC Berkeley have demonstrated how voice commands hidden in YouTube videos can be used by malicious attackers to compromise smartphones. The attack works against phones that have the Google Now or Apple Siri voice command feature turned on. The researchers demonstrated that obfuscated voice commands that sound unintelligible to human listeners can be embedded in videos and interpreted as commands by smartphones. The malicious video can come from any source that plays out loud within detection range of the smartphone; sources tested include a laptop, a computer, a smart TV, another smartphone, a tablet, and even a speakerphone, as demonstrated in this video. The attack will even work with background noise. The video demonstrates a mechanical voice reading written commands through a speakerphone ten feet from a phone that has the “OK, Google” voice command feature enabled. More details about the attack and possible defenses can be found in this paper, and more attack demos can be found on this site. Even more information can be found in this article here by Help Net Security, and still more from our Sophos friends here.

 

You can turn off the “OK, Google” feature by following these steps:

  • Open the Google app.
  • In the top left corner of the page, touch the Menu icon.
  • Tap Settings > Voice > “OK Google” Detection.
  • From here, you can stop your phone from listening when you say “OK Google”.

 

You can disable Siri using these steps:

  • Open the Settings app in iOS and go to “General”.
  • Tap on “Siri” and, near the top of the screen, toggle the switch next to “Siri” to the OFF position.
  • Confirm that you wish to disable Siri completely by tapping on “Turn Off Siri”.
  • Exit out of Settings.

 

These features come automatically enabled on most smartphones, so please check yours and warn your clients.