UK hospital meltdown after ransomware worm uses NSA vuln to raid IT

Docs use pen and paper after computers scrambled amid global outbreak

Source: UK hospital meltdown after ransomware worm uses NSA vuln to raid IT

<<UK hospitals have effectively shut down and are turning away non-emergency patients after ransomware ransacked its networks.

Some 16 NHS organizations across Blighty – including several hospital trusts such as NHS Mid-Essex CCG and East and North Hertfordshire – have had their files scrambled by a variant of the WannaCrypt, aka WanaCrypt aka Wcry, nasty. Users are told to cough up $300 in Bitcoin to restore their documents.

Doctors have been reduced to using pen and paper, and closing A&E to non-critical patients, amid the tech blackout. Ambulances have been redirected to other hospitals, and operations canceled.>>

A suitable Disaster Recovery plan applying field-proven MPO methods is called for in case your IT is struck by fallout from the NSA vulnerabilities and tool collection that hackers dumped into the open after failing to find solvent buyers.

The missing OPSEC in US cyber-activities has taken its toll, and we are well advised to protect our systems, with a good measure of security built into disaster recovery. If you feel you could do with a suitable DR and incident response plan, and are an NHS entity, call

+44.1617 38 1243 

or ask us for a risk- and cost-free telephone audit by one of our executive directors, followed by a workshop and on-site PoC on demand. Our DR/IR solution can save valuable time after rogue patches, ransomware or all kinds of nasty cyber-infections.



The Ransomware-Attack Has Hit Us! Now What?

Applying IT Disaster Recovery/Business Continuity


During the development of security policies for an organization that was at the beginning of its journey towards safeguarding its own (and its customers’) data, we were never asked the most obvious question: what do we actually do when a disaster strikes? Everyone was so focused on determining the inputs that the obvious thought never surfaced. Having a great policy is all good and well, but we need to get cracking at implementing technical solutions that help us act when the unthinkable really occurs. Based on the policy, strategies to act in accordance with it emerge. And sometimes, as in this case, new solutions to the issues at hand are discovered.


Source: Carnegie Mellon[1]



A Business Continuity Plan (BCP) is how an organization guards against future disasters that could endanger its long-term health or the accomplishment of its primary mission. The primary objective of a Disaster Recovery Plan (often used interchangeably with the BCP) is to describe how an organization should deal with potential natural or human-induced disasters. The disaster recovery plan steps that every enterprise incorporates as part of business management include the guidelines and procedures for responding to and recovering from disaster scenarios that adversely impact information systems and business operations. Well-constructed and well-implemented plan steps enable organizations to minimize the effects of a disaster and resume mission-critical functions quickly.[2]

According to NIST, the DRP only addresses information system disruptions that require relocation[3]. For this short analysis, we will treat the two terms as synonymous; it is not always necessary (or possible) to invest in an alternative location such as a container data center.

Businesses should develop an IT disaster recovery plan. It begins by compiling an inventory of hardware (e.g. servers, desktops, laptops and wireless devices), software applications and data. Unfortunately, inventories pose an issue more often than not: keeping an asset list complete and up to date is rarely supported in the way a disaster recovery plan requires. Most tools on the market support only a limited number of operating systems, and non-smart assets mean tedious manual work. The toolset R&P offers, underpinning our field-proven managed/military project office (MPO), is operating-system agnostic and provides real-time information on hardware assets, software assets (all operating systems), firmware assets, BIOS/EFI, router/switch configs, printer-queue/print-server configuration and, as its newest addition, also the firmware versions of displays and further non-smart assets.
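As a rough illustration of the inventory idea above, here is a minimal in-memory asset store in Python. This is a sketch under our own assumptions (the record fields and names are invented for the example), not the actual R&P toolset, which collects this data live from agents:

```python
# Illustrative asset-inventory model: records for hardware, software and
# firmware assets, with a helper to extract the critical subset that a
# DR plan is built around. Names and fields are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    kind: str                  # "hardware", "software", "firmware", ...
    version: str = "unknown"
    critical: bool = False

@dataclass
class Inventory:
    assets: list = field(default_factory=list)

    def add(self, asset: Asset) -> None:
        self.assets.append(asset)

    def critical_assets(self) -> list:
        # The DR plan starts from the critical subset of the inventory.
        return [a for a in self.assets if a.critical]

inv = Inventory()
inv.add(Asset("erp-db01", "hardware", critical=True))
inv.add(Asset("office-suite", "software", "5.2"))
inv.add(Asset("core-switch", "firmware", "12.1(4)", critical=True))

print(len(inv.critical_assets()))  # 2 critical assets to prioritize
```

In practice the records would be populated by automated discovery rather than by hand; the point is that prioritization queries become trivial once the inventory is complete.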

From this inventory, it is fairly easy to identify the critical software applications and data, and the hardware required to run them. Using standardized hardware helps replicate and re-image new machines. Ensure that copies of program software are available to enable re-installation on replacement equipment, and prioritize hardware and software restoration[4].


Phases of building a BC- or DR Plan


Phase I – Data Collection

– The project should be organized with a timeline, resources, and expected outputs

– The business impact analysis should be conducted at regular intervals

– A risk assessment should be conducted regularly

– Onsite and Offsite Backup and Recovery procedures should be reviewed regarding suitability and performance

– Alternate site locations (if any) must be selected and ready for use


Phase II – Plan Development and Testing

– Develop the Disaster Recovery Plan (DRP)

– Test the plan (regularly)


Phase III – Monitoring and Maintenance

– Maintenance of the plan by way of updates and regular reviews

– Periodic inspection or audit of DRP

– Documentation of any changes
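The three phases above lend themselves to a simple progress tracker, sketched below in Python. The phase and task wording follows the lists above; the tracker itself is purely illustrative:

```python
# The DR-plan phases expressed as an ordered task list, with a helper
# that returns the first task not yet completed, so plan-building
# progress can be audited at any point.
PHASES = {
    "I. Data Collection": [
        "organize project (timeline, resources, output)",
        "conduct business impact analysis",
        "conduct risk assessment",
        "review onsite/offsite backup and recovery procedures",
        "select and prepare alternate site locations",
    ],
    "II. Plan Development and Testing": [
        "develop the DRP",
        "test the plan",
    ],
    "III. Monitoring and Maintenance": [
        "update and review the plan",
        "inspect/audit the DRP periodically",
        "document changes",
    ],
}

def next_open_task(done: set) -> tuple:
    """Return (phase, task) of the first task not yet completed."""
    for phase, tasks in PHASES.items():
        for task in tasks:
            if (phase, task) not in done:
                return phase, task
    return ("done", "")

done = {("I. Data Collection", "organize project (timeline, resources, output)")}
print(next_open_task(done))
```

Note that Phase III tasks recur: in a real tracker they would be re-opened on a schedule rather than ticked off once.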


There is, of course, a need to give staff the necessary information about the plans and to train them on it; otherwise, staff cannot follow the procedures once a critical situation hits.


Disaster Recovery Plan Criteria

Documentation of the procedures should be maintained: declaring an emergency, evacuating the site depending on the nature of the disaster, activating backups, notifying the relevant officials/DR team/staff, the procedures to be followed when disaster breaks out, and alternate location specifications. It is beneficial to prepare sample DRPs and disaster recovery examples in advance so that every individual in the organization is better educated on the basics. Most IT-based organizations have a workable business continuity planning template or scenario plans available to train employees in the procedures to be carried out in the event of a catastrophe[5].

Recovery strategies should be developed for information technology (IT) systems, applications and data. This includes networks, servers, desktops, laptops, wireless devices, data and connectivity. Priorities for IT recovery should be consistent with the priorities for recovery of business functions and processes[6].

Downtime can be identified in several ways[7].




The longer a disruption is allowed to continue, the more costly it becomes to the organization and its operations. Conversely, the shorter the return time to operations, the more expensive the recovery solution is to implement[8].
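This trade-off can be sketched as a toy cost model: downtime cost grows with the allowed outage, while solutions with a shorter recovery time objective (RTO) cost more. All figures below are illustrative assumptions, not real pricing; the shape of the curves mirrors the cost-balancing idea in NIST SP 800-34:

```python
# Toy cost-balance model: find the RTO where rising downtime cost and
# falling solution cost give the cheapest total. Numbers are invented.

def downtime_cost(hours: float, cost_per_hour: float = 10_000) -> float:
    return hours * cost_per_hour

def solution_cost(rto_hours: float, base: float = 200_000) -> float:
    # Assumption: halving the RTO roughly doubles the solution cost.
    return base / max(rto_hours, 0.25)

def total_cost(rto_hours: float) -> float:
    return downtime_cost(rto_hours) + solution_cost(rto_hours)

candidates = [0.5, 1, 2, 4, 8, 24, 72]
best = min(candidates, key=total_cost)
print(best)  # the candidate RTO where the two cost curves balance
```

The optimum shifts, of course, with the organization's actual hourly loss and solution pricing; the model only shows why "as fast as possible" is not automatically the cheapest target.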






(Note that R&P excels at reducing the cost of systems recovery.)

IT Recovery Strategies

Information technology systems require hardware, software, data and connectivity. Without one component of the “system,” the system may not run. Therefore, recovery strategies should be developed to anticipate the loss of one or more of the following system components:

– Computer room environment (secure computer room with climate control, conditioned and backup power supply, etc.)

– Hardware (networks, servers, desktop and laptop computers, wireless devices and peripherals)

– Connectivity to a service provider (fiber, cable, wireless, etc.)

– Software applications (electronic data interchange, electronic mail, enterprise resource management, office productivity, etc.)

– Data and restoration[9]
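The point that "without one component, the system may not run" is simply an AND over all components, which the following sketch makes explicit (component names are taken from the list above; the function is illustrative):

```python
# A service is available only when every component it depends on is
# available: one failed dependency takes the whole service down.
COMPONENTS = ["computer room", "hardware", "connectivity", "software", "data"]

def service_available(status: dict) -> bool:
    # Default to unavailable for any component we have no status for.
    return all(status.get(c, False) for c in COMPONENTS)

status = {c: True for c in COMPONENTS}
print(service_available(status))   # True
status["connectivity"] = False     # e.g. the provider link goes down
print(service_available(status))   # False
```

This is exactly why a recovery strategy must cover every component class, not just servers and data.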


Impact Analysis

The impact analysis should identify the operational and financial impacts resulting from the disruption of business functions and processes. Impacts to consider include:

  • Lost sales and income
  • Delayed sales or income
  • Increased expenses (e.g., overtime labor, outsourcing, expediting costs, etc.)
  • Regulatory fines
  • Contractual penalties or loss of contractual bonuses
  • Customer dissatisfaction or defection
  • Delay of new business plans

in the case of corporate businesses, and in similar ways for public services.[10]
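One common way to work with the impact categories above is to score each one per elapsed-time window and compare totals. The sketch below does this on a simple 0–3 scale; the categories come from the list, while the scale and sample scores are illustrative assumptions:

```python
# Hedged BIA scoring sketch: sum per-category impact scores
# (0 = none ... 3 = severe) for a given outage duration.
IMPACTS = [
    "lost sales and income",
    "delayed sales or income",
    "increased expenses",
    "regulatory fines",
    "contractual penalties",
    "customer dissatisfaction or defection",
    "delay of new business plans",
]

def total_impact(scores: dict) -> int:
    """Sum the scores, treating unscored categories as zero."""
    return sum(scores.get(c, 0) for c in IMPACTS)

after_4_hours = {"increased expenses": 1, "customer dissatisfaction or defection": 1}
after_2_days = {c: 2 for c in IMPACTS} | {"regulatory fines": 3}
print(total_impact(after_4_hours), total_impact(after_2_days))
```

Comparing the totals across time windows is what turns the impact list into recovery priorities: the faster the total climbs, the tighter the recovery objective for the affected function.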


Testing and Maintenance

The dates of testing, the disaster recovery scenarios, and the plans for each scenario should be documented. Maintenance involves keeping records of scheduled reviews on a daily, weekly, monthly, quarterly or yearly basis; reviewing plans, teams, activities and tasks accomplished; and completing documentation review and updates.


In case of an incident

These are the recommended steps in case any incident happens, be it a hacking attack or another malevolent cyber-incident (e.g. ransomware hitting the organization), malfunctioning software or operating-system updates, or faulty firmware, BIOS or software patches:

– Identification

– Containment

– Eradication (a good example of actions performed during the eradication phase is using the R&P-provided toolset, which allows for individual, end-to-end recovery of each complete system). Professional services close the attack vectors, but at this point it is of the essence not to lose time on forensic or analytical work. If necessary, the R&P tools can clone affected systems for later analysis.

– Recovery (bring affected systems back into the production environment carefully, to ensure this does not lead to another incident. It is essential to test, monitor, and validate the systems being put back into production to verify that they are not being re-infected by malware or compromised by some other means.)

– Lessons Learnt[11] (this is the documentation task everyone hates, but it is essential for future reference)



This checklist helps to make sure all boxes are ticked in case the incident hits you:

– Stop the attack in progress.

– Cut off the attack vector.

– Assemble the response team.

– Isolate affected instances.

– Identify timeline of attack.

– Identify compromised data.

– Assess risk to other systems.

– Assess risk of re-attack.

– Apply additional mitigations, additions to monitoring, etc.

– Forensic analysis of compromised systems.

– Internal communication.

– Involve law enforcement (if you are not law enforcement yourselves).

– Reach out to external parties that may have been used as vector for attack.

– External communication.
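The checklist above can be kept machine-checkable, so "all boxes ticked" is verifiable during an incident review. Here is a small sketch (the item wording follows the list; the tracker class itself is hypothetical):

```python
# Tiny incident-response checklist tracker: tick items as they are
# done and query what is still open.
CHECKLIST = [
    "stop the attack in progress",
    "cut off the attack vector",
    "assemble the response team",
    "isolate affected instances",
    "identify timeline of attack",
    "identify compromised data",
    "assess risk to other systems",
    "assess risk of re-attack",
    "apply additional mitigations and monitoring",
    "forensic analysis of compromised systems",
    "internal communication",
    "involve law enforcement",
    "reach out to external parties used as attack vector",
    "external communication",
]

class IncidentChecklist:
    def __init__(self):
        self.done = set()

    def tick(self, item: str) -> None:
        if item not in CHECKLIST:
            raise ValueError(f"unknown checklist item: {item}")
        self.done.add(item)

    def open_items(self) -> list:
        return [i for i in CHECKLIST if i not in self.done]

c = IncidentChecklist()
c.tick("stop the attack in progress")
c.tick("cut off the attack vector")
print(len(c.open_items()))  # 12 items still open
```

Rejecting unknown items is deliberate: during an incident, improvised entries tend to hide the fact that a mandated step was skipped.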


Getting rid of assumptions as a winning strategy

Summarizing, here are the five major points to consider:

  1. Repetitive probing and repeated tests of IT security deliver facts and figures instead of a false feeling of safety.
  2. Generally speaking, the lead time to recovery of any of your configuration items (CIs) is the best possible recovery time. Any company can be out of business quickly if it is incapable of returning to an operational state; if Deutsche Bank is not operational for one day, it is their doomsday. Security tests will deliver unpleasant facts about IT assets formerly deemed safe. Take 20 minutes to return to normal as a goal.
  3. Companies lose customers through vanished trust in their capabilities (e.g. repeated outages) or inability to adapt. Public services sometimes have even more critical usage and depend on minutes. Drawing on R&P's public sector/HLS experience is not a bad idea.
  4. The shortcut to disaster recovery is to implement a proper DR capability already in the early planning phase.
  5. The second-best strategy is to lose no time in reviewing the existing IT infrastructure and enhancing it by applying the R&P MPO toolset.



Roth & Partners have significant experience in the above five topics and the capability to support IT experts globally in their challenge to enhance IT security systems. Your advantage: give us a bell at one of our centers or write us an email:


Microsoft SCCM as the ride a malicious hitchhiker would love

Using a Microsoft tool for software and patch distribution can't be wrong, can it? I have tried for a while to come up with a couple of good reasons to use SCCM, but after this mental exercise I just feel less inclined to recommend it.

The Redmond company has invested many a dollar in making randomness part of our operating systems. As a test, start off with three computers of the same build and model, identically configured, and install the OS on them at the same time, only to see the automated (non-SCCM-driven) update process take a different turn on each of them – a wonderful proof of Microsoft's randomness enhancing our dull lives with the IT equivalent of a one-way, dead-end street ever so often.

The same company brings us a wonderful patch- and release-management tool:

“SCCM is a platform that allows for an enterprise to package and deploy operating systems, software, and software updates. It allows for IT staff to script and push out installations to clients in an automated manner. You can kind of think of it as a customizable, scriptable WSUS.” – Matt Nelson, Veris Group

SCCM operates in a client-server architecture by installing an agent on workstations/servers that checks in periodically with its assigned site server for updates. On the backend, SCCM uses a SQL database and WMI classes to store and access various types of information.

In a perfect world, such an institution would increase the health, security and availability of our systems.

As it turns out, the world we live in is not exactly what we would call perfect. Not even close. This is where the randomness becomes an additional hindrance to reaching the level of health, security and availability we desire for our systems – especially when we are not dealing with a red team in a Red Team engagement, but with real organization-data predators. We replace the update randomness with an even more worrisome vulnerability.

In short this is the risk: “If you can gain access to SCCM, it makes for a great attack platform. It heavily integrates Windows PowerShell, has excellent network visibility, and has a number of SCCM clients as SYSTEM just waiting to execute your code as SYSTEM.”

Think about what a malicious actor in an IT service organization could do after getting access to SCCM. Think about inserting payloads. Check out PowerShellEmpire for more information.

For any attacker who has gained domain admin on a network, any centralized administration software that is part of the Microsoft universe and actively participates in it is like finding the pot of gold at the end of the rainbow. An attacker can find nodes and map out the network, see where users spend most of their time, and find critical assets.

So it didn't take long before someone came up with smart ideas on how to deploy payloads and manipulate assets using SCCM. In order to interact with SCCM, you need administrative rights on the SCCM server; however, you do not need domain administrative rights on the network. SCCM is a cornerstone for an attacker staying under the radar in a Red Team situation, e.g. to persist in the network after elevated access is achieved. PowerSCCM is one of the nifty developments Red Team members came up with (in this case, check out enigma0x3 – you might actually find more than you expected).

It takes a few steps to deploy malicious packages/scripts to clients through SCCM, and offensive manipulation/deployment is currently only supported through WMI SCCM sessions. SCCM deployments need three parts: a user/device collection of targets, a malicious application to deploy, and a deployment that binds the two together.
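To make the three parts easier to reason about, here is a plain-data model of them. This is deliberately not real SCCM/WMI or PowerSCCM code – the class names, fields and payload string are all invented for illustration:

```python
# The three SCCM deployment parts as plain data: a target collection,
# an application, and a deployment binding the two. Purely a thinking
# aid, not an SCCM API.
from dataclasses import dataclass

@dataclass(frozen=True)
class DeviceCollection:
    name: str
    members: tuple          # hostnames targeted by the deployment

@dataclass(frozen=True)
class Application:
    name: str
    install_command: str    # in an attack, this is what the actor controls

@dataclass(frozen=True)
class Deployment:
    collection: DeviceCollection
    application: Application

targets = DeviceCollection("All Workstations", ("ws01", "ws02"))
payload = Application("Updater", "<attacker-controlled command>")
rollout = Deployment(targets, payload)

# Every member of the collection would run the install command as SYSTEM.
print(len(rollout.collection.members))
```

The model makes the risk plain: whoever can create the `Application` and bind it to a broad `DeviceCollection` gets code execution as SYSTEM on every member at once.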

The easiest countermeasure is to avoid SCCM in critical environments completely and instead opt for a tool that does not participate in the wonderful, rainbow-colored world of Microsoft for patch, release and disaster-recovery management. By doing so, you might find that you are also saving quite a bit of project-management time and cost.

Big Data, IoT and Healthcare: Part II

In the first part of this blog series, I covered how the healthcare industry can benefit from IoT and Big Data. In this week’s post I’ll take a more in-depth look into the potential security issues associated with Big Data and what the future holds for healthcare IoT.

From cars and hotels to consumer goods like lightbulbs and watches, there is a growing network of everyday objects connected to the Internet. These sensors and devices generate nonstop streams of data that can improve personal and professional lives in many ways. However, with billions of data-generating devices being used and installed around the globe, privacy and data protection are becoming a growing concern.

Recent attacks by cybercriminals within healthcare sectors demonstrate that companies cannot ignore potential threats in their design or decision making processes. Just last year, as many as 80 million customers of Anthem, the United States’ second-largest health insurance company, had their account information stolen. The hackers gained access to Anthem’s computer system and stole names, birthdays, medical IDs, Social Security numbers, street addresses, e-mail addresses and more. This was the second major breach for the company.

It’s evident that there is a growing need to find a way to effectively manage privacy and security in Big Data. While there are innovative and accessible analyst tools such as MS Polybase and Impala, one of the challenges is hiring and retaining qualified data analysts.  Another challenge is the exponential growth of Big Data and the minimal structure, lack of standardization and lack of willingness to standardize.

So how do we address these challenges?  Businesses across all industries need an extensive platform that can manage both structured and unstructured data with security, consistency and credibility. A great example, and unexpected entrant into this niche market, is the SQL and Hadoop data warehouses offered by Microsoft.  These systems double-check validity, handle all types of data, and scale from terabytes to petabytes with real-time performance.

According to a new report, by 2020, the healthcare IoT market will reach $117 billion. Based on this report, one thing is clear: Aging and remote healthcare is going to be a demographic necessity rather than a mere opportunity. An example of where IoT/Big Data is making a difference is the innovative combination of connected healthcare devices and data sciences, such as fall detection alarms in elderly and home care situations.
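Fall detection of the kind mentioned above can, at its simplest, be a threshold on acceleration magnitude from a wearable. The sketch below is a deliberately naive illustration – the threshold and sample readings are invented, and real products use far more robust models:

```python
# Naive fall-detection idea: flag a fall when the accelerometer
# magnitude spikes past a threshold (readings in g).
import math

def magnitude(sample: tuple) -> float:
    return math.sqrt(sum(v * v for v in sample))

def detect_fall(samples: list, threshold_g: float = 2.5) -> bool:
    # samples: (x, y, z) accelerometer readings
    return any(magnitude(s) > threshold_g for s in samples)

walking = [(0.1, 0.2, 1.0), (0.0, 0.1, 1.1)]
fall = walking + [(1.8, 2.1, 1.5)]  # sharp spike on impact
print(detect_fall(walking), detect_fall(fall))  # False True
```

Even this toy version shows where the data-science work lies: choosing thresholds (or models) that separate falls from everyday movement without flooding carers with false alarms.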

As this trend continues, different sectors of the industry will merge and work together in order to deliver advanced digital solutions and embedded devices to the healthcare industry. As Big Data within healthcare IoT continues to grow, it will also lead to more and more job opportunities for developers with a knack for data science, data mining and/or data warehousing. With this need comes an opportunity for new businesses to emerge.

IoT and Big Data represent part of the fourth industrial revolution: The first being steam and mechanical production, the second division of labor, the third electronics and IT and the fourth being interconnected cyber-physical systems.

What does this mean for the healthcare sector? Recently, a company equipped its employees with Fitbit wearables and mined the data delivered by the devices. From this experiment, they learned it was possible to reduce insurance premiums by $300k. By using predictive data from sensors and interconnected devices, GPs, hospitals, national health services and the pharmaceutical industry can create meaningful programs that shape the way patients are treated for years to come.

First published here:

Why we endorse Antsle

Once in a while, things happen that have incredible potential, and a couple of months down the road, everyone thinks “I could have had this idea!”. Antsle is one of them.

Looking like a smallish box, it is actually a blue ocean. It is oriented towards the creative maker-power of developers rather than end-users in the first instance (despite being easy enough to satisfy them, too). Don't confuse it with a storage device carrying the misleading name of “cloud”; this is something from a different galaxy.

Just thinking about the endless opportunities this sweet little box, which serves as your home-based internet node and server, brings makes my mouth water. A couple of them come to mind immediately:

Create a firewall for your IoT device collection

With the advent of all kinds of “smart” things populating our living rooms, from wearables to big-brother TV sets, we are all fair game for the predator-like data collectors. Why not have a control unit that sits on your own cloud server at home, in your basement or attic, and acts as a firewall for the IoT units that might send out tons of things you don't want them to?

Hey, if all the flurry.coms of this world permanently want your information, why the heck don't they pay you for your data? They could cover every other installment on the new TV set, couldn't they?

At least while your mobile is logged into your home network, you might cut the wire to these data-sucking hoovers that couldn't care less about your privacy.
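The firewall idea sketched above boils down to default-deny egress filtering per device. Here is a minimal illustration in Python; the device names and destinations are made up, and a real home node would enforce this at the packet or DNS level rather than in application code:

```python
# Default-deny egress allowlist: an IoT device may only talk to the
# destinations explicitly listed for it.
ALLOWLIST = {
    "smart-tv": {"updates.vendor.example"},
    "thermostat": {"api.vendor.example", "time.nist.gov"},
}

def allow_egress(device: str, destination: str) -> bool:
    # Unknown devices and unlisted destinations are both blocked.
    return destination in ALLOWLIST.get(device, set())

print(allow_egress("thermostat", "api.vendor.example"))  # True
print(allow_egress("smart-tv", "tracker.adnet.example")) # False
```

Default-deny is the key design choice: anything the device tries to phone home to that you did not explicitly approve simply goes nowhere.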

Do your own home-automation locally without sharing stuff

A creative developer might conclude that home automation is a great idea, and keeping an eye on the home via a security cam as well, without wishing to “share” all this information with potential hackers and with manufacturers, Apple, Samsung, Sony and all kinds of interested third parties, including, of course, potential burglars.

There are pretty clever approaches on the market, already enabled and integrated into a lot of connected devices.

However, all this assumes a limitless level of trust between the user and the manufacturers, as well as the cloud operators involved.

As long as the Agile development cycle is not yet the standard, this trust is not justified at all. We live in a world where the “Hello Barbie” might silently share kids' secrets with third parties, without parents having any control over what leaves the home.

So why not put the gearbox for your home automation and connected life into your own hands? For a maker-creator, the Antsle is a wonderful platform.

These are just some examples of ideas that come to mind immediately between day and dream.

Not a server, but an ocean

What dreams may come, when the creative folks find this platform and make innovative use of it?

The antsle has the capability to be the gearbox between the SOHO office or home and the world: bypassing the vampires and vultures of the internet, sharing only what was meant to be shared in a conscious decision process, keeping earlier decisions in store, and making sure the home or office is connected not as an open system but as a controlled perimeter.

The use cases are myriad. Makers can count on many things that are easy to plug their ideas into and easy to realize. So…

Antsle is a platform that supports all kinds of small applications running independently from each other in a server-type environment, in a cool aluminium box without noise. It is the basis for developing loads of things that can be marketed in an ecosystem.

A good example of this kind of ecosystem is Sonos. They developed a meshed-WiFi music system that has since grown into an impressive ecosystem, thanks to its quality and absolute ease of use and administration. It has loyal followers (like the early Apple users) owing to a strong commitment to backwards compatibility and superior build quality. This is the basis for makers. This is the mindset of creators.

What’s missing?

Of course, there are always compromises when starting something new. It would be terrific if there were a forum or platform where like-minded developers could meet around the antsle and share ideas, get together, inspire each other, share solutions to puzzling questions and exchange open source code.

This could be something easily created on one of the new antsles, but of course, it's not there yet.

Also, while the antsle smart OS is a great idea and has a lot going for it, I would have loved to see the concept of Qubes OS taken into the commercial world. Joanna Rutkowska deserves to be lauded for an innovative idea, and it could be the fifth gear for a security antsle in the future… Folks, how about this for the next release? Maybe someone at antsle is interested in exploring this…

On the other hand, the OS choice is the creators' to make, and it makes a lot of sense in many respects – the team at antsle have researched the necessities down to which RAM to use (only ECC will do) and how to assure data integrity and safety (by mirroring SSDs). They have probably spent months analysing what would be the best basis. But we do have a lot of sympathy for Joanna.

On December 15th, crowdfunding for antsle will start. We will order our first antsle right away and contribute to the crowdfunding. We hope this spirit of innovation will inspire the developer community, and we are eager to see the next news about it.

(Antsle is THE solution for Autonomous Web Hosting! Own your data, run hundreds of VMs – all in one powerful and 100% silent home-based server.)

Go see: Autonomous Web Hosting – and get it while they are hot and new! Be one of the first and be one of the early flock!

People Hire Hackers To Get The Job Done – Business Insider


Just a few years ago, I overheard a conversation between a hacker and an interested party. This would-be client asked the hacker whether he would be able to hack into a Blackberry he had obtained via court order from a business partnership gone bad. I pondered the consequences of this, both legal and ethical. Obviously, the boundaries of morality are fuzzy these days, with everyone and their uncle spying on anyone else if they can, and no one in the services and governments having a bad conscience about asking for a golden key in encryption, or even making encryption unlawful.

“Hacking is so mainstream nowadays that even the most tech-illiterate person can break into his boss’ email address.

While everyone’s focusing on major hacking scandals like Sony and Nasdaq, there’s also a flourishing “cottage industry” of people hiring hackers for espionage — basically like TaskRabbit for petty espionage.

“A new website, called Hacker’s List, seeks to match hackers with people looking to gain access to email accounts, take down unflattering photos from a website or gain access to a company’s database,” The New York Times reports. “In less than three months of operation, over 500 hacking jobs have been put out to bid on the site, with hackers vying for the right to do the dirty work.”

Hackers do their thing anonymously, while the website holds payments in escrow until the tasks are completed.”

as Business Insider writes.

This leads to a number of considerations. Is mankind indeed just a bunch of snoopy leeches? What happened to laws and legality? Why do we feel entitled to look into the data of all kinds of people, while we would probably not admit to reading our daughter’s diary – and even condemn anyone doing it?

With remuneration for these supposedly illegal activities going through the roof, one cannot help but wonder what will happen when all the disgruntled or greedy intelligence-services employees hit the street. There is no such thing as an ethical hacker; there are just hackers. Ethics rarely comes into play, with very few exceptions to the rule, and the setting we are looking at today does not make the choice easier for anyone gifted in that area.

Surveillance in Oslo Government Quarter shows inadequate mindset to issues

Heise and Aftenposten both report on IMSI catchers that have been detected in Norway's government quarter. Given that anybody with a few thousand euros in hand can build such an IMSI catcher (which is capable of catching much more than just the IMSI, the unique mobile identification number of a cellphone – it can actually serve as a listening device for conversations and tap streams of voice data), what we are seeing is not mere neglect. It is an inability to see attacks while they are happening and take adequate measures.

This inability takes two different shapes:

1. technical and

2. mindset

The technical one is obvious: localizing an attacker is particularly difficult, depending on how the attack is staged (active/passive), to name just one issue. The more problematic shape, however, is the problem of mindset.

First, it has to be assumed that all the institutions that care about IT or communications security (e.g. the BSI in Germany) are biased. They are not up to speed on current attacks, and they are technologically way behind the attacking parties. Second, they receive orders from their political leaders, who live in fear of the foreign three-letter agencies.

This mixture of fear and preemptive obedience prevents our national leaders from taking adequate measures when they are needed; hesitance and blocking of the needful are the most prevalent reactions, at least in nations that don't carry “United States of” in their names.

The TLAs and other criminal organizations are running circles around our national organs of safety, while even the slightest investment in means and methods is blocked.

And even if none of this were the case, the mindset of bureaucracy would step in and destroy any innovative creativity in getting something positive done for the nation.

So, as a result, the technical side is not an issue that couldn't be resolved. It is the mindset that is the bottleneck to finding solutions to the problem of how to catch the catchers (search & destroy, as it were).