Reducing the cost of patch management

In security as in "business as usual" operations, patch management is a financial drain. IT OpEx has many hidden slings and arrows, and patch management (with its bigger sibling, release management) is one of them. Whenever a vulnerability is detected in any piece of deployed software, which nowadays happens often, the recommendation is to roll out the patches as quickly as possible.

In a private environment, following the simple instructions is a piece of cake: patches are detected automatically while connected to the internet, downloaded silently or after a prompt, and installed with a few clicks and a relaunch. Not so in regulated, secured or simply sizeable environments, where controlled patch management can cause considerable difficulty in the organization, due to interdependencies between security, network and other concerns. If, for example, a hospital or a military organization suffers waves of blue screens because of an incompatibility in any of these areas, employees suffer, valuable time is lost, responsiveness disappears and overall opportunity costs skyrocket. These damages are hard to calculate, but keenly felt by the organization and by the budget drained by unallocated costs.

Taking input from here, we can frame the calculation as follows:

“The cost of any administrative process to a business consists of the following components:

  • The human resource cost (unit time cost of employees committed to the process multiplied by their number) – H
  • The frequency of the process (how often it is executed) – f
  • The time required to execute the process (how long it takes to fully complete all tasks associated with the process, including dealing with failures and retries) – T
  • The scope of the process (how many people/applications/systems are impacted by the process) – S
  • The lost opportunity cost (a reflection of the value generated if the resources consumed and impacted by this administrative process were reallocated to a service) – O

Reducing any of these components will result in a reduction of the total cost of the process and our simplistic model can be expressed mathematically as:

Total Cost = (f x H x T x S) + O

Our model is far from precise; however, it is a suitable starting point for identifying and quantifying ways of reducing the costs of administrative processes. For example, let’s look at how CTI impacts the cost of vulnerability and patch management. As we have noted, it can reduce the frequency, time and scope of the process (we’ll tackle lost opportunity later); however, it could potentially increase the human resource costs, at least in the short term, as we’ll need to hire and assemble a CTI team. Overall, we can clearly see the benefits of CTI from this simple analysis alone.”


Opportunity cost may push the total even higher, and it differs between sectors and organizations:

“At a minimum, we could consider the lost opportunity cost to be the amount the business would have gained by putting the capital allocated to the administrative process into a savings account. Under free market conditions this would normally yield a gain in line with the rate of interest, but for the sake of argument let’s call it 5%. So we can say the absolute minimum cost of an administrative process will be:

Total Cost = (f x H x T x S) * 1.05

However this is, in my opinion, a highly conservative estimate of the lost opportunity cost.”
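
To make the model concrete, here is a minimal sketch in Python. All input values and the CTI adjustment factors are placeholders chosen for illustration; they are assumptions, not figures from the text.

```python
# A minimal sketch of the cost model above. Every input value is an
# illustrative placeholder, not a benchmark from the text.
f = 12      # frequency: patch cycles per year
H = 800.0   # human resource cost per hour (unit time cost x headcount)
T = 6.0     # hours to fully execute one cycle, incl. failures and retries
S = 40      # scope: systems/applications impacted

process_cost = f * H * T * S          # (f x H x T x S)
minimum_total = process_cost * 1.05   # conservative floor with O at 5%

print(f"Process cost:       {process_cost:>14,.2f}")
print(f"Minimum total cost: {minimum_total:>14,.2f}")

# CTI should reduce f, T and S but may raise H while the team is built up.
# The adjustment factors below are assumptions for demonstration only.
cti_total = (0.8 * f) * (1.25 * H) * (0.7 * T) * (0.9 * S) * 1.05
print(f"Total with CTI:     {cti_total:>14,.2f}  (illustrative)")
```

Rerunning the sketch with your own organization's numbers shows quickly which of the four factors offers the biggest lever.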

Roth & Partners has at its disposal a tool, developed by a partner with many years of experience in the area, and can apply its valuable details to your organization on demand. There are also excellent tools to underpin a “managed program office” (as opposed to the classical PMO approach, which does not use the military mission planning that R&P applies).


Some benefits of a sensible patch and release management process, drawn from a practical example, are summarized in the figure below. The resulting increase in safety and stability and the reduction in effort and cost are in line with the formula above; the OpEx savings can exceed expectations.

[Figure: benefits of sensible patch and release management in practice]

E-Commerce Migration? Get ready for a headache!

Migrating your shop to a different platform? Get ready for headaches.

The net is full of complaints, always and about every topic, but e-Commerce seems to attract more irritation than most. The reason is that a full comparison is rarely to be found (exceptions do exist, such as in Germany, though not for entirely altruistic reasons, as the publishers also play a systems integrator role).

In this overview, we will look at the issues, pitfalls and concerns when migrating an existing e-Commerce solution to a different platform.

First, let’s have a look at the options to choose from. Selection criteria include, just to name a few: open source versus commercial tools; sizing; and the existing application landscape, which may limit or predefine some decisions (e.g. order management, ERP, fulfillment or payment solutions). Your choice of geography requires capabilities in language localization and attention to the legal, tax and licensing situation (in China, for example, entering the B2C market requires a license obtained from local authorities). The targeted market also shapes your choice: omnichannel, B2B, B2C, B2B2C or, again in China, your online-to-offline capabilities. It is interesting to note that the Chinese domestic market has turned digital all the way, leapfrogging quite a number of steps that Europe and the US went through (and whose teething problems they are still enduring).

Typical issues in different locations are VAT rules and regulations; information required by local law, such as delivery times, shipment costs and prices (including or excluding sales taxes or VAT); and terms and conditions, which can vary fundamentally between locales. Supporting the laws of the chosen geography may cause extra effort or even rule out some solutions.

If you wish to support omnichannel, omni-currency and multi-location commerce, you will find yourself growing out of most community editions, and depending on the size of your market, you will need a professionally maintained and supported platform, either as SaaS or self-hosted. In such a situation, Magento EE or Intershop may appear to be the systems of choice.

The topmost players (in capabilities and cost, and unfortunately also in implementation time) offer, to name just a few items, deep analytic capabilities that could be called data mining solutions in their own right, plus price optimization, dynamic pricing across the board, and promotion and campaign optimization on a global scale. They also support complex loyalty program implementations, integrated with those offered by complementary providers, as well as multi-tenant and multi-brand setups. If this describes your requirements, you have arrived in the sweet spot of SAP Hybris, Intershop, Oracle ATG and IBM eCommerce solutions.

On the other end of the scale are an increasing number of flavors to savour:

  • XT:c
  • Oxid
  • woocommerce
  • spreecommerce (Ruby on Rails)
  • bigcartel
  • Squarespace
  • Tictail
  • Shopify
  • Supa Dupa
  • Airsquare
  • Enstore
  • Goodsie
  • Magento Go
  • Storenvy
  • Flying Cart
  • Virb
  • Jimdo (Germany, very small shops only)

and many more. The catch: sometimes you hit on an awesome solution that appeals to you, and only once you are committed do you realize it is a one-developer piece of software!

If you don’t intend to spend much time on IT-related issues, you might wish to slip into the open marketplaces that TMall/Taobao (in China) or Amazon (elsewhere) offer, but your branding capabilities will be fairly limited. You might also find the management of stock and prices limited; on the other hand, stocking, fulfillment, shipping and payment are taken care of easily.

There are more hosted solutions that give you a limited choice of individual design and branding, such as Zoho and comparable offerings.

Looking at the steps a migration calls for, we find a number of recurring issues that are annoying when not planned for, but can be easily managed.

First of all, during a discovery phase, identify all the requirements the new system will have to cover. The most important issues are the catalogue structure, the scope of data to be migrated and order fulfilment. This is also a good moment, early in the process, to take care of domain name issues before doing anything else. This requires a bit of thinking, because the switch should be as seamless as possible for returning customers and for the search engines. At some point, the search engines should point at your new domain (which could be the old domain, but then you have to plan accordingly how and when to do the switch). A period of parallel operation is recommended. Mind that there might be issues with the security certificate: the certificate that allows your customers to shop with SSL protection needs to match the sites. If your domain name changes, this is not a big deal; Letsencrypt offers certificates for free, whereas in earlier days you had to pay a kingdom and a half for an SSL certificate. Commercial certificates are certainly still available.
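
Since certificate trouble is easy to catch early, here is a minimal sketch (Python standard library only) that checks how long the TLS certificates on the old and new shop domains remain valid during the planned parallel operation; the domain names are placeholders.

```python
# Minimal sketch: report remaining certificate validity for each shop domain.
import socket
import ssl
from datetime import datetime, timezone

def cert_days_remaining(hostname: str, port: int = 443) -> int:
    """Return the number of days until the server certificate expires."""
    context = ssl.create_default_context()  # verifies chain and hostname
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    # notAfter looks like 'Jun  1 12:00:00 2026 GMT'
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return (expires.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days

for domain in ("old-shop.example.com", "new-shop.example.com"):  # placeholders
    print(domain, cert_days_remaining(domain), "days of certificate validity left")
```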

As a second step, check how the new platform integrates with the business processes and application landscape you have in place. If you use third-party vendors to provide your site with essential services, make sure they support your new platform in the same (or a better) way than your previous one, and if not, replace them. A sensible design document helps you cover these issues; alternatively, ask your systems integrator to provide one.

Moving the data is challenge number three, and it causes the most headaches. We often find it makes more sense to take a fresh look at things: numbering schemes for SKUs or invoices may not be in line with expectations, and complex combined products with options especially might end up as a myriad of single products instead of a neatly sorted product scheme with options to choose from.

This may indicate the need to correct items manually. Have prices stayed correct? Sometimes rounding errors change the listed prices.

In territories that levy different sales taxes on different classes of goods (e.g. Switzerland and Germany), the corresponding tax flags need checking after migration. Incorrect tax classes can cost a lot of time to fix, so better look twice!
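
A simple automated comparison catches most of these price and tax-class slips. Below is a minimal sketch assuming both platforms can export their catalogues to CSV with sku, price and tax_class columns; the file and column names are placeholders.

```python
# Minimal sketch: diff two catalogue exports for price and tax-class drift.
import csv
from decimal import Decimal

def load_catalogue(path: str) -> dict:
    with open(path, newline="", encoding="utf-8") as f:
        return {row["sku"]: (Decimal(row["price"]), row["tax_class"])
                for row in csv.DictReader(f)}

old = load_catalogue("legacy_export.csv")        # placeholder file names
new = load_catalogue("new_platform_export.csv")

for sku, (old_price, old_tax) in old.items():
    if sku not in new:
        print(f"{sku}: missing on new platform")
        continue
    new_price, new_tax = new[sku]
    if old_price != new_price:  # Decimal avoids float rounding noise
        print(f"{sku}: price changed {old_price} -> {new_price}")
    if old_tax != new_tax:
        print(f"{sku}: tax class changed {old_tax} -> {new_tax}")
```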

Graphical assets and their sizes need to be quality assured so that customers are not confronted with tiny or huge images.

By all means, avoid “404”s by checking the URLs in time. If your site had good organic traffic before, make sure the page URLs generated by the new platform stay the same where you can, keep the page descriptions identical and, where necessary, install redirects (301 for permanent moves, 302 for temporary ones). Prepare your SEO strategy thoroughly! Many stores experience a temporary drop in organic traffic following their migration, usually the result of lost meta information and changed URLs. Prepare your post-migration SEO strategy to minimize these temporary effects; if necessary, employ third parties for this task.
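
Redirects are easy to verify mechanically. Here is a minimal sketch using the third-party requests library that checks a hand-maintained map of legacy URLs for correct 301 responses; the URL pairs are placeholders.

```python
# Minimal sketch: verify each legacy URL answers 301 to the expected target.
import requests

redirect_map = {  # placeholder URL pairs
    "https://shop.example.com/old-category/widget-1": "https://shop.example.com/widgets/widget-1",
    "https://shop.example.com/about-us.html":         "https://shop.example.com/about",
}

for old_url, expected in redirect_map.items():
    resp = requests.get(old_url, allow_redirects=False, timeout=10)
    if resp.status_code != 301:
        print(f"{old_url}: expected 301, got {resp.status_code}")
    elif resp.headers.get("Location") != expected:
        print(f"{old_url}: redirects to {resp.headers.get('Location')}, expected {expected}")
```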

Also make sure that emails keep coming in regardless of which system you use. Testing the email flow should be a separate QA activity, as lost emails cause a lot of frustration.
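
A minimal smoke test of the outgoing mail path can be scripted as below; the sketch pushes one message through the shop’s SMTP relay, and the host, credentials and addresses are placeholders for your environment.

```python
# Minimal sketch: send one test message through the shop's SMTP relay.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "shop@example.com"          # placeholder sender
msg["To"] = "qa-inbox@example.com"        # placeholder QA inbox
msg["Subject"] = "Post-migration email smoke test"
msg.set_content("If this arrives, the order-confirmation mail path is alive.")

with smtplib.SMTP("smtp.example.com", 587, timeout=15) as smtp:  # placeholder host
    smtp.starttls()
    smtp.login("shop@example.com", "app-password")  # placeholder credentials
    smtp.send_message(msg)
print("Message accepted by relay; now verify delivery in the QA inbox.")
```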

Speaking of quality assurance: this is the most important step of all. All business transactions need to be thoroughly checked before the new site replaces the current one. Nothing stops you from keeping the old site up in parallel for about a month once all business transactions have been tested straight through. At some point it definitely makes sense to have a third party assess the site for all the vulnerabilities a good penetration tester can find (rinse, repeat often!). In most e-Commerce settings, wherever credit cards are accepted, the PCI DSS standard needs to be fulfilled to avoid hefty fines and risks later. Make sure your new platform handles this and is compliant; PCI DSS is now available in version 3.

The last step is the deployment; the cut-over is best done in the wee hours of the day.

After a while it makes sense to check whether the performance of the site is in line with expectations. Slow sites often stem not from performance issues in the hosting or the network, but from heavy use of additional scripts, bells and whistles. Online performance tests allow you to simulate accessing the service from a chosen geography.
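
As a rough stand-in for such a test, the sketch below measures response time and page weight for key shop pages using the requests library; run it from hosts in different regions to approximate a geographic comparison. The URLs are placeholders.

```python
# Minimal sketch: time first response and full download for key shop pages.
import time
import requests

for url in ("https://shop.example.com/",
            "https://shop.example.com/widgets/widget-1"):  # placeholders
    start = time.perf_counter()
    resp = requests.get(url, timeout=30)
    total = time.perf_counter() - start
    print(f"{url}: status {resp.status_code}, "
          f"{len(resp.content) / 1024:.0f} KiB in {total:.2f}s "
          f"({resp.elapsed.total_seconds():.2f}s to first response)")
```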

Some vendors offer automated tools for migration to their platform.

At a recent customer, we asked for feedback on the migration process and found the following points to be of interest:

The customer’s legacy platform was rigid and did not support many state-of-the-art e-Commerce features. They had also found a licensing issue that would have forced a budgetary change.

As the online e-Commerce presence was of central importance, they defined a phased strategy for the migration from legacy to new:

– Build up a framework covering around 70% of the legacy functions and features, and launch the store internally to collect feedback

– Improve the store and gradually route traffic from legacy to the new store

– Retire the legacy store and continuously improve the new store

The global retailer chose an “Agile” development method, which enabled them to implement the strategy above in a number of suitable sprints. It proved an efficient method: a still roughly defined idea could be converted into a prototype quickly, then fine-tuned and improved. This dramatically shortened the turnaround time of feature delivery; in fact, several times required features were delivered earlier than scheduled.

Choosing a strong and capable team with a “can do” attitude was another key success factor in delivering the project on time and with high quality. Shinetech was chosen, and is proud of this project, the customer’s trust and the project’s success.

Strong support from management gave the project team guidance and the backing needed to avoid any deviation from the business goals. In a global organization, communication among different levels and stakeholders is key; seamless and timely communication through all channels (emails, meetings, demos, newsletters etc.) ensured a smooth project implementation without surprises.

Last but not least, the chosen platform proved to be flexible and robust software that eased development complexity and enabled future scalability along with business growth.

Management buy-in is central to e-Commerce migration projects. In our weekly management meetings communication was open; critical and helpful feedback was given at all times and by stakeholders all across Asia. As a result, the system implementation was fast: the migration project finished after four months, system stability has been extremely good, and no critical issue has occurred since launch. Implementation and maintenance costs are reasonable.

This article has been published by my partner Shinetech here. Please give the site a visit!


This is an astonishing and highly fascinating blog. Recommended reading!

“The brain of each human being is completely unique. Its structure is highly influenced not only by our DNA but also by everything we experience in our lives. You can even find people with the same DNA, but a life history is something that cannot be duplicated.

So brain activity is a unique biometric that every person has, and such biometrics are used in access control systems where security is needed. Simply put, you can use your biometric as a password to gain access to resources protected from everyone except you. Brain activity has a lot of advantages over other biometrics traditionally used in access control systems (such as fingerprints). Here are the most important ones:

  • Brain activity is secure. With fingerprints, for example, we leave our biometric on every surface we touch, so anyone who wants to attack the system can collect and replicate it. The brain, on the contrary, is safely hidden inside a skull.

  • Brain activity is changeable. You cannot influence other biometrics: your iris pattern, DNA, heartbeat and fingerprints are determined by nature, and there is no easy way to modify them. If a system based on these biometrics is compromised once, it is compromised forever. Brain activity, conversely, can easily be changed by a simple thought.”

Source: engineuring

Big Data, IoT and Healthcare: Part II

In the first part of this blog series, I covered how the healthcare industry can benefit from IoT and Big Data. In this week’s post I’ll take a more in-depth look into the potential security issues associated with Big Data and what the future holds for healthcare IoT.

From cars and hotels to consumer goods like lightbulbs and watches, there is a growing network of everyday objects connected to the Internet. These sensors and devices generate nonstop streams of data that can improve personal and professional lives in many ways. However, with billions of data-generating devices being used and installed around the globe, privacy and data protection are becoming a growing concern.

Recent attacks by cybercriminals within healthcare sectors demonstrate that companies cannot ignore potential threats in their design or decision making processes. Just last year, as many as 80 million customers of Anthem, the United States’ second-largest health insurance company, had their account information stolen. The hackers gained access to Anthem’s computer system and stole names, birthdays, medical IDs, Social Security numbers, street addresses, e-mail addresses and more. This was the second major breach for the company.

It’s evident that there is a growing need to find a way to effectively manage privacy and security in Big Data. While there are innovative and accessible analyst tools such as MS Polybase and Impala, one of the challenges is hiring and retaining qualified data analysts.  Another challenge is the exponential growth of Big Data and the minimal structure, lack of standardization and lack of willingness to standardize.

So how do we address these challenges? Businesses across all industries need an extensive platform that can manage both structured and unstructured data with security, consistency and credibility. A great example, and an unexpected entrant into this niche market, is Microsoft’s family of SQL and Hadoop data warehouses. These systems double-check validity, handle all types of data, and scale from terabytes to petabytes with real-time performance.

According to a new report, by 2020, the healthcare IoT market will reach $117 billion. Based on this report, one thing is clear: Aging and remote healthcare is going to be a demographic necessity rather than a mere opportunity. An example of where IoT/Big Data is making a difference is the innovative combination of connected healthcare devices and data sciences, such as fall detection alarms in elderly and home care situations.

As this trend continues, different sectors of the industry will merge and work together in order to deliver advanced digital solutions and embedded devices to the healthcare industry. As Big Data within healthcare IoT continues to grow, it will also lead to more and more job opportunities for developers with a knack for data science, data mining and/or data warehousing. With this need comes an opportunity for new businesses to emerge.

IoT and Big Data represent part of the fourth industrial revolution: the first being steam and mechanical production; the second, division of labor; the third, electronics and IT; and the fourth, interconnected cyber-physical systems.

What does this mean for the healthcare sector? Recently, a company equipped its employees with Fitbit wearables and mined the data the wearables delivered. From this experiment, they learned it was possible to reduce insurance premiums by $300k. By using predictive data from sensors and interconnected devices, GPs, hospitals, national health services and the pharmaceutical industry can create meaningful programs that shape the way patients are treated for years to come.

First published here:

antsle – Home Hosting: YourCode YourData YourRules | Indiegogo

Here at R&P we are absolutely excited about this Indiegogo activity. The only thing we are still a bit undecided about is whether our plans justify a larger installation right away or whether we should stick with an antsle one red. Autonomous web hosting: regain full control and data ownership, host from home! Dramatic savings.

Source: antsle – Home Hosting: YourCode YourData YourRules | Indiegogo

Why we endorse Antsle

Once in a while, things happen that have incredible potential, and a couple of months down the road, everyone thinks “I could have had this idea!”. Antsle is one of them.

Looking like a smallish box, it is actually a blue ocean. It is oriented towards the creative maker power of developers, not primarily at end users (in spite of the fact that it is easy enough to satisfy them, too). Don’t confuse it with a storage device carrying the misleading name of “cloud”; this is something from a different galaxy.

Just thinking about the endless opportunities this sweet little box, serving as your home-based internet node and server, brings makes my mouth water. A couple of them come to mind immediately:

Create a firewall for your IoT device collection

With the advent of all kinds of “smart” things populating our living rooms, from wearables to big-brother TV sets, we are all fair game for the predator-like data collectors. Why not develop a control unit that sits on your own cloud server at home, in your basement or attic, and acts as a firewall for IoT units that might otherwise send out tons of things you don’t want them to?

Hey, if all the flurry.coms of this world permanently want your information, why the heck don’t they pay you for your data? They could cover every other installment on the new TV set, couldn’t they?

At least while your mobile is logged into your home network, you might cut the wire to these data-sucking hoovers that don’t give a toss about your privacy.

Do your own home automation locally without sharing stuff

A creative developer might conclude that home automation is a great idea, as is keeping an eye on the home via a security cam, without wishing to “share” all this information with potential hackers and the manufacturers, Apple, Samsung, Sony and all kinds of interested third parties, including, of course, potential burglars.

There are pretty clever approaches on the market, already enabled and integrated in a lot of connected devices.

However, all this assumes a limitless level of trust between the user, the manufacturers and the cloud operators involved.

As long as the Agile development cycle is not yet the standard, this trust is not justified at all. We live in a world where “Hello Barbie” might silently share kids’ secrets with third parties, without parents having any control over what leaves the home.

So why not put the gearbox for your home automation and connected life into your own hands? For a maker and creator, the Antsle is a wonderful platform.

These are just some examples of ideas that come to mind immediately between day and dream.

Not a server, but an ocean

What dreams may come when the creative folks find this platform and make innovative use of it?

The antsle has the capability to be the gearbox between the SOHO office or home and the world: bypassing the vampires and vultures of the internet, sharing only what was meant to be shared in a conscious decision process, remembering earlier decisions, and keeping the home or office connected not as an open system but as a controlled perimeter.

The use cases are myriad. Makers can count on many things that are easy to plug their ideas into and easy to realize. So…..

Antsle is a platform that supports all kinds of small applications running independently of each other in a server-type environment, in a cool aluminium box without noise. It is the basis for developing loads of things that can be marketed in an ecosystem.

A good example of this kind of ecosystem is Sonos. They developed a meshed-WiFi music system that has since grown into an impressive ecosystem, thanks to its quality and absolute ease of use and administration. It has loyal followers (like the early Apple users) due to a strong commitment to backwards compatibility and superior build quality. This is the basis for makers; this is the mindset of creators.

What’s missing?

Of course, there are always compromises when starting something new. It would be terrific if there were a forum or platform where like-minded developers could meet around the antsle to share ideas, get together, inspire each other, swap solutions to puzzling questions and exchange open source code.

This could easily be created on one of the new antsles, but of course, it’s not there yet.

Also, while the antsle smart OS is a great idea with a lot going for it, I would simply have loved to see the concept of Qubes OS taken into the commercial world. Joanna Rutkowska deserves to be lauded for an innovative idea, and it could basically be the fifth gear for a security antsle in the future. Folks, how about this for the next release? Maybe someone at antsle is interested in exploring this.

On the other hand, the OS choice lies with the creators, and it makes a lot of sense in many respects: the team at antsle have researched the necessities down to the question of which RAM to use (only ECC will do) and how to assure data integrity and safety (by mirroring SSDs). They have probably spent months analysing what the best basis would be. But we do have a lot of sympathy for Joanna.

On December 15th, crowdfunding for antsle will start. We will order our first antsle right away and contribute to the crowdfunding. We hope the spirit of innovation will inspire the developer community, and we are eager to see the next news about it.

(Antsle is THE solution for Autonomous Web Hosting! Own your data, run hundreds of VM’s – all in one powerful and 100% silent home-based server.)

Go see: Autonomous Web Hosting – and get it while it is hot and new! Be one of the first; be one of the early flock!

Why Security Needs To Come First In Software Development: Part II | Shinetech Blog

In the first part of this series, I covered the cost, time and benefits of baking security into the software development lifecycle from the beginning. In today’s post, I will discuss how and why we need to bring security into the software development testing process early on. Between 70 and 90 percent of breaches are caused by a software vulnerability; the remainder are caused by human error. In software development, security has always been an external checkpoint, but if we make security core to the development process we will save energy and money. Here’s one way to do it. As part of the QA process, consider running tests on the following (a minimal sketch of one such test appears after the list):

  • How input is handled (standard pen-testing practices such as fuzzing input, tests for SQL injection, tests for HTTP header injection, etc.)
  • The application logic (e.g. test of reliance on client-side input validation, testing of transaction logic, testing multi-stage process for logic flaws etc.)
  • Authentication logic (e.g. password quality rules, username enumeration, resilience to password guessing, etc.)
  • Miscellaneous things such as session handling, attack surface, application hosting issues, etc.
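
As an illustration of the first category, here is a minimal sketch, not a substitute for a professional penetration test: it sends a few classic SQL injection and fuzzing payloads to a form parameter and flags suspicious responses. It uses the third-party requests library; the target URL, parameter name and error signatures are placeholders.

```python
# Minimal sketch: probe one input parameter with hostile payloads and
# report server errors or database error strings leaking into the page.
import requests

TARGET = "https://staging.example.com/search"  # placeholder, never production
PAYLOADS = ["'", "' OR '1'='1", '"><script>alert(1)</script>', "A" * 5000]
DB_ERRORS = ("sql syntax", "sqlstate", "odbc", "ora-", "sqlite error")

for payload in PAYLOADS:
    resp = requests.get(TARGET, params={"q": payload}, timeout=15)
    body = resp.text.lower()
    if resp.status_code >= 500:
        print(f"payload {payload!r}: server error {resp.status_code}")
    elif any(err in body for err in DB_ERRORS):
        print(f"payload {payload!r}: database error text leaked into response")
```

Run such probes only against your own staging systems, and treat every hit as a finding for the development team rather than proof of exploitability.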

So, what could a secure development cycle look like in terms of Agile, XP or Scrum? Here is one process description that could be the framework for similar processes:

Here are the six most important factors to consider:

  • The security of most applications and websites is faulty or vulnerable, causing potential issues and the loss of funds. According to a recent study, the average cost of a breach is approximately $3.8 million.
  • The cost to implement security grows exponentially the later it is introduced into the process.
  • In IoT, security is focused on even less than in app development. The vendors that prominently invest in shielding their clients from security risks and threats will be the leaders of the new market niche.
  • In Industry 4.0, the industrial value-generation process is based on networked machinery; the impact of having vulnerable systems or software will be devastating.
  • Your client’s assets and your assets are closely linked. If either is affected by a breach or hack, both might be lost.
  • There has never been a cheaper way to get ahead of the competition than by implementing secure software development cycles.

Whenever the next big data breach hits, new laws and regulations will be developed and enforced. These new requirements will demand proof that security has been built into your application, software or equipment. Being able to prove that your software is secure will earn you a five-star rating, and customers will appreciate it as well. What is the downside of ignoring the additional work of baking security into your product from the beginning? Security defects found later will cost significantly more to address. In the case of data protection and information security, negligence can damage a company’s reputation and value.

For instance, companies like Target, Sony, Anthem, and even the Office of Personnel Management have been victims; the resulting devaluation of their shares is evident and well known. In its latest incident response report, SANS summarized the findings as follows: “These and other results of the 2015 survey show that incident response (IR) and even detection are maturing. For example, although malware is still the most common underlying reason for respondents’ reported incidents, 62% said malware caused their breaches, down from 82% in 2014. Data breaches also decreased to 39% from 63% last year. Such results hint that malware prevention and other security technologies are working in an increasingly complex threat landscape.” These results also indicate that companies have started to respond, and are fighting back on the online battlefield against malicious actors.

If your IT team is not prepared to respond, you will miss the boat. At this moment, you have a chance to be a leader. As the market landscape embraces security, the right skill set can be hard to come by. Consider outsourcing your software development needs as a best practice to ensure that your applications have adequate levels of security built into them.

Published first here: Why Security Needs To Come First In Software Development: Part II | Shinetech Blog