How algorithms REALLY created that corporate nightmare at United Airlines – Gregory Bufithis

This is the thing BPM often ignores – to the chagrin of both customers and employees. In the algorithms of any given BPM implementation, individuals need to be able to overrule the machine, and these individuals need to be entitled and properly empowered to do so.

There are too many incidents in which an exception from the rule is called for but not catered for. Gregory Bufithis mentions this in his analysis of the infamous United Airlines incident:

Southwest Airlines and its “hub social team” that has spokes into every element of the business. Southwest actually has human decision making “escape valves” built into their algorithms used to dictate employee behavior. In other words, if an algorithmic process is going terribly wrong, it is ok for an employee to find a non-standard way to solve it or ask for help. Southwest has a “rapid response team” that can swoop in electronically (usually via a smart phone) to coach onsite employees on ways to respond (and also authorize extremely non-standard responses on the spot).

Source: How algorithms REALLY created that corporate nightmare at United Airlines – Gregory Bufithis

Excellent reading matter, and a wake-up call for all of us doing BPM implementations to follow Southwest’s policies and not UA’s.

Shadow Brokers show how Equation Group allegedly gained access to Pakistan’s GSM network

“From what I understand they have tools to collect CDRs (Call detail record) that are generated on GSM core networks for billing purpose (who is calling who, etc.),” x0rz said. “They are deep into these systems.”

There are more than 1,000 files included in the dump and it’s unknown whether any of the vulnerabilities being exploited remain unpatched.

Read on here

Security flaws in Pentagon systems “easily” exploited by hackers

Hackers are likely exploiting the easy-to-find vulnerabilities, according to the security researcher who warned the Pentagon of the flaws months ago.

Source: Security flaws in Pentagon systems “easily” exploited by hackers | ZDNet

Not only for the Pentagon: it would be a great idea to install a suitable solution for Disaster Recovery (DR), just in case.

The Ransomware-Attack Has Hit Us! Now What?

Applying IT Disaster Recovery/Business Continuity


During the development of security policies for an organization at the beginning of its journey towards safeguarding its own (and its customers’) data, we were never asked the most obvious question: what do we actually do when disaster strikes? Everyone was so concentrated on determining the input that the obvious thought never came up. It is all good and well to have a great policy, but we need to get cracking at implementing technical solutions that help us act when the unthinkable really occurs. Based on the policy, strategies to act in accordance with it emerge. And sometimes, as in this case, new solutions to the issues at hand are discovered.

(Figure: example; source: Carnegie Mellon[1])

A Business Continuity Plan (BCP) is how an organization guards against future disasters that could endanger its long-term health or the accomplishment of its primary mission. The primary objective of a Disaster Recovery Plan (DRP) is to describe how an organization is to deal with potential natural or human-induced disasters. The disaster recovery plan steps that every enterprise incorporates as part of business management include the guidelines and procedures for responding to and recovering from disaster scenarios that adversely impact information systems and business operations. Plan steps that are well constructed and implemented enable organizations to minimize the effects of the disaster and resume mission-critical functions quickly.[2]

According to NIST, the DRP only addresses information system disruptions that require relocation[3]. (Source: NIST) For our short analysis, we will treat the two terms as meaning the same – it is not always necessary (or possible) to invest in an alternative location such as a container data center.

Businesses should develop an IT disaster recovery plan. It begins by compiling an inventory of hardware (e.g. servers, desktops, laptops and wireless devices), software applications and data. Unfortunately, inventories pose an issue more often than not: keeping a complete and up-to-date asset list is rarely supported in the way a disaster recovery plan needs. Most tools on the market support only a limited number of operating systems, and non-smart assets mean tedious manual work. The toolset R&P offers, underpinning our field-proven managed/military project office, is operating-system agnostic and provides real-time information on HW assets, SW assets (all operating systems), firmware assets, BIOS/EFI, router/switch configs, printer-queue/print-server configuration and, by way of the newest addition, also firmware versions of displays and further non-smart assets.
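As a minimal illustration of what such machine-readable inventory records can look like, here is a hypothetical sketch in Python (standard library only). It collects just a handful of OS facts per host – nowhere near the firmware-, BIOS/EFI- and switch-level coverage described above – but it shows the principle of gathering records that a central job can merge into the DR asset list:

```python
# inventory.py - a minimal, hypothetical per-host inventory record for a
# DR asset list; a dedicated toolset covers far more, this shows the idea.
import json
import platform
import socket
from datetime import datetime, timezone

def collect_host_record() -> dict:
    """Collect basic hardware/OS facts for one host using only the stdlib."""
    return {
        "hostname": socket.gethostname(),
        "os": platform.system(),             # e.g. "Linux", "Windows"
        "os_release": platform.release(),
        "architecture": platform.machine(),  # e.g. "x86_64"
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    # Append one JSON line per run; a central job can merge these records
    # from all hosts into the disaster recovery plan's asset list.
    with open("asset_inventory.jsonl", "a", encoding="utf-8") as fh:
        fh.write(json.dumps(collect_host_record()) + "\n")
```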

From this inventory, it is fairly easy to identify critical software applications and data and the hardware required to run them. Using standardized hardware will help to replicate and re-image new hardware. Ensure that copies of program software are available to enable re-installation on replacement equipment. Prioritize hardware and software restoration[4]. (Source: HLS US)

 

Phases of building a BC or DR Plan

 

Phase I – Data Collection

– The project should be organized with timeline, resources, and expected output

– The business impact analysis should be conducted at regular intervals

– A risk assessment should be conducted regularly

– Onsite and offsite backup and recovery procedures should be reviewed regarding suitability and performance

– Alternate site locations (if any) must be selected and ready for use

 

Phase II – Plan Development and Testing

– Develop the Disaster Recovery Plan (DRP)

– Test the plan (regularly)

 

Phase III – Monitoring and Maintenance

– Maintenance of the plan by way of updates and regular reviews

– Periodic inspection or audit of DRP

– Documentation of any changes

 

There is – of course – a need to brief staff on the plans and train them on their content; otherwise, staff cannot comply with the rules once a critical situation hits.

 

Disaster Recovery Plan Criteria

Documentation should be maintained of the procedures for declaring an emergency, evacuating the site depending on the nature of the disaster, activating backups, notifying the relevant officials/DR team/staff, the procedures to be followed when disaster breaks out, and the alternate location specifications. It is beneficial to be prepared in advance with sample DRPs and disaster recovery examples so that every individual in an organization is educated on the basics. A workable business continuity planning template or scenario plans are available at most IT-based organizations to train employees in the procedures to be carried out in the event of a catastrophe[5].

Recovery strategies should be developed for Information technology (IT) systems, applications and data. This includes networks, servers, desktops, laptops, wireless devices, data and connectivity. Priorities for IT recovery should be consistent with the priorities for recovery of business functions and processes[6]. (Source: HLS US)

Downtime can be characterized in several ways[7] (Source: NIST) – NIST distinguishes, among other measures, the Maximum Tolerable Downtime (MTD), the Recovery Time Objective (RTO) and the Recovery Point Objective (RPO).

 

Cost-Benefit

The longer a disruption is allowed to continue, the more costly it can become to the organization and its operations. Conversely, the shorter the return time to operations, the more expensive the recovery solutions cost to implement[8].

(Figure: balancing the cost of disruption against the cost of recovery)

(Note that R&P excels in cost reduction of systems recovery.)
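To make the trade-off concrete, here is a toy calculation in Python. Every number in it is invented for illustration (the downtime cost per hour, the candidate solutions and their prices are all assumptions); the point is only that the sensible choice minimizes the sum of solution cost and expected downtime cost:

```python
# A toy illustration (hypothetical figures) of the cost-balance idea:
# longer allowed downtime raises disruption cost but cheap solutions
# suffice; the "right" solution minimizes the sum of both.
COST_OF_DOWNTIME_PER_HOUR = 5_000.0  # assumed business loss per hour

# Candidate recovery solutions: (name, achievable recovery time in hours,
# annualized cost of the solution) - all figures invented.
solutions = [
    ("tape restore at new site", 72, 10_000.0),
    ("warm standby servers", 8, 60_000.0),
    ("hot site, replicated", 1, 250_000.0),
]

def total_cost(recovery_hours: float, solution_cost: float) -> float:
    """Expected outage loss plus the cost of the recovery solution."""
    return recovery_hours * COST_OF_DOWNTIME_PER_HOUR + solution_cost

best = min(solutions, key=lambda s: total_cost(s[1], s[2]))
for name, hours, cost in solutions:
    print(f"{name:26s} -> total {total_cost(hours, cost):>10,.0f}")
print(f"cheapest overall: {best[0]}")
```

With these made-up figures the warm standby wins; with a much lower downtime cost per hour, the tape restore would. The shape of the result depends entirely on your own numbers, which is exactly why the impact analysis below matters.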

IT Recovery Strategies

Information technology systems require hardware, software, data and connectivity. Without one component of the “system,” the system may not run. Therefore, recovery strategies should be developed to anticipate the loss of one or more of the following system components:

– Computer room environment (secure computer room with climate control, conditioned and backup power supply, etc.)

– Hardware (networks, servers, desktop and laptop computers, wireless devices and peripherals)

– Connectivity to a service provider (fiber, cable, wireless, etc.)

– Software applications (electronic data interchange, electronic mail, enterprise resource management, office productivity, etc.)

– Data and restoration[9] (Source: HLS US)

 

Impact Analysis

The impact analysis should identify the operational and financial impacts resulting from the disruption of business functions and processes. Impacts to consider include:

  • Lost sales and income
  • Delayed sales or income
  • Increased expenses (e.g., overtime labor, outsourcing, expediting costs, etc.)
  • Regulatory fines
  • Contractual penalties or loss of contractual bonuses
  • Customer dissatisfaction or defection
  • Delay of new business plans

These impacts apply to corporate businesses and, in similar ways, to public services.[10]

 

Testing and Maintenance

The dates of testing, the disaster recovery scenarios, and the plans for each scenario should be documented. Maintenance involves records of scheduled reviews on a daily, weekly, monthly, quarterly or yearly basis; reviews of plans, teams, activities and tasks accomplished; and a complete documentation review and update.

 

In case of an incident

These are the recommended steps in case any incident happens, be it a hacking attack or another malevolent cyber incident (e.g. ransomware hitting the organization), malfunctioning software or operating-system updates, or faulty firmware, BIOS or software patches:

– Identification

– Containment

– Eradication (a good example of actions performed during the eradication phase would be using the R&P-provided toolset, which allows for an individual recovery of each complete system end-to-end). Professional services close the attack vectors, but at this point it is essential not to lose time on forensic or analytical work. If necessary, the R&P tools may clone affected systems for later analysis.

– Recovery (bring affected systems back into the production environment carefully, so as to ensure this will not lead to another incident. It is essential to test, monitor, and validate the systems being put back into production to verify that they are not re-infected by malware or compromised by some other means.)

– Lessons Learnt[11] (well, this is the task of documentation everyone hates, but it is essential for future reference)

 

Checklist

This checklist helps to make sure all boxes are ticked when an incident hits you:

– Stop the attack in progress.

– Cut off the attack vector.

– Assemble the response team.

– Isolate affected instances.

– Identify timeline of attack.

– Identify compromised data.

– Assess risk to other systems.

– Assess risk of re-attack.

– Apply additional mitigations, additions to monitoring, etc.

– Forensic analysis of compromised systems.

– Internal communication.

– Involve law enforcement (if you are not law enforcement yourselves).

– Reach out to external parties that may have been used as vector for attack.

– External communication.

 

Getting rid of assumptions as a winning strategy

Summarizing, here are the five major points to consider:

  1. Repetitive probing and repeated tests of IT security will deliver facts and figures instead of a false feeling of safety.
  2. Generally speaking, the lead time to recovery of any of your configuration items (CI) is the best possible recovery time. Any company can be out of business quickly if it is incapable of returning to an operational state – if Deutsche Bank is not operational for one day, it is their doomsday. Security tests will deliver unpleasant facts about IT assets formerly deemed safe. Take 20 minutes to return to normal as a goal.
  3. Companies lose customers due to vanished trust in their capabilities (e.g. after repeated outages or a failure to adapt). Public services sometimes have even more critical usage and depend on minutes. Drawing on R&P’s public sector/HLS experience is not a bad idea.
  4. The shortcut in implementing disaster recovery is to plan a proper DR capability already in the early planning phase.
  5. The second-best strategy is to lose no time: review the existing IT infrastructure and enhance it by applying the R&P MPO toolset.

 

 

Roth & Partners has significant experience in the above five topics and the capability to support IT experts globally in their challenge to enhance IT security systems. Your advantage: give us a bell at one of our centers or write us a mail.

Sources:

[1] http://resources.sei.cmu.edu/asset_files/TechnicalReport/2004_005_001_14405.pdf

[2] http://www.disasterrecovery.org/

[3] http://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-34r1.pdf

[4] https://www.ready.gov/business/implementation/IT

[5] http://www.disasterrecovery.org/plan_steps.html

[6] https://www.ready.gov/business/implementation/IT

[7] http://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-34r1.pdf

[8] http://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-34r1.pdf

[9] https://www.ready.gov/business/implementation/IT

[10] https://www.ready.gov/business-impact-analysis

[11] https://www.sans.org/reading-room/whitepapers/incident/incident-handlers-handbook-33901

 

Microsoft SCCM as the ride a malicious hitchhiker would love

Using a Microsoft tool for software and patch distribution can’t be wrong, can it? I have tried for a while to come up with a couple of good reasons to use SCCM, but after this mental exercise I just feel less inclined to recommend it.

The Redmond company has invested many a dollar in making randomness part of our operating systems. As a test, start off with three computers of the same build and model, identically configured, and install the OS on them at the same time, only to see that the automated (non-SCCM-driven) update process for each of them takes a different turn – a wonderful proof of the randomness with which Microsoft enhances our dull lives with the IT equivalent of a one-way, dead-end street every so often.

The same company brings us a wonderful patch- and release-management tool:

“SCCM is a platform that allows for an enterprise to package and deploy operating systems, software, and software updates. It allows for IT staff to script and push out installations to clients in an automated manner. You can kind of think of it as a customizable, scriptable WSUS.” – Matt Nelson, Veris Group

SCCM operates in a “Client-Server” architecture by installing an agent on workstations/servers that will check in periodically to its assigned site server for updates. On the backend, SCCM uses a SQL database and WMI classes to store and access various types of information. (Source)

In a perfect world, such an instrument would increase the health, security and availability of our systems.

As it turns out, the world we live in is not exactly what we can call perfect. Not even close. This is where the randomness becomes an additional hindrance to reaching the level of health, security and availability we desire for our systems – especially when we are not dealing with a red team in a Red Team engagement, but with real organization-data predators. We replace the update randomness by installing an even more worrisome vulnerability.

In short this is the risk: “If you can gain access to SCCM, it makes for a great attack platform. It heavily integrates Windows PowerShell, has excellent network visibility, and has a number of SCCM clients as SYSTEM just waiting to execute your code as SYSTEM.”

Think about what a malicious actor in an IT-service organization could do after getting access to SCCM. Think about inserting payloads. Check out PowerShellEmpire for more information.

For any attacker that has gained domain admin on a network, any centralized administration software that is part of the Microsoft universe and actively participating in it is like finding the pot of gold at the end of the rainbow. An attacker can find nodes and map out the network, see where users spend most of their time and find critical assets.

So it didn’t take long before someone came up with some smart ideas on how to deploy payloads and manipulate assets using SCCM. In order to interact with SCCM, you need administrative rights on the SCCM server; however, you do not need domain administrative rights on the network. SCCM is a cornerstone for an attacker to stay under the radar in a Red Team situation, e.g. to persist in the network after elevated access is achieved. PowerSCCM is one of these nifty developments Red Team members came up with (in this case, check out enigma0x3 – you might actually find more than you expected).

It takes a few steps to deploy malicious packages/scripts to clients through SCCM, and offensive manipulation/deployment is currently only supported through WMI SCCM sessions. SCCM deployments need three parts: a user/device collection of targets, a malicious application to deploy, and a deployment that binds the two together.

The easiest countermeasure is to avoid SCCM in critical environments completely and instead opt for a tool that does not participate in the wonderful, rainbow-colored world of Microsoft for patch, release and disaster-recovery management. By doing so, you might find that you are saving quite a bit of project-management time and cost at the same time.

Reducing cost of patch management

Both in security and in “business as usual” operations, patch management is a financial drain. OpEx in IT has many hidden slings and arrows, and patch management (and its bigger sibling, release management) is one of them. Any time (and this is often, nowadays!) a vulnerability is detected in any of the distributed pieces of software, it is recommended to roll out the patches as quickly as possible.

In a private environment, following the simple instructions is a piece of cake: automatic detection of patches while connected to the internet will trigger an unobserved or accepted download, followed by a few clicks to install and relaunch. Not so in regulated, secured or simply sizeable environments, in which controlled patch management may cause many difficulties in the organization due to interdependencies of security, network and other issues. If, for example, a hospital or a military organization gets loads of blue screens due to an incompatibility in the above-mentioned areas, employees suffer, valuable time is lost, responsiveness is gone and overall opportunity costs sky-rocket. These damages are rarely calculated, but very much felt by the organization and by the budget, which is drained by unallocated costs.

Taking input from here, we may set up the calculation as follows:

“The cost of any administrative process to a business consists of the following components:

  • The human resource cost (unit time cost of employees committed to the process multiplied by their number) – H
  • The frequency of the process (how often it is executed) – f
  • The time required to execute the process (how long does it take to fully complete all tasks associated with the process, including dealing with failures and retries) – T
  • The scope of the process (how many people/applications/systems are impacted by the process) – S
  • The lost opportunity cost (a reflection of the value generated if the resources consumed and impacted by this administrative process was reallocated to a service) – O

Reducing any of these components will result in a reduction of the total cost of the process and our simplistic model can be expressed mathematically as:

Total Cost = (f x H x T x S) + O

Our model is far from precise, however it is a suitable starting place for identifying and quantifying ways of reducing the costs of administrative processes. For example, let’s look at how CTI impacts the cost of vulnerability and patch management: as we have noted, it can reduce the frequency, time and scope of the process (we’ll tackle lost opportunity later), however it could potentially increase the human resource costs, at least in the short term, as we’ll need to hire and assemble a CTI team. Overall we can clearly see the benefits of CTI based on this simple analysis alone.”

 

Opportunity cost may push the level even higher, and it may be different in varying sectors and organizations:

“At a minimum we could consider the lost opportunity cost as the amount that would have been gained by the business from putting the capital allocated to the administrative process into a savings account. Under free market conditions this would normally result in a gain in line with rate of interest but for the sake of argument lets call it 5%. So we can say the absolute minimum cost of an administrative process will be:

Total Cost = (f x H x T x S) * 1.05

However this is, in my opinion, a highly conservative estimate of the lost opportunity cost.”
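As a quick sanity check of the model, the two quoted formulas are trivial to put into a few lines of code. All input values below are made-up placeholders, not benchmarks:

```python
# Direct transcription of the quoted cost model; every input value here
# is an invented placeholder for illustration only.
def process_cost(f: float, H: float, T: float, S: float, O: float = 0.0) -> float:
    """Total Cost = (f x H x T x S) + O"""
    return f * H * T * S + O

# Example (hypothetical): monthly patch cycle (f = 12 per year), three
# admins at 80 EUR/h committed to the process (H = 240 EUR/h), 6 hours
# per cycle (T = 6), scope factor S = 1.
base = process_cost(f=12, H=240, T=6, S=1)
print(base)         # (f x H x T x S) with O = 0  -> 17280.0
print(base * 1.05)  # the quoted minimum variant: (f x H x T x S) * 1.05
```

Plugging in your own frequency, staffing, duration and scope makes it easy to see which lever (reducing f, T or S) saves the most in your environment.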

Roth & Partners has at its disposal a tool, developed by a partner with many years of experience in the area, and will be able to apply its valuable details to your organization on demand. There are also excellent tools to underpin a “managed program office” (as opposed to the classical PMO approach, which does not use the military mission planning R&P applies).

 

Some benefits regarding the patch- and release management are summarized in the following figure:

(Figure: summary of patch- and release-management benefits)

Benefits of a sensible patch- and release management, taken from a practical example, are depicted in the graph below. The resulting increase in safety and stability and the reduction of effort and cost are in line with the above-mentioned formula, and the saved costs can exceed expectations in OpEx.

(Figure: cost development from a practical rollout example)

E-Commerce Migration? Get ready for a headache!

Migrating your shop to a different platform? Get ready for headaches.

The net is full of complaints – always, and regarding any topic – but when it comes to e-Commerce, there is much more irritation than about any other topic, or at least so it seems. The reason is that a full comparison is never to be found (exceptions do exist, such as http://www.ecomparo.de in Germany, though not for completely altruistic reasons, as they play a systems-integrator role).

In this overview, we will look at the issues, pitfalls and concerns when migrating an existing e-Commerce solution to a different platform.

First, let’s have a look at the options to choose from. Selection criteria include, just to name a few, open source versus commercial tools and sizing; existing application landscapes may limit or predefine some decisions (e.g. order management, ERP, fulfillment or payment solutions). Your choice of geography requires capabilities in language localization and taking care of the legal, tax and licensing situation (e.g. in China, entering the B2C market requires a license obtained from local authorities). The targeted market then determines your choice of omnichannel, B2B, B2C or B2B2C capabilities – or, again in China, your online-to-offline capabilities matter. It is interesting to note that the Chinese domestic market has turned digital all the way, leapfrogging quite a number of steps which Europe and the US went through (and are still enduring teething problems with).

Typical issues in different locations are VAT rules and regulations; information required by local laws, such as delivery times, shipment costs and prices (including/excluding sales taxes or VAT); and T&Cs, which can vary fundamentally between locales. Supporting the laws of the chosen geography may cause extra effort or even rule out some solutions.

If you wish to support omnichannel, omni-currency and multi-location, you will find yourself growing out of most community editions, and depending on the size of your market, you will need a professionally maintained and supported platform – either as SaaS or hosted yourself. In such a situation, Magento EE or Intershop may appear to be the systems of choice.

The topmost players (both in capabilities and cost, and unfortunately also in implementation times) are systems which offer, just to name a few items, deep analytic capabilities that could be called data-mining solutions in their own right: price optimization, dynamic pricing across the board, and promotion and campaign optimization on a global scale; complex implementations of loyalty programs, integrated with those offered by complementary providers; and multi-tenant/multi-brand implementations. If this describes your requirements, you have arrived in the sweet spot of SAP Hybris, Intershop, Oracle ATG and IBM eCommerce solutions.

On the other end of the scale are an increasing number of flavors to savour:

  • XT:c
  • Oxid
  • woocommerce
  • spreecommerce (Ruby on Rails)
  • bigcartel
  • Squarespace
  • Tictail
  • Shopify
  • Supa Dupa
  • Airsquare
  • Enstore
  • Goodsie
  • Magento Go
  • Storenvy
  • Flying Cart
  • Virb
  • Jimdo (Germany, very small shops only)

and many more. The issue is, sometimes you might hit on an awesome solution that appeals to you, and while going at it, you realize it is a one-developer piece of software!

If you don’t intend to spend too much time on IT-related issues, you might wish to slip into the open spaces that TMall/Taobao (China) or Amazon (elsewhere) offer, but your branding capabilities there are fairly limited. You might find the management of stock and prices limited too, but on the other hand, stocking, fulfillment, shipping and payment are taken care of easily.

There are more hosted solutions, which do give you a limited choice of individual design and branding, such as Zoho (https://www.zoho.com) and comparable solutions.

Looking at the necessary steps that a migration calls for, we find a number of recurring issues which are annoying when not planned, but can be easily managed.

First of all, during a discovery phase, identify all the requirements that will have to be covered by the new system. The most important issues are the catalogue structure, the scope of data to be migrated and order fulfilment. Now would also be a good moment, early in the process, to take care of domain-name issues before doing anything else. This requires a bit of thinking, because the process should be as seamless as possible for returning customers and for the search engines. At some point, the search engines should point at your new domain (which could be the old domain, but then you have to plan accordingly how and when to do the switch). A period of parallel operation is recommended. Mind that there might be issues with the security certificate: the certificate that allows your customers to shop with SSL protection needs to match the site. If your domain name changes, this is not a big deal: Let’s Encrypt (https://letsencrypt.org) offers certificates for free, unlike the earlier days, when you had to pay a kingdom and a half for an SSL certificate. Commercial certificates are certainly still available.

As a second step, check how the new platform integrates with the existing business processes and application landscape you have in place. If you use third-party vendors to provide your site with essential services, make sure they support your new platform in the same (or a better) way than your previous one, and if not, replace them. A sensible design document helps you to cover these issues – or ask your systems integrator to provide you with one.

Moving the data is challenge number three, and it causes the most headaches. We find it often makes more sense to take a fresh look at things: it might be the numbering schemes of SKUs or invoices that are not in line with expectations. Especially complex combined products with options might end up as a myriad of single products instead of a nicely sorted product scheme with options to choose from.

This could indicate the need to manually correct the items. Have prices stayed correct? Sometimes rounding errors cause changes in noted prices.

In territories that require different sales taxes for different classes of goods (e.g. Switzerland and Germany), check that the tax flags are set correctly. Indeed, wrong taxes can cost a lot of time to correct, so better look twice!
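A simple way to catch both the rounding and the tax-flag issues is to diff the old and new catalog exports programmatically. The sketch below assumes hypothetical CSV exports with sku, price and tax_class columns – adapt the field names to whatever your platforms actually produce:

```python
# Hedged sketch: compare prices and tax classes between old and new
# catalog exports. File names and column names are assumptions.
import csv
from decimal import Decimal

def load_catalog(path: str) -> dict:
    """Map SKU -> (price, tax_class); Decimal avoids float rounding errors."""
    with open(path, newline="", encoding="utf-8") as fh:
        return {
            row["sku"]: (Decimal(row["price"]), row["tax_class"])
            for row in csv.DictReader(fh)
        }

old = load_catalog("old_catalog.csv")
new = load_catalog("new_catalog.csv")

for sku, (old_price, old_tax) in old.items():
    if sku not in new:
        print(f"{sku}: missing after migration")
        continue
    new_price, new_tax = new[sku]
    if new_price != old_price:
        print(f"{sku}: price changed {old_price} -> {new_price}")
    if new_tax != old_tax:
        print(f"{sku}: tax class changed {old_tax} -> {new_tax}")
```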

Graphical assets and their sizes need to be quality-assured so that customers will not be confronted with tiny or huge assets.

By all means, avoid any “404”s by checking the URLs in time. If your site has had good organic traffic before, make sure that the page URLs generated by the new platform are the same as before if you can, keep the page descriptions identical and, where necessary, install redirects (permanent 301 or temporary 302 redirects). Prepare your SEO strategy thoroughly! Many stores experience a temporary drop in organic traffic following their migration. This is usually the result of lost meta information and changes in URLs. Prepare your post-migration SEO strategy to minimize these temporary effects; if necessary, employ third parties for this task.
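Checking the redirects is easy to automate. The following sketch (Python, standard library only) sends HEAD requests to a list of old URLs and verifies that each answers with a 301 pointing at the new domain; the hostnames and paths are placeholders:

```python
# Verify that old URLs answer with a permanent redirect to the new domain.
import http.client
from urllib.parse import urlsplit

OLD_HOST = "shop.example-old.com"   # hypothetical old domain
NEW_HOST = "shop.example-new.com"   # hypothetical new domain
OLD_PATHS = ["/category/shoes", "/product/sku-1234", "/checkout"]

for path in OLD_PATHS:
    conn = http.client.HTTPSConnection(OLD_HOST, timeout=10)
    conn.request("HEAD", path)           # HEAD: headers only, no body
    resp = conn.getresponse()
    location = resp.getheader("Location") or ""
    if resp.status == 301 and urlsplit(location).netloc == NEW_HOST:
        print(f"OK   {path} -> {location}")
    else:
        print(f"FAIL {path}: status {resp.status}, Location {location!r}")
    conn.close()
```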

Also make sure that the emails keep coming in regardless of which system you use. Testing email flow should be a separate QA activity, as it can cause a lot of frustration if emails get lost.

Speaking of quality assurance, this is the most important step to take. All business transactions need to be thoroughly checked before the new site replaces the current one. There is nothing stopping you from keeping the old site up for about a month in parallel, once all business transactions have been tested straight through. At some point, it definitely makes sense to have a third party assess the site for all the vulnerabilities a good penetration tester can find (rinse, repeat often!). In most e-Commerce scenarios, wherever credit cards are accepted, the PCI-DSS standard needs to be fulfilled in order to avoid hefty fines and risks later. Make sure your new platform handles this issue and is compliant; the PCI-DSS standard is now available in version 3.

The last step is the deployment; the cut-over is best done in the wee hours of the day.

After a while it might make sense to check whether the performance of the site is in line with expectations. Often, slow sites do not suffer from performance issues in the hosting or the network, but from heavy use of additional scripts, bells and whistles. This is a nice performance test, allowing you to simulate the geography from which the service is accessed: http://websitetest.com/
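For a first, rough impression before reaching for such a service, even a tiny probe from your own machine helps. The sketch below (standard library only; the URL is a placeholder) times a handful of full-page fetches:

```python
# Rough, hypothetical response-time probe; a real test would also measure
# from several geographic regions, as the linked service does.
import time
import urllib.request

URL = "https://shop.example-new.com/"  # placeholder URL

samples = []
for _ in range(5):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=30) as resp:
        resp.read()  # include the full body transfer in the measurement
    samples.append(time.perf_counter() - start)

print(f"min {min(samples):.2f}s  avg {sum(samples)/len(samples):.2f}s  "
      f"max {max(samples):.2f}s over {len(samples)} requests")
```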

Some vendors offer automated tools for migration to their platform:

https://www.magentocommerce.com/magento-connect/oscommerce-migration-tool.html and

https://www.magentocommerce.com/magento-connect/cart2cart-oxid-eshop-to-magento-migration.html

http://store.shopware.com/swag00426/migration-von-anderen-shopsystemen-zu-shopware.html

are examples.


At a recent customer, we asked for some feedback on the migration process and found the following points to be of interest:

The customer’s legacy platform was rigid and did not support many state-of-the-art e-Commerce features. They also found a licensing issue that would have caused a budgetary change.

As the online e-Commerce presence was of central importance, they defined a phased strategy in the migration process from legacy to new:

– Build up a framework that covers around 70% of the legacy functions and features, and launch the store internally to collect feedback

– Improve the store and gradually route traffic from legacy to the new store

– Retire the legacy store and continuously improve the new store

The global retailer chose an “Agile” development method – this enabled them to implement the strategy mentioned above in a number of suitable sprints. It proved to be an efficient method, as we could convert a still roughly defined idea into a prototype quickly, then fine-tune and improve it. This method dramatically shortened the turnaround time (TAT) of feature delivery. In fact, it happened several times that required features were delivered earlier than scheduled.

Choosing a strong and capable team with a “can do” attitude was another key success factor in ensuring the project could be delivered on time and with high quality. Shinetech was chosen and is proud of this project, the trust of the customer and the project’s success.

Strong support from management gave the project team guidance and the backing necessary to ensure no deviation from the business goals. In a global organization, communication among different levels and stakeholders is key; seamless and timely communication by all means (emails, meetings, demos, newsletters etc.) ensured a smooth project implementation without surprises.

Last but not least, the chosen platform proved to be flexible and robust software that eased development complexity and enables future scalability along with business growth.

Management buy-in is central in e-Commerce migration projects, and in our weekly management meetings communication was open; critical and helpful feedback was given at all times and by stakeholders all across Asia. As a result, the system implementation was fast: the migration project finished after four months, with extremely good system stability. No critical issue has been faced since launch, and implementation and maintenance costs are reasonable.

This article has been published by my partner Shinetech here. Please give the site a visit!