The Ransomware Attack Has Hit Us! Now What?

Applying IT Disaster Recovery/Business Continuity


While developing security policies for an organization that was at the beginning of its journey towards safeguarding its own (and its customers') data, we were never asked the most obvious question: what do we actually do when disaster strikes? Everyone was so concentrated on determining the inputs that the obvious thought never came up. It is all well and good to have a great policy, but we also need to get cracking on implementing the technical solutions that help us act when the unthinkable really occurs. From the policy, strategies for acting in accordance with it emerge. And sometimes, as in this case, new solutions to the issues at hand are discovered.

[Figure – source: Carnegie Mellon SEI[1]]

A Business Continuity Plan, or BCP, is how an organization guards against future disasters that could endanger its long-term health or the accomplishment of its primary mission. The primary objective of a Disaster Recovery Plan (also often called a Business Continuity Plan) is to describe how an organization has to deal with potential natural or human-induced disasters. The disaster recovery plan steps that an enterprise incorporates as part of business management include the guidelines and procedures for responding effectively to, and recovering from, disaster scenarios that adversely impact information systems and business operations. Plan steps that are well constructed and implemented enable organizations to minimize the effects of the disaster and resume mission-critical functions quickly.[2]

According to NIST, the DRP only addresses information system disruptions that require relocation[3]. For our short analysis, we will treat the two terms as meaning the same thing – it is not always necessary (or possible) to invest in an alternative location such as a container data center.

Businesses should develop an IT disaster recovery plan. It begins by compiling an inventory of hardware (e.g. servers, desktops, laptops and wireless devices), software applications and data. Unfortunately, inventories pose an issue more often than not: having a complete and up-to-date asset list is rarely supported in the way a disaster recovery plan needs. Most tools on the market support only a limited number of operating systems, and non-smart assets mean tedious manual work. The toolset R&P offers, underpinning our field-proven managed/military project office, is operating-system agnostic and provides real-time information on hardware assets, software assets (all operating systems), firmware assets, BIOS/EFI, router/switch configurations and printer-queue/print-server configurations; the newest addition also provides access to the firmware versions of displays and further non-smart assets.
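As a minimal illustration of the inventory idea (not the R&P toolset; the field names and output path are assumptions), a per-host snapshot can be collected with nothing but the Python standard library and shipped to a central store later:

```python
# inventory_snapshot.py - minimal, illustrative asset snapshot (not the R&P toolset)
import json
import platform
import socket
from datetime import datetime, timezone

def collect_snapshot() -> dict:
    """Collect a very small hardware/OS baseline for one machine."""
    return {
        "hostname": socket.gethostname(),
        "os": platform.system(),
        "os_version": platform.version(),
        "architecture": platform.machine(),
        "python_version": platform.python_version(),
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    # In a real DR inventory this would be shipped to a central CMDB;
    # here we simply write one JSON file per host.
    snapshot = collect_snapshot()
    with open(f"{snapshot['hostname']}_inventory.json", "w") as fh:
        json.dump(snapshot, fh, indent=2)
    print(json.dumps(snapshot, indent=2))
```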

From this inventory, it is fairly easy to identify the critical software applications and data, and the hardware required to run them. Using standardized hardware will help to replicate and re-image new hardware. Ensure that copies of program software are available to enable re-installation on replacement equipment. Prioritize hardware and software restoration[4].

 

Phases of building a BC or DR plan

 

Phase I – Data Collection

– The project should be organized with a timeline, resources and expected output

– A business impact analysis should be conducted at regular intervals

– A risk assessment should be conducted regularly

– Onsite and offsite backup and recovery procedures should be reviewed for suitability and performance

– Alternate site locations (if any) must be selected and ready for use

 

Phase II – Plan Development and Testing

– Develop the Disaster Recovery Plan (DRP)

– Test the plan (regularly)

 

Phase III – Monitoring and Maintenance

– Maintenance of the plan by way of updates and regular reviews

– Periodic inspection or audit of DRP

– Documentation of any changes

 

There is, of course, a need to introduce staff to the plans and train them on them; otherwise, staff cannot follow the rules once a critical situation hits.

 

Disaster Recovery Plan Criteria

Documentation should be maintained of the procedures for declaring an emergency, evacuating the site depending on the nature of the disaster, activating backups, notifying the relevant officials/DR team/staff, the procedures to be followed when disaster breaks out, and the specifications of the alternate location. It is beneficial to be prepared in advance with sample DRPs and disaster recovery examples so that every individual in an organization is better educated on the basics. A workable business continuity planning template or scenario plans are available in most IT-based organizations to train employees in the procedures to be carried out in the event of a catastrophe[5].

Recovery strategies should be developed for information technology (IT) systems, applications and data. This includes networks, servers, desktops, laptops, wireless devices, data and connectivity. Priorities for IT recovery should be consistent with the priorities for the recovery of business functions and processes[6].

Downtime can be characterized in several ways[7] (Source: NIST):

[Figure: downtime metrics – NIST SP 800-34[7]]
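NIST SP 800-34 characterizes downtime with metrics such as the maximum tolerable downtime (MTD), the recovery time objective (RTO) and the recovery point objective (RPO). A minimal sketch of checking measured recovery performance against such targets – all numbers below are made up for illustration:

```python
# Illustrative comparison of downtime targets and DR-test results (hours); numbers are made up.
mtd = 24.0          # maximum tolerable downtime set by the business
rto_target = 4.0    # recovery time objective
rpo_target = 1.0    # recovery point objective (maximum acceptable data-loss window)

rto_measured = 6.5  # time the last recovery test actually took
rpo_measured = 0.5  # age of the most recent restorable backup at failure time

assert rto_target <= mtd, "RTO must not exceed the MTD"
print("RTO", "OK" if rto_measured <= rto_target else "GAP", f"({rto_measured} h vs {rto_target} h)")
print("RPO", "OK" if rpo_measured <= rpo_target else "GAP", f"({rpo_measured} h vs {rpo_target} h)")
```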

 

Cost-Benefit

The longer a disruption is allowed to continue, the more costly it becomes to the organization and its operations. Conversely, the shorter the return time to operations, the more expensive the recovery solutions are to implement[8].

[Figure: balancing the cost of disruption against the cost of recovery solutions]

(Note that R&P excels at reducing the cost of systems recovery.)
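A minimal sketch of that trade-off with entirely made-up figures: if the disruption cost grows with outage length while faster recovery options cost more up front, the cost-optimal strategy is simply the one with the lowest combined cost.

```python
# Made-up figures: cost of disruption per hour vs. yearly cost of each recovery option.
disruption_cost_per_hour = 50_000  # e.g. lost revenue and penalties

# Each option: (name, expected recovery time in hours, yearly cost of the solution)
options = [
    ("tape restore, cold site", 48, 20_000),
    ("disk backup, warm site", 8, 120_000),
    ("replicated hot standby", 1, 400_000),
]

for name, hours, solution_cost in options:
    total = hours * disruption_cost_per_hour + solution_cost
    print(f"{name}: {hours} h outage -> combined cost {total:,}")
# The option with the lowest combined cost marks the balance point in the figure above.
```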

IT Recovery Strategies

Information technology systems require hardware, software, data and connectivity. Without one component of the “system,” the system may not run. Therefore, recovery strategies should be developed to anticipate the loss of one or more of the following system components:

– Computer room environment (secure computer room with climate control, conditioned and backup power supply, etc.)

– Hardware (networks, servers, desktop and laptop computers, wireless devices and peripherals)

– Connectivity to a service provider (fiber, cable, wireless, etc.)

– Software applications (electronic data interchange, electronic mail, enterprise resource management, office productivity, etc.)

– Data and restoration[9]

 

Impact Analysis

The impact analysis should identify the operational and financial impacts resulting from the disruption of business functions and processes. Impacts to consider include:

  • Lost sales and income
  • Delayed sales or income
  • Increased expenses (e.g., overtime labor, outsourcing, expediting costs, etc.)
  • Regulatory fines
  • Contractual penalties or loss of contractual bonuses
  • Customer dissatisfaction or defection
  • Delay of new business plans

These impacts apply to corporate businesses and, in similar ways, to public services.[10] A minimal scoring sketch follows.
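As a minimal sketch (the processes, categories and weights are illustrative assumptions, not a standard), such impacts can be turned into a simple per-process score to rank recovery priorities:

```python
# Illustrative business-impact scoring; processes, categories and weights are made up.
impact_weights = {
    "lost_sales": 5,
    "increased_expenses": 3,
    "regulatory_fines": 4,
    "customer_defection": 4,
}

# Impact ratings per business process (0 = none .. 3 = severe), made-up example data.
processes = {
    "web shop checkout": {"lost_sales": 3, "increased_expenses": 1, "regulatory_fines": 0, "customer_defection": 3},
    "payroll": {"lost_sales": 0, "increased_expenses": 2, "regulatory_fines": 3, "customer_defection": 0},
}

def score(ratings: dict) -> int:
    return sum(impact_weights[category] * rating for category, rating in ratings.items())

# Rank processes by impact score; the highest scores get the shortest recovery targets.
for name, ratings in sorted(processes.items(), key=lambda item: score(item[1]), reverse=True):
    print(f"{name}: impact score {score(ratings)}")
```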

 

Testing and Maintenance

The dates of testing, the disaster recovery scenarios, and the plans for each scenario should be documented. Maintenance involves keeping records of scheduled reviews on a daily, weekly, monthly, quarterly or yearly basis; reviewing plans, teams, activities and tasks accomplished; and a complete review and update of the documentation.

 

In case of an incident

These are the recommended steps in case any incident happens, be it a hacking attack or another malevolent cyber-incident (e.g. ransomware hitting the organization), a malfunctioning software or operating-system update, or a faulty firmware, BIOS or software patch (a minimal sketch of an incident record follows the list):

– Identification

– Containment

– Eradication (a good example of an action performed during the eradication phase is using the R&P-provided toolset, which allows for an individual, end-to-end recovery of each complete system). Professional services close the attack vectors, but at this point it is of the essence not to lose time on forensic or analytical work. If necessary, the R&P tools may clone affected systems for later analysis.

– Recovery (bring affected systems back into the production environment carefully, to ensure that this does not lead to another incident. It is essential to test, monitor and validate the systems that are being put back into production, to verify that they are not being re-infected by malware or compromised by some other means.)

– Lessons Learnt[11] (this is the documentation task everyone hates, but it is essential for future reference)
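A minimal sketch of an incident record that follows these phases and timestamps each step (the field names and example entries are illustrative, not a formal incident-response schema):

```python
# Minimal illustrative incident log following the phases above; not a formal IR tool.
from datetime import datetime, timezone

PHASES = ["identification", "containment", "eradication", "recovery", "lessons_learnt"]

class IncidentRecord:
    def __init__(self, title: str):
        self.title = title
        self.entries: list[tuple[str, str, str]] = []  # (timestamp, phase, note)

    def log(self, phase: str, note: str) -> None:
        if phase not in PHASES:
            raise ValueError(f"unknown phase: {phase}")
        self.entries.append((datetime.now(timezone.utc).isoformat(), phase, note))

# Hypothetical example entries for a ransomware case.
incident = IncidentRecord("Ransomware on file server FS-01")
incident.log("identification", "Encrypted files and ransom note reported by helpdesk")
incident.log("containment", "Affected VLAN isolated, shares set read-only")
for timestamp, phase, note in incident.entries:
    print(timestamp, phase, "-", note)
```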

 

Checklist

This checklist helps to make sure all boxes are ticked when an incident hits you:

– Stop the attack in progress.

– Cut off the attack vector.

– Assemble the response team.

– Isolate affected instances.

– Identify timeline of attack.

– Identify compromised data.

– Assess risk to other systems.

– Assess risk of re-attack.

– Apply additional mitigations, additions to monitoring, etc.

– Forensic analysis of compromised systems.

– Internal communication.

– Involve law enforcement (if you are not law enforcement yourselves).

– Reach out to external parties that may have been used as vector for attack.

– External communication.

 

Getting rid of assumptions as a winning strategy

Summarizing, here are the five major points to consider:

  1. Repeated probing and testing of IT security delivers facts and figures instead of a false feeling of safety.
  2. Generally speaking, the lead time to recovery of any of your configuration items (CIs) is the best possible recovery time. Any company can be out of business quickly if it is incapable of returning to an operational state: if Deutsche Bank is not operational for one day, it is their doomsday. Security tests will deliver unpleasant facts about IT assets formerly deemed safe. Take 20 minutes to return to normal as a goal.
  3. Companies lose customers when trust in their capabilities vanishes (e.g. through repeated outages or an inability to adapt). Public services sometimes have even more critical usage and depend on minutes. Drawing on R&P's public sector/HLS experience is not a bad idea.
  4. The shortcut to implementing disaster recovery is to build in a proper DR capability already during the early planning phase.
  5. The second-best strategy is to lose no time in reviewing the existing IT infrastructure and enhancing it by applying the R&P MPO toolset.

 

 

Roth & Partners have significant experience in the above five topics and the capability to support IT experts globally in their challenge to enhance IT security systems. Your advantage: give us a bell at one of our centers or write us a mail.

Sources:

[1] http://resources.sei.cmu.edu/asset_files/TechnicalReport/2004_005_001_14405.pdf

[2] http://www.disasterrecovery.org/

[3] http://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-34r1.pdf

[4] https://www.ready.gov/business/implementation/IT

[5] http://www.disasterrecovery.org/plan_steps.html

[6] https://www.ready.gov/business/implementation/IT

[7] http://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-34r1.pdf

[8] http://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-34r1.pdf

[9] https://www.ready.gov/business/implementation/IT

[10] https://www.ready.gov/business-impact-analysis

[11] https://www.sans.org/reading-room/whitepapers/incident/incident-handlers-handbook-33901

 


Microsoft SCCM as the ride a malicious hitchhiker would love

Using a Microsoft tool for software and patch distribution can't be wrong, can it? I have tried for a while to come up with a couple of good reasons to use SCCM, but after this mental exercise I just feel less inclined to recommend it.

The Redmond company has invested many a dollar in making randomness part of our operating systems. As a test, start off with three computers of the same build and model, identically configured, and install the OS on them at the same time, only to see that the automated (non-SCCM-driven) update process takes a different turn for each of them – a wonderful proof of the randomness with which Microsoft enhances our dull lives with the IT equivalent of a one-way, dead-end street every so often.

The same company brings us a wonderful patch- and release-management tool:

“SCCM is a platform that allows for an enterprise to package and deploy operating systems, software, and software updates. It allows for IT staff to script and push out installations to clients in an automated manner. You can kind of think of it as a customizable, scriptable WSUS.” – Matt Nelson, Veris Group

SCCM operates in a “Client-Server” architecture by installing an agent on workstations/servers that will check in periodically to its assigned site server for updates. On the backend, SCCM uses a SQL database and WMI classes to store and access various types of information. (Source)

In a perfect world, such an institution would increase the health, security and availability of our systems.

As it turns out, the world we live in is not exactly what we can call perfect. Not even close. This is where the randomness becomes a bit of an additional hindrance to reaching the level of health, security and availability we desire for our systems – especially in cases where we are not dealing with a red team in a Red Team engagement, but with real predators after the organization's data. We replace the update randomness with an even more worrisome vulnerability.

In short this is the risk: “If you can gain access to SCCM, it makes for a great attack platform. It heavily integrates Windows PowerShell, has excellent network visibility, and has a number of SCCM clients as SYSTEM just waiting to execute your code as SYSTEM.”

Think about what a malicious actor in an IT service organization could do after getting access to SCCM. Think about inserting payloads. Check out PowerShell Empire for more information.

For any attacker that has gained domain admin on a network, any centralized administration software that is part of the Microsoft universe and actively participates in it is like finding the pot of gold at the end of the rainbow. An attacker can find nodes and map out the network, see where users spend most of their time and find critical assets.

So it didn't take long before someone came up with some smart ideas on how to deploy payloads and manipulate assets using SCCM. In order to interact with SCCM, you need administrative rights on the SCCM server; however, you do not need domain administrative rights on the network. SCCM is a cornerstone for an attacker who wants to stay under the radar in a Red Team situation, e.g. to persist in the network after elevated access has been achieved. PowerSCCM is one of those nifty developments Red Team members came up with (in this case, check out enigma0x3 – you might actually find more than you expected).

It takes a few steps to deploy malicious packages/scripts to clients through SCCM, and offensive manipulation/deployment is currently only supported through WMI SCCM sessions. SCCM deployments need three parts: a user/device collection of targets, a malicious application to deploy, and a deployment that binds the two together.
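On the defensive side, the same plumbing can be used for auditing: regularly reviewing recently created deployments on the site server helps spot ones nobody recognizes. A rough sketch, assuming the third-party `wmi` Python package on a Windows host with rights on the SMS provider; the server name, site code and the `SMS_ApplicationAssignment` class and field names are assumptions that should be checked against your ConfigMgr version's SDK:

```python
# Defensive sketch: list SCCM application deployments via the SMS provider over WMI.
# Assumptions (verify in your environment): site code "P01", provider host "sccm-server",
# class SMS_ApplicationAssignment and its property names.
import wmi  # third-party package: pip install wmi (Windows only)

SITE_SERVER = "sccm-server"   # hypothetical host name
SITE_CODE = "P01"             # hypothetical site code

conn = wmi.WMI(computer=SITE_SERVER, namespace=rf"root\sms\site_{SITE_CODE}")

# Review what was deployed, to which collection, when, and by whom; anything unexpected
# (unknown admin account, odd collection, off-hours creation time) deserves a closer look.
for assignment in conn.SMS_ApplicationAssignment():
    print(getattr(assignment, "AssignmentName", ""),
          getattr(assignment, "TargetCollectionID", ""),
          getattr(assignment, "CreationTime", ""),
          getattr(assignment, "LastModifiedBy", ""))
```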

As a countermeasure, the easiest option is to avoid SCCM in critical environments completely and instead opt for a tool that does not participate in the wonderful, rainbow-colored world of Microsoft for patch, release and disaster-recovery management. By doing so, you might find that you are also saving quite a bit of project-management time and cost.

E-Commerce Migration? Get ready for a headache!

Migrating your shop to a different platform? Get ready for headaches.

The net is full of complaints – always, on every topic – but when it comes to e-commerce there seems to be much more irritation than about anything else. The reason is that a full comparison is never to be found (exceptions do exist, such as http://www.ecomparo.de in Germany, though not for entirely altruistic reasons, as they play a systems integrator role).

In this overview, we will look at the issues, pitfalls and concerns when migrating an existing e-Commerce solution to a different platform.

First, let's have a look at the options to choose from. Selection criteria include, to name just a few, open source versus commercial tools and sizing; existing application landscapes may limit or predefine some decisions (e.g. order management, ERP, fulfillment or payment solutions). Your choice of geography requires capabilities in language localization and care for the legal, tax and licensing situation (e.g. in China, entering the B2C market requires a license obtained from the local authorities). The targeted market then determines your choice – omnichannel, B2B, B2C, B2B2C – or, again in China, your online-to-offline capabilities matter. It is interesting to note that the Chinese domestic market has turned digital all the way, leapfrogging quite a number of steps that Europe and the US went through (and are still enduring teething problems with).

Typical issues in different locations are VAT rules and regulations; information required by local laws, such as delivery times, shipment costs and prices (including/excluding sales taxes or VAT); T&Cs, which can vary fundamentally between locales; and support for the laws of the chosen geography, which may cause extra effort or even rule out some solutions.

If you wish to support omnichannel, omni-currency and multi-location operation, you will find yourself growing out of most community editions, and depending on what your market looks like in terms of size, you will need a professionally maintained and supported platform – either as SaaS or hosted yourself. In such a situation, Magento EE or Intershop may appear to be the systems of choice.

At the top end (both in capabilities and cost, and unfortunately also in implementation times) are systems which offer, to name just a few items, deep analytic capabilities that could be called data-mining solutions in their own right, plus price optimization, dynamic pricing across the board, and promotion and campaign optimization on a global scale, as well as complex implementations of loyalty programs integrated with those offered by complementary providers, and multi-tenant/multi-brand implementations. If this describes your requirements, you have arrived in the sweet spot of SAP Hybris, Intershop, Oracle ATG and IBM eCommerce solutions.

On the other end of the scale is an increasing number of flavors to savour:

  • XT:c
  • Oxid
  • woocommerce
  • spreecommerce (Ruby on Rails)
  • bigcartel
  • Squarespace
  • Tictail
  • Shopify
  • Supa Dupa
  • Airsquare
  • Enstore
  • Goodsie
  • Magento Go
  • Storenvy
  • Flying Cart
  • Virb
  • Jimdo (Germany, very small shops only)

and many more. The issue is that sometimes you hit on something that looks like an awesome solution, and while going at it, you realize it is a one-developer piece of software!

If you don't intend to spend too much time on IT-related issues, you might wish to slip into the open marketplaces that TMall/Taobao (China) or Amazon (elsewhere) offer, but your branding capabilities will be fairly limited. You might find the management of stock and prices limited too, but on the other hand, stocking, fulfillment, shipping and payment are taken care of easily.

There are more hosted solutions, which do give you a limited choice of individual design and branding, such as Zoho (https://www.zoho.com) and comparable solutions.

Looking at the necessary steps a migration calls for, we find a number of recurring issues which are annoying when not planned for, but can be managed easily.

First of all, during a discovery phase, identify all the requirements that will have to be covered by the new system. The most important issues are the catalogue structure, the scope of data to be migrated and order fulfilment. Now would also be a good moment, early in the process, to take care of domain name issues before doing anything else. This requires a bit of thinking, because the process should be as seamless as possible for returning customers and for the search engines. At some point the search engines should point at your new domain (which could be the old domain, but then you have to plan accordingly how and when to do the switch). A period of parallel operation is recommended. Mind that there might be issues with the security certificate that allows your customers to shop with SSL protection: it needs to be in line with the sites. If your domain name changes, this is not a big deal; Letsencrypt (https://letsencrypt.org) offers certificates for free, unlike the earlier days when you had to pay a kingdom and a half for an SSL certificate. Commercial certificates are certainly still available.

As a second step, check how the new platform integrates with your existing business processes and the application landscape you have in place. If you use third-party vendors to provide your site with essential services, make sure they support your new platform in the same (or a better) way than your previous one, and if not, replace them. A sensible design document helps you to cover these issues – or ask your systems integrator to provide you with one.

Moving the data is challenge number three, and it causes the most headaches. We find it often makes more sense to take a fresh look at things: it might be the numbering schemes for SKUs or invoices that are not in line with expectations. Especially complex combined products with options might end up as a myriad of single products instead of a nicely sorted product scheme with options to choose from.

This can mean correcting items manually. Have prices stayed correct? Sometimes rounding errors cause changes in the listed prices.
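A minimal sketch of such a consistency check, assuming SKU/price pairs can be exported from both the old and the new shop as CSV (the file and column names are assumptions):

```python
# Compare prices between old and new shop exports; file and column names are assumptions.
import csv

def load_prices(path: str) -> dict[str, float]:
    with open(path, newline="", encoding="utf-8") as fh:
        return {row["sku"]: float(row["price"]) for row in csv.DictReader(fh)}

old_prices = load_prices("old_shop_export.csv")
new_prices = load_prices("new_shop_export.csv")

for sku, old_price in old_prices.items():
    new_price = new_prices.get(sku)
    if new_price is None:
        print(f"{sku}: missing in new shop")
    elif abs(new_price - old_price) > 0.005:  # flag anything beyond rounding tolerance
        print(f"{sku}: {old_price} -> {new_price}")
```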

In territories that require different sales taxes for different classes of goods (e.g. Switzerland and Germany), the tax-class flags need checking to confirm they are set correctly. Indeed, wrongly set taxes can cost a lot of time to correct, so better look twice!

Graphical assets and their sizes need to be quality-assured so that customers are not confronted with tiny or huge images.

By all means, avoid any “404”s by checking the URLs in time. If your site has had good organic traffic before, make sure that the page URLs generated by the new platform are the same as before if you can, keep the page descriptions identical and, where necessary, install permanent redirects (301 and 302 redirects). Prepare your SEO strategy thoroughly! Many stores experience a temporary drop in organic traffic following their migration. This is usually the result of lost meta information and changes in URLs. Prepare your post-migration SEO strategy to minimize these temporary effects and, if necessary, employ third parties for this task.
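A minimal sketch of verifying redirects before go-live, assuming the third-party `requests` library and a plain text file of legacy URLs, one per line (the file name is an assumption):

```python
# Check that every legacy URL answers with a redirect that lands on a live page.
import requests  # third-party: pip install requests

with open("old_urls.txt", encoding="utf-8") as fh:  # one legacy URL per line (assumed file)
    old_urls = [line.strip() for line in fh if line.strip()]

for url in old_urls:
    resp = requests.get(url, allow_redirects=False, timeout=10)
    if resp.status_code in (301, 302):
        target = resp.headers.get("Location", "")
        final = requests.get(url, allow_redirects=True, timeout=10)
        status = "OK" if final.status_code == 200 else f"target returns {final.status_code}"
        print(f"{url} -> {target} ({status})")
    else:
        print(f"{url}: expected a redirect, got {resp.status_code}")
```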

Also make sure that the emails keep coming in regardless of which system you use. Testing the email flow should be a separate QA activity, as it can cause a lot of frustration if emails get lost.

Speaking of quality assurance, this is the most important step to take. All business transactions need to be checked thoroughly before the new site replaces the current one. There is nothing stopping you from keeping the old site up for about a month in parallel once all business transactions have been tested straight through. At some point it definitely makes sense to have a third party assess the site for all the vulnerabilities a good penetration tester can find (rinse, repeat often!). In most e-commerce setups, wherever credit cards are accepted, the PCI-DSS standard needs to be fulfilled in order to avoid hefty fines and risks later. Make sure your new platform handles this issue and is compliant. The PCI-DSS standard is now available in version 3.

The last step is the deployment; the cut-over is best done in the wee hours of the day.

After a while it might make sense to check whether the performance of the site is in line with expectations. Often, slow sites do not stem from performance issues in the hosting or the network, but from heavy use of additional scripts, bells and whistles. This is a nice performance test, allowing you to simulate the geography from which the service is accessed: http://websitetest.com/
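For a first rough check (no substitute for a geographically distributed test like the one linked above), a small sketch using the third-party `requests` library to time a few pages; the URLs are placeholders:

```python
# Rough load-time check for a handful of pages; a real test should run from several regions.
import time
import requests  # third-party: pip install requests

pages = ["https://shop.example.com/", "https://shop.example.com/category/shoes"]  # placeholders

for url in pages:
    start = time.perf_counter()
    resp = requests.get(url, timeout=30)
    elapsed = time.perf_counter() - start
    print(f"{url}: HTTP {resp.status_code}, {len(resp.content) / 1024:.0f} KiB in {elapsed:.2f} s")
```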

Some vendors offer automated tools for migration to their platform:

https://www.magentocommerce.com/magento-connect/oscommerce-migration-tool.html and

https://www.magentocommerce.com/magento-connect/cart2cart-oxid-eshop-to-magento-migration.html

http://store.shopware.com/swag00426/migration-von-anderen-shopsystemen-zu-shopware.html

are examples.


With a recent customer, we asked for some feedback on the migration process and found the following points to be of interest:

The customer's legacy platform was rigid and did not support a lot of the state-of-the-art e-commerce features. They also found a licensing issue that would have caused a budgetary change.

As the online e-commerce presence was of central importance, they defined a phased strategy for the migration from legacy to new:

– Build a framework that covers around 70% of the functions and features of the legacy store and launch it internally to collect feedback

– Improve the store and gradually route traffic from legacy to the new store

– Retire the legacy store and continuously improve the new store

The global retailer chose an “Agile” development method – this enabled them to implement the strategy mentioned above in a number of suitable sprints. It proved to be an efficient method, as a still roughly defined idea could be converted into a prototype quickly, then fine-tuned and improved. This method dramatically shortened the turnaround time of feature delivery. In fact, it happened several times that required features were delivered earlier than scheduled.

Choosing a strong and capable team with a “can do” attitude was another key success factor in ensuring the project could be delivered on time and with high quality. Shinetech was chosen and is proud of this project, the trust of the customer and the project's success.

Strong support from management gave the project team guidance and the necessary backing to ensure there was no deviation from the business goals. In a global organization, communication among the different levels and stakeholders is key; seamless and timely communication by all means available (emails, meetings, demos, newsletters etc.) ensured a smooth project implementation without surprises.

Last but not least, the chosen platform proved to be flexible and robust software that eased development complexity and enables future scalability along with business growth.

Management buy-in is central in e-commerce migration projects, and in our weekly management meetings communication was open; critical and helpful feedback was given at all times and by stakeholders all across Asia. As a result, the system implementation was fast: the migration project finished after four months, with extremely good system stability. No critical issue has been faced since launch, and implementation and maintenance costs are reasonable.

This article has been published by my partner Shinetech here. Please give the site a visit!

Big Data, IoT and Healthcare: Part II

In the first part of this blog series, I covered how the healthcare industry can benefit from IoT and Big Data. In this week’s post I’ll take a more in-depth look into the potential security issues associated with Big Data and what the future holds for healthcare IoT.

From cars and hotels to consumer goods like lightbulbs and watches, there is a growing network of everyday objects connected to the Internet. These sensors and devices generate nonstop streams of data that can improve personal and professional lives in many ways. However, with billions of data-generating devices being used and installed around the globe, privacy and data protection are becoming a growing concern.

Recent attacks by cybercriminals within healthcare sectors demonstrate that companies cannot ignore potential threats in their design or decision making processes. Just last year, as many as 80 million customers of Anthem, the United States’ second-largest health insurance company, had their account information stolen. The hackers gained access to Anthem’s computer system and stole names, birthdays, medical IDs, Social Security numbers, street addresses, e-mail addresses and more. This was the second major breach for the company.

It’s evident that there is a growing need to find a way to effectively manage privacy and security in Big Data. While there are innovative and accessible analyst tools such as MS Polybase and Impala, one of the challenges is hiring and retaining qualified data analysts.  Another challenge is the exponential growth of Big Data and the minimal structure, lack of standardization and lack of willingness to standardize.

So how do we address these challenges?  Businesses across all industries need an extensive platform that can manage both structured and unstructured data with security, consistency and credibility. A great example, and unexpected entrant into this niche market, is the SQL and Hadoop data warehouses offered by Microsoft.  These systems double-check validity, handle all types of data, and scale from terabytes to petabytes with real-time performance.

According to a new report, by 2020, the healthcare IoT market will reach $117 billion. Based on this report, one thing is clear: Aging and remote healthcare is going to be a demographic necessity rather than a mere opportunity. An example of where IoT/Big Data is making a difference is the innovative combination of connected healthcare devices and data sciences, such as fall detection alarms in elderly and home care situations.
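As a toy illustration of the data-science side of such a fall alarm (a naive threshold detector, not a clinically validated method; the thresholds and sample values are made up), a fall often shows up in accelerometer data as a spike followed by very little movement:

```python
# Naive fall-detection sketch on accelerometer magnitudes (in g); thresholds and data are made up.
def detect_fall(magnitudes: list[float], spike: float = 2.5, still: float = 1.1, window: int = 5) -> bool:
    """Flag a spike above `spike` g followed by a near-still window (all samples below `still` g)."""
    for i, value in enumerate(magnitudes):
        after = magnitudes[i + 1 : i + 1 + window]
        if value >= spike and len(after) == window and all(a <= still for a in after):
            return True
    return False

samples = [1.0, 1.0, 3.1, 0.4, 1.0, 1.0, 1.0, 1.0]  # fabricated once-per-second readings
print("possible fall detected" if detect_fall(samples) else "no fall detected")
```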

As this trend continues, different sectors of the industry will merge and work together in order to deliver advanced digital solutions and embedded devices to the healthcare industry. As Big Data within healthcare IoT continues to grow, it will also lead to more and more job opportunities for developers with a knack for data science, data mining and/or data warehousing. With this need comes an opportunity for new businesses to emerge.

IoT and Big Data represent part of the fourth industrial revolution: The first being steam and mechanical production, the second division of labor, the third electronics and IT and the fourth being interconnected cyber-physical systems.

What does this mean for the healthcare sector? Recently, a company equipped its employees with Fitbit wearables and mined the data delivered by the devices. From this experiment, they learned it was possible to reduce insurance premiums by $300k. By using predictive data from sensors and interconnected devices, GPs, hospitals, national health services and the pharmaceutical industry can create meaningful programs that shape the way patients are treated for years to come.

First published here: http://www.shinetechchina.com/blog/articles/big-data-iot-and-healthcare-part-ii

Why we endorse Antsle

Once in a while, things happen that have incredible potential, and a couple of months down the road, everyone thinks “I could have had this idea!”. Antsle is one of them.

Looking like a smallish box, it is actually a blue ocean. It is aimed at the creative maker-power of developers, not at end users in the first instance (in spite of the fact that it is easy enough to satisfy those, too). Don't confuse it with a storage device carrying the misleading name of “cloud”; this is something from a different galaxy.

Just thinking about the endless opportunities this sweet little box brings – it serves as your home-based internet node and server – makes my mouth water. A couple of them come to mind immediately:

Create a firewall for your IoT device collection

With the advent of all kinds of “smart” things populating our living rooms, from wearables to big-brother TV sets, we are all fair game for the predator-like data collectors. Why not have a control unit developed which sits in your own cloud server at home, in your basement or attic, and acts as a firewall for the IoT units that might otherwise send out tons of things you don't want them to?

Hey, if all the flurry.coms of this world permanently want your information, why the heck don't they pay you for your data? They could cover every other installment for the new TV set, couldn't they?

At least while your mobile is logged into your home network, you might cut the wire to these data-sucking hoovers that couldn't give a toss about your privacy.

Do your own home-automation locally without sharing stuff

A creative developer could come to the conclusion that home automation is a great idea, and keeping an eye on the home via a security cam as well, without wishing to “share” all this information with potential hackers, the manufacturers, Apple, Samsung, Sony and all kinds of interested third parties, including, of course, potential burglars.

There are pretty clever approaches on the market, already enabled and integrated in a lot of connected devices.

However, all this assumes a limitless level of trust between the user and the manufacturers, as well as the cloud operators involved.

As long as the Agile development cycle is not yet the standard, this trust is not justified at all. We live in a world where the “Hello Barbie” might silently share kids' secrets with third parties, without parents having control over what leaves the home.

So why not put the gearbox for your home automation and connected life into your own hands? For a maker and creator, the Antsle is a wonderful platform.

These are just some examples of ideas that come to mind immediately between day and dream.

Not a server, but an ocean

What dreams may come, when the creative folks find this platform and make innovative use of it?

The antsle has the capability to be the gearbox between the SOHO office or home and the world, bypassing the vampires and vultures of the internet and sharing only what was meant to be shared, in a conscious decision process, keeping earlier decisions in store and making sure the home or office is connected – not as an open system, but as a controlled perimeter.

The use cases are myriad. Makers can count on many things which are easy to plug their ideas into and easy to realize. So…..

Antsle is a platform that supports all kinds of small applications running independently from each other in a server-type environment, in a cool aluminium box without noise. It is the basis for developing loads of things that can be marketed in an ecosystem.

A good example of this kind of ecosystem is Sonos. They developed a meshed-WiFi music system that has since grown into an impressive ecosystem, thanks to its quality and absolute ease of use and administration. They have loyal followers (like the early Apple users) due to a strong commitment to backwards compatibility and superior build quality. This is the basis for makers. This is the mindset of creators.

What’s missing?

Of course, there are always compromises when starting something new. It would be terrific if there were a forum or platform where like-minded developers could meet around the antsle and share ideas, get together, inspire each other, share solutions to puzzling questions and exchange open source code.

This could easily be created on one of the new antsles, but of course, it's not there yet.

Also, while the antsle smart OS is a great idea and has a lot going for it, I would simply have loved to see the concept of Qubes OS taken into the commercial world. Joanna Rutkowska deserves to be lauded for an innovative idea, and it could basically be the fifth gear for a security antsle in the future… Folks, how about this for the next release? Maybe someone at antsle is interested in exploring this…

On the other hand, the OS choice is with the creators, and it makes a lot of sense in many respects – the team at antsle have researched the necessities down to the question of which RAM to use (only ECC will do) and how to assure data integrity and safety (by mirroring SSDs). So they have probably spent months analysing what the best basis would be. But we do have a lot of sympathy for Joanna.

On December 15th, crowdfunding for antsle will start. We will order our first antsle right away and contribute to the crowdfunding. We hope the spirit of innovation will prove inspiring for the developer community and are eager to see the next news about it.

(Antsle is THE solution for Autonomous Web Hosting! Own your data, run hundreds of VM’s – all in one powerful and 100% silent home-based server.)

Go see: Autonomous Web Hosting – antsle.com and get it while they are hot and new! Be one of the first and be one of the early flock!

Offshoring for Innovators

Gone are the days when software development was done only in Palo Alto or by corporate America within the glass towers of companies such as Sun and Oracle. With the right offshore outsourcing partner, any innovator with a market-worthy software idea can be successful.

Case in point: more than 50 percent of the companies showing interest in working with our partner Shinetech (check them out – Shinetech is one of the fastest growing private companies, according to the Inc. 500|5000 list: www.shinetechchina.com) are small and midsized innovators that have disruptive software ideas. By outsourcing their development to companies such as Shinetech they can develop those ideas quickly and affordably. They don't have to be experts in software development, IT platforms, security, login procedures, HTML or CSS. They simply need good outsourcers who can provide this expertise for them.

Engagement simplicity

Experience shows a need for a unique approach to outsourcing for innovators. We provide innovators with an engagement manager who has vertical industry knowledge with an offshore development team that knows how to put the requirements into action.

With our service, innovators can structure and prioritize their requirements in their natural language with an engagement manager who then passes the information to the offshore team. Since the engagement manager works with both teams, he acts as scrum master for the offshore team and the owner of the start-up product.

Agile efficiency

The Agile process that we employ makes developing software fast and easy. The behavior-driven development process enables quick deployment of new applications even before all the imagined functionalities are implemented. It also improves efficiency by eliminating time needed for reworks.

In our Agile approach, we cut down on the time it takes for specification writing and quality assurance cycles by using tools such as Gherkin, Cucumber and Specflow, in conjunction with Ruby or .NET.

Development affordability

We do more than most offshore developers to help young innovators get their software ideas to market. Our money-back guarantee and free trial option keeps it affordable and minimizes their financial exposure.

And we employ a wealth of open source elements and ready-made development tools in order to control costs and speed the process. This ensures that innovative ideas can be realized much more quickly than initially imagined—and it’s how we have helped many innovators achieve success in their start-up endeavors.

Software development has indeed broken free from the laboratories of large corporations. With the affordable, accessible software development skills available, anyone who has an innovative software idea can succeed.

LTE for Critical Communications – not yet there.

The 3G4G Blog: LTE vs TETRA for Critical Communications.

 

Great opinion-piece to read by knowledgeable people.

When dealing with mission-critical communications, regardless of whether in the public or private sector (think remote healthcare for voice and short data), LTE is not yet where it needs to be.

 
