Kubernetes running on Google Cloud, also known as Google Container Engine (or GKE for short), is our go-to deployment architecture. We have helped ...
(Joomla / WordPress)
The huge undertaking that is Crossrail pushes the limits of engineering and technology alike, and this brings its own special set of challenges. One of these challenges was monitoring all the existing city infrastructure to detect any geographic changes caused by the tunneling operations. As the TBM (Tunnel Boring Machine) passes under the city, the earth around it moves, normally sinking by a small degree. This has to be closely monitored so that the engineers can react should there be a deviation from the expected movement. Certain areas were particularly sensitive, such as old protected buildings and other transport infrastructure.
Due to the scale of this task, an automated system was required to do most of the work. The team had installed a series of RTS (Robotic Total Stations) and attached tens of thousands of prisms. The RTS would automatically measure the distance to all the prisms and send the data back over GPRS on a predefined schedule. This meant the engineers were always being fed live, up-to-date monitoring data of the city, which could be used to calculate any movement and trigger alarms if necessary.
Crossrail had purchased software to ingest and analyze the data so that their engineers could view it on a map. This worked well until the sensor network grew to such an extent that the system could not process the amount of data it was fed in a timely manner. This meant the engineers were having to wait an increasing amount of time to be able to view the most up-to-date data. It is worth mentioning that at its peak the Crossrail sensor network was the second largest in Europe; the only larger one was at the LHC at CERN.
Working with their team of geotechnical specialists and in-house developers, we were able to make a number of major improvements to their computing infrastructure and software performance.
Crossrail was hosting all of its own servers in a data center just outside of London, originally provisioned by the initial contractor. However, much of the hardware was already at end of life with another three years of the project left, so the decision was made to upgrade. We helped design and install a brand new, highly fault-tolerant private cloud setup.
We found a number of small improvements that could be made to the database to help performance. Most importantly, we were able to build a pool of database servers to provide high resilience and scalability. We did this by offloading heavy reads to a number of database replicas.
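The read-offloading idea can be sketched as a small query router: writes go to the primary, while reads are spread round-robin across the replica pool. This is an illustrative sketch only, not Crossrail's actual code — the server names and routing rule are invented for the example.

```go
package main

import (
	"fmt"
	"strings"
	"sync/atomic"
)

// router sends writes to the primary database and spreads read
// queries round-robin across a pool of replicas. All names here
// are hypothetical placeholders.
type router struct {
	primary  string
	replicas []string
	next     uint64 // round-robin counter, safe for concurrent use
}

// target picks which server should handle the given SQL statement.
func (r *router) target(query string) string {
	q := strings.ToUpper(strings.TrimSpace(query))
	if strings.HasPrefix(q, "SELECT") {
		// Reads rotate through the replica pool.
		i := atomic.AddUint64(&r.next, 1)
		return r.replicas[int(i)%len(r.replicas)]
	}
	// Everything else (INSERT, UPDATE, DELETE) must hit the primary.
	return r.primary
}

func main() {
	r := &router{
		primary:  "db-primary",
		replicas: []string{"db-replica-1", "db-replica-2"},
	}
	fmt.Println(r.target("SELECT * FROM readings"))       // a replica
	fmt.Println(r.target("INSERT INTO readings VALUES 1")) // db-primary
}
```

In a real deployment the same split is often done at the driver or proxy layer (e.g. a MySQL proxy in front of the replicas), but the routing rule is the same.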
One of the biggest bottlenecks we found in the existing system was its single-threaded approach to importing. The import mechanism was also susceptible to being blocked by a bad data file. We created a new import system using highly optimized methods of data conversion that could be scaled horizontally, so during times of high demand new processes could be provisioned to meet it. As part of this new import system we also added full error handling should any bad files make it in.
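The shape of that import pipeline can be sketched as a worker pool: files fan out to several goroutines, and a malformed file is counted and skipped rather than blocking the whole queue. The file format ("prismID,distance" lines) and function names below are invented for illustration.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"sync"
)

// parseReading converts one line of a (hypothetical) sensor file.
// A malformed line returns an error instead of halting the import.
func parseReading(line string) (float64, error) {
	parts := strings.Split(line, ",")
	if len(parts) != 2 {
		return 0, fmt.Errorf("malformed line: %q", line)
	}
	return strconv.ParseFloat(parts[1], 64)
}

// importFiles fans files out to `workers` goroutines. Each bad file is
// counted and skipped, so one corrupt upload cannot block the queue.
func importFiles(files map[string][]string, workers int) (ok, bad int) {
	jobs := make(chan []string)
	results := make(chan bool) // true = file failed
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for lines := range jobs {
				failed := false
				for _, l := range lines {
					if _, err := parseReading(l); err != nil {
						failed = true // quarantine the file, keep going
						break
					}
				}
				results <- failed
			}
		}()
	}
	go func() {
		for _, lines := range files {
			jobs <- lines
		}
		close(jobs)
		wg.Wait()
		close(results)
	}()
	for failed := range results {
		if failed {
			bad++
		} else {
			ok++
		}
	}
	return ok, bad
}

func main() {
	files := map[string][]string{
		"good.csv": {"P001,12.5", "P002,13.1"},
		"bad.csv":  {"garbage"},
	}
	ok, bad := importFiles(files, 4)
	fmt.Println(ok, bad) // 1 1
}
```

Scaling horizontally then just means running more workers (or more whole processes) against the same queue during periods of high demand.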
After these improvements, Crossrail was able to deliver near-real-time data to their engineers. This meant that potential problems could be detected and dealt with faster, before they escalated. With the prospect of Crossrail 2, they will be able to take this optimized technology stack forward and use it straight off the bat.
The platform has seen huge success, which resulted in a number of performance issues around serving the dynamic volunteering tasks. Amnesty realized that in order to improve the performance of the platform they would have to redesign the engine that managed the tasking.
What is Microtasking: https://en.wikipedia.org/wiki/Microwork
When reviewing the issues with the Amnesty Decoder platform, it was clear that the core problem was the database being under heavy load from all the requests for assets and collection of data. They were using an off-the-shelf product that wasn't designed for such a large scale, running on a generic LAMP stack with no scaling capability.
Our solution was to take their existing components, turn them into services we could scale within the cloud and then create a bespoke API that could act as the engine to run the tasking and authentication.
Amnesty was already using Azure for deployment of their current systems, so they had the ability to scale, but didn't possess an orchestration system to scale automatically. With the help of their internal DevOps team, we took their existing components and containerized them, while adding some additional services to manage shared caches.
This then allowed Amnesty to automatically add more nodes to the cluster, scaling services like MySQL, PHP and Nginx.
It was very important that the tasking system collect and save data as quickly as possible. Our initial review of the off-the-shelf tasking system showed we would still suffer performance issues at scale even if we scaled the resources. Because of this, we created a brand new bespoke API designed to solve the problem. This new API wrapped an open-source project called Hive, which used Elasticsearch as a database, with tasking logic written in Go for speed. Requests would then pass through a PHP API, allowing the addition of authentication using OAuth 2.0.
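To give a feel for the Go side of that design, the sketch below builds the Elasticsearch request that stores one completed task. The `TaskResult` shape, index name, and URL are invented for illustration — the real Decoder schema is not shown in this case study — but the `PUT <index>/_doc/<id>` endpoint is standard Elasticsearch REST API.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// TaskResult is a hypothetical shape for one completed microtask.
type TaskResult struct {
	TaskID string `json:"task_id"`
	UserID string `json:"user_id"`
	Answer string `json:"answer"`
}

// indexRequest builds the Elasticsearch index request for a finished
// task. Keeping it a pure function makes the hot path easy to test and
// lets many goroutines submit results concurrently.
func indexRequest(esURL string, r TaskResult) (*http.Request, error) {
	body, err := json.Marshal(r)
	if err != nil {
		return nil, err
	}
	// Standard Elasticsearch document-index endpoint: PUT <index>/_doc/<id>.
	url := fmt.Sprintf("%s/task-results/_doc/%s", esURL, r.TaskID)
	req, err := http.NewRequest(http.MethodPut, url, bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	req, err := indexRequest("http://localhost:9200",
		TaskResult{TaskID: "t1", UserID: "u1", Answer: "yes"})
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL)
}
```

In the architecture described above, the OAuth 2.0 check would happen in the PHP layer before a request like this ever reaches the Go engine.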
Our changes resulted in a system that could support high numbers of requests and would scale horizontally with more nodes, giving Amnesty a solid base to continue running digital crowd volunteering into the future.
Amnesty went on to run their next campaign with 20,000 users and close to 250,000 tasks completed.
It took Team Focus many years to perfect their testing platform, through very close collaboration between their developers and psychiatrists.
The platform itself started out as a simple set of Perl scripts and grew into a fully fledged testing platform able to serve many complex tests to users around the world. However, while the system was evolving, so too was the internet and all the tools associated with it. The platform was adapted to make use of many of these industry enhancements, but doing so was getting much more difficult.
Not only was the application becoming much harder to maintain, it was also becoming more difficult to find developers with Perl experience who wanted to continue developing in it.
Our approach was to develop a new test framework based on a modern technology stack, designed from the ground up to handle the known complexities of the current system. It was built to run side by side with the existing system, allowing development to continue without impacting existing business processes.
We devised a new system broken into three key components.
The API provides a RESTful interface to existing data as well as a platform from which to build out any new data requirements. It also allows for third-party integrations. The technologies employed here were Laravel, MySQL and MongoDB.
The new Test Presenter platform gives Team Focus the ability to reliably create, update and report on new sets of high-quality, highly accessible online tests. It also provides them with a robust framework for developing new, more advanced testing scenarios.
As well as designing and developing this new platform, we also helped them move away from a dedicated server hosting solution to a fully cloud-based infrastructure, making use of Kubernetes, which allows them to scale automatically with customer demand.
Web Development Solutions.
We've helped companies around the UK increase their profitability with custom web applications. We can help you do the same.