All services are online. No open issues; the last issue was 19 hours ago on Brixly - Main Website - London, UK (Digital Ocean).
Uptime (%) / Load average (reporting window: May 20 to May 26)
Uptime Monitor
Apollo 99.948%
Alfa 99.998%
Albany 99.999%
Bravo 99.95%
Charlie 99.996%
Delta 100%
Echo 100%
Golf 100%
Hotel 100%
Indigo 99.999%
Juliet 100%
Kilo 100%
Lima 100%
Mike 99.997%
Millennium 99.996%
Zulu 100%
cPanel - UK
alfa.cloudns.io 0.24
charlie.cloudns.io 0.53
delta.cloudns.io 0.47
echo.cloudns.io 0.03
golf.cloudns.io 0.28
hotel.cloudns.io 0.31
indigo.cloudns.io 0.43
juliet.cloudns.io 0.32
kilo.cloudns.io 0.29
lima.cloudns.io 0.58
mike.cloudns.io 0.47
cPanel - USA
albany.cloudns.io 1.1
millennium.cloudns.io 0.17
Plesk - UK
zulu.cloudns.io 0.4
Elastic Cloud
phantom.cloudns.io 0.12
phantom-slave.cloudns.io 0.27
Backup Servers
backups.cloudns.io 0.16
backups01.cloudns.io 0.71
Nameservers / DNS
ns1.cloudns.io 0.02
ns2.cloudns.io 0.03
ns1.usa.cloudns.io 0.05
ns2.usa.cloudns.io 0.05
Internal
Brixly - Main Website - London, UK (Digital Ocean) 99.865%
Brixly - Client Area - London, UK (Digital Ocean) 99.963%
Legend: No/minor issues (>99.9% uptime) | Short outage (>99% uptime) | Issues reported | Major outage (<99% uptime) | High Load Average
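
For reference, the uptime thresholds in the legend translate directly into allowed downtime over the seven-day window shown above. A rough conversion (illustrative only, not part of the dashboard) looks like this:

    # Rough illustration: convert the legend's uptime thresholds into allowed
    # downtime over the 7-day window shown above (10,080 minutes).
    WINDOW_MINUTES = 7 * 24 * 60

    for uptime_pct in (99.9, 99.0):
        downtime = WINDOW_MINUTES * (1 - uptime_pct / 100)
        print(f"{uptime_pct}% uptime over 7 days allows ~{downtime:.0f} minutes of downtime")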

Incidents

May 17th, 2019

Bravo - Server Migration
Resolved

Dear Client / Reseller,

We are pleased to confirm that the migration to the new data center has now been completed.

As per our previous correspondence, there is a change of IP address...

81.19.215.13 - bravo.cloudns.io

Please see our earlier replies regarding the nameservers, which in most cases will remain the same. Full details were provided previously.

If you are using external DNS, such as Cloudflare, you must now update your records to the above IP address (a quick way to verify this is sketched after this notice).

We will keep the old server active for 24 hours, so please action the above as soon as possible to avoid interruption.

Kind regards,

Brixly Support
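
For those who would like to verify the change has taken effect, a minimal check along the lines below will confirm what the hostname currently resolves to. The hostname and new IP address are taken from this notice; everything else is illustrative.

    import socket

    # New address for bravo.cloudns.io, as stated in this notice.
    EXPECTED_IP = "81.19.215.13"
    HOSTNAME = "bravo.cloudns.io"

    resolved = socket.gethostbyname(HOSTNAME)
    if resolved == EXPECTED_IP:
        print(f"OK: {HOSTNAME} now resolves to {resolved}")
    else:
        print(f"Action needed: {HOSTNAME} resolves to {resolved}, expected {EXPECTED_IP}")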

Identified

The migration of Bravo will continue tonight, following the postponement late last week.

Confirmation will be sent out once the migration work has been completed.

Monitoring

The migration process from the old hardware to the new hardware is still underway. However, due to the extended time taken, we are looking to amend the 'switchover' process and will re-sync the contents between the servers before go-live (an illustrative sketch of such a re-sync step follows this update).

This will be ongoing until later this evening.
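
For context, a re-sync of this kind is typically a one-way copy from the old server to the new one immediately before the switchover, so both machines match at go-live. The sketch below shows one way such a step could be scripted; the hostname, path, and use of rsync over SSH are assumptions for illustration, not a description of our exact tooling.

    import subprocess

    # Illustrative placeholders only; not Brixly's actual hostnames or paths.
    OLD_HOST = "old-bravo.example.net"
    SYNC_PATH = "/home/"

    # Run on the new server: pull account data from the old server so both
    # machines match immediately before the switchover. -a preserves ownership
    # and permissions; --delete removes files that no longer exist on the source.
    subprocess.run(
        ["rsync", "-a", "--delete", f"root@{OLD_HOST}:{SYNC_PATH}", SYNC_PATH],
        check=True,
    )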

Monitoring

The migration scheduled for Bravo over to the new DC is underway; however, due to the latency between the two physical machines and the sheer size of the transfer, this is expected to take some time to complete.

Given that this is the case, we are not proceeding with the Express Transfer, and will instead await completion of the data transfer before switching DNS.

Further updates will be provided here as we progress.

It's expected that the migration of the data will be ongoing until the morning. As such, data will be re-synced post go-live.

Support / Client Area Maintenance
Resolved

We will be performing some maintenance to our client area over the next few hours. As such, downtime of the client area is expected.

Support can be raised directly by email to support@brixly.uk until the client area is back online.

May 8th, 2019

Phantom - Rebuild Required
Resolved

The Phantom server, which forms part of our Elastic Cloud, is currently offline.

Please note that, with the Elastic Cloud, sites will be loading from a slave server until the issue with Phantom is resolved.

If your DNS is hosted externally, then you most likely aren't benefiting from the failover. Instead, please ensure the following IP address is used (a quick way to check which server is responding is sketched after this notice)...

5.101.146.102

We will be rebuilding the machine entirely and restoring from the latest (hourly) backup. We will then re-sync the data from the slave server to ensure that no data is lost during this process.
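
If you would like to confirm that your site is being served from the failover server, a simple check like the one below requests the site directly from the slave address above, passing your site's hostname in the Host header. The slave IP is taken from this notice; the site hostname is a placeholder.

    import http.client

    # Failover (slave) address from this notice; the hostname is a placeholder.
    SLAVE_IP = "5.101.146.102"
    SITE_HOST = "example.com"

    # Request the site directly from the slave by IP, sending the site's hostname
    # in the Host header so the correct virtual host responds.
    conn = http.client.HTTPConnection(SLAVE_IP, 80, timeout=10)
    conn.request("GET", "/", headers={"Host": SITE_HOST})
    response = conn.getresponse()
    print(f"{SITE_HOST} via {SLAVE_IP}: HTTP {response.status}")
    conn.close()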

Zulu - Memory Resize
Resolved

Zulu is now back online.

Resolved

We have rebooted Zulu to add additional RAM; the server will be back online shortly.

Indigo - Kernel Update
Resolved

All services will be online over the next 60 seconds or so.

Monitoring

Rebooting Juliet for a kernel update

Resolved

The system is now back online.

Resolved

We're rebooting the Indigo server for essential maintenance / kernel upgrade.