
Network Issues

Issue with DAT to Evoice servers (Resolved)

Affecting Server - Evoice VoIP Service | Priority - Critical

We are experiencing issues with the hosted PBX servers.
Engineers are investigating.

UPDATE - 14:36 - Engineers have found the cause: data corruption on the HaaS environment is affecting services. Engineers are working on a fix.
More updates to come.

Update - 14:45 - The hosted PBX is now back up and fully functioning.

Date - 14/12/2018 14:28 - 08/03/2019 16:31
Last Updated - 14/12/2018 14:47

Openetworks planned network outage (Resolved)

Affecting Other - Upgrade | Priority - Critical

We would like to advise that Openetworks will be performing an upgrade in their Brisbane data centre.
This upgrade will affect all customers with services on the ONU.

During this time, customers on the Openetworks infrastructure will not be able to make or receive calls.

NOTICE issued by Openetworks

Please be advised there will be a planned network outage for approximately 7 hours from 11:00pm Wednesday November 28th until 6:00am Thursday November 29th.

This outage will affect all RSP Broadband Data and Voice services at all Queensland sites.

Furthermore, any OPENetworks provided services, e.g. Wi-Fi services, will also be affected at all sites Australia wide.

This outage is necessary to perform an upgrade to carrier equipment within the Brisbane data centre.


Please visit the Network Status page on our website to obtain the latest updates on this outage.

Apologies for any inconvenience this outage causes and thank you for your understanding.



NOC OPENetworks Pty Ltd

Date - 28/11/2018 23:00 - 07/12/2018 19:22
Last Updated - 23/11/2018 18:04

Migration to new Hardware (Stage 1 of 2) (Resolved)

Affecting Server - Evoice VoIP Service | Priority - Critical

Stage 1 of 2: migrating to a High Availability and DRaaS solution spanning two data centres (Equinix 1 and Equinix 2).


At midnight on 15/08/2018, the Evoice SIP softswitch will be migrated to new hardware with extra redundancy between EQUINIX 1 and EQUINIX 2 in case of hardware failure in one data centre.

There will be approximately 30 minutes of downtime.


Over the last two weeks we have prepared the new platform, continuously synchronising it with the old platform; the bulk of the data has already been moved.

The last step is to move the IP addresses and perform a final sync.

Customers do not need to do anything; everything should return to normal after the downtime.

Stage 2 will involve the same process for our wholesale system and is planned for one week later.

UPDATE - 00:05 Migration started

Update - 01:12 am Migration finished; all services online as normal


On 22/08/2018 at 00:01 am, our Wholesale / Main SIP softswitch will be migrated to new hardware with extra redundancy between EQUINIX 1 and EQUINIX 2 in case of hardware failure in one data centre.

There will be approximately 30 to 60 minutes of downtime.

UPDATE 21/08/2018 21:40 Stage 2 migration moved to 23/08/2018 at 00:01 am

UPDATE 23/08/2018 00:05 Migration Started

UPDATE 23/08/2018 00:35 am Migration finished; all services online as normal

Date - 15/08/2018 00:01 - 27/08/2018 18:10
Last Updated - 23/08/2018 00:57

Outage in data center (Resolved)

Affecting Server - Business Service | Priority - Critical

Current Status: Service Disruption

Started: 13/7/2018 9:39am (+1000)



Affected Infrastructure:

Components:  Cloud Services, Colocation, Dedicated Servers, Infrastructure, Managed Wordpress Hosting, Network, Service Desk, xDSL/EFM

Locations:  Brisbane, Brisbane - NextDC B1, Brisbane - NextDC B2, Brisbane - Syncom BNE, Melbourne - Equinix ME1, Melbourne - NextDC M1, Melbourne - Vocus, New Zealand, Perth - Vocus, Sydney, Sydney - Equinix SY1, Sydney - Equinix SY3, Sydney - Equinix SY4, Sydney - Global Switch, Sydney - NextDC S1, Sydney - Syncom SYD1, Sydney - Syncom SYD2, Sydney - Vocus


Update: Network engineers are currently investigating network interruptions.

Further updates will be provided when possible.

Status Page:

UPDATE 10:08

Update: Engineers have confirmed connectivity has been re-established network-wide.

A PIR for this incident will be provided ASAP.

Please contact our helpdesk if you experience any further issues with your services.

Status Page:

UPDATE 10:12

Update: Some services are still affected by this and currently being worked on.

Additional updates to come.

UPDATE 10:42

Engineers are still working on restoring services.

Another update will be provided shortly.

UPDATE 13:57

The issue was resolved by SAU at 11:39 am.
We have been monitoring since then and no issues have been reported.

UPDATE 15:57

We have just received a notification from SAU that they need to reboot certain services at 16:30.
This will take approximately 3 minutes to complete.
We are told that any remaining issues will be resolved after this reboot.

UPDATE 16:10

Update: Engineers will be implementing a fix to resolve packet loss and latency issues at 4:30 PM AEST.

Between 4:30 PM and 4:40 PM clients may experience up to 3 minutes of downtime to services while engineers apply a fix to some devices in the Sydney, Melbourne and Perth network.

Date - 13/07/2018 09:43 - 17/07/2018 12:58
Last Updated - 13/07/2018 18:09

Outbound Call Issues on one of our carriers (Resolved)

Affecting Server - Evoice VoIP Service | Priority - Critical

Outbound call issues on one of our carriers started at 12:01 pm.

We found that one of our outbound carriers was either not terminating calls at all or connecting with a huge delay. Instead of returning an error to us, the carrier signalled that the call was in progress, so our second and third carriers could not kick in.

We removed this carrier from the mix at 12:40 pm, and all outbound calls are now connecting without delay.

We have also asked this carrier for an explanation and what will be done to ensure this issue does not happen again.
If we are not satisfied with the outcome, this carrier will be dropped permanently.
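The failure mode described above — a carrier signalling "in progress" instead of an error, blocking failover — can be sketched in a few lines. This is a hypothetical model, not Evoice's actual switch logic; the carrier names and the `place_call` helper are invented for illustration:

```python
# Hypothetical sketch of timeout-based carrier failover (not Evoice's actual
# switch logic). If a carrier only ever reports "in progress" and never
# answers within the timeout, treat that the same as an error and try the
# next carrier, instead of waiting on the progress indication forever.
def place_call(carriers, timeout=10):
    """carriers: list of (name, respond) pairs, where respond(timeout)
    returns 'answered', 'error', or 'progress-only' (ringing but never
    connecting within the timeout)."""
    for name, respond in carriers:
        result = respond(timeout)
        if result == "answered":
            return name  # call connected on this carrier
        # 'error' or 'progress-only': both count as failure, so a carrier
        # that sends progress without answering cannot block the second
        # and third carriers from kicking in.
    return None  # every carrier failed

carriers = [
    ("carrier-a", lambda t: "progress-only"),  # the faulty carrier
    ("carrier-b", lambda t: "answered"),
]
print(place_call(carriers))  # → carrier-b
```

With the faulty behaviour (waiting indefinitely on "progress"), the loop would never reach carrier-b; bounding the wait is what lets the backup carriers take over.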

We apologise for this mishap.

Date - 17/05/2018 12:01 - 13/07/2018 09:44
Last Updated - 17/05/2018 13:00

Emergency VLAN setting change required (Resolved)

Affecting System - / 2 / 9 / 10 | Priority - Critical

Start time - 23/04/2018 22:00
Finish - 23/04/2018 22:30

We would like to advise everyone that the data centre hosting all our cloud services needs to perform some emergency work on our VLANs.

Due to this change there will be an outage of approximately 30 to 60 seconds.
While this is happening, no calls will be able to be made or received.

This is required because one of the senior engineers at SAU found that IPs on a particular ingress point to the network are not being correctly leaked out of the peering VRF into the switching fabric. This means those IPs are taking an older, slower path that shows packet loss under certain conditions.
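The effect of a missing VRF leak can be modelled as a route-selection problem: when the leaked route is present it wins on preference, and when it is absent traffic silently falls back to the older path. The sketch below is a toy model of that selection — the preference values, path names, and `best_path` helper are all invented, not SAU's actual configuration:

```python
# Toy model of route selection with and without a VRF route leak
# (hypothetical values; not SAU's actual routing configuration).
def best_path(routes):
    """Pick the route with the lowest (preference, metric) pair,
    mimicking how a router prefers the most trusted, cheapest path."""
    return min(routes, key=lambda r: (r["preference"], r["metric"]))

# Fast path learned via the leaked peering-VRF route, and the legacy path.
leaked = {"via": "peering-vrf-leak", "preference": 20, "metric": 5}
legacy = {"via": "legacy-core", "preference": 110, "metric": 30}

print(best_path([leaked, legacy])["via"])  # leak working: fast path wins
print(best_path([legacy])["via"])          # leak missing: traffic falls back
```

The point of the emergency change is the second case: with the leak broken, the only candidate left is the older, lossier path, which is why the fix required touching the VLAN/VRF configuration rather than any customer equipment.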

UPDATE 22:09 - Starting the changes now

Update - Work finished at 22:12

Date - 23/04/2018 22:00 - 17/05/2018 12:51
Last Updated - 24/04/2018 10:32

Maintenance Required by software vendor (Resolved)

Affecting Other - All services | Priority - Critical

Maintenance Required by software vendor

Last month there was a software patch to our cloud instances which we thought had fixed the issue; it has now been brought to our attention that another patch is needed to ensure stability is maintained.


We are now booked in for this Friday night, 9 PM to 11 PM. We finally have a solution patch from the software vendor, which has been tested thoroughly in a replicated dev cloud environment.

Again, it will take around 15 minutes from start to finish.

We do apologise for any inconvenience caused.

UPDATE 22:45 - Maintenance finished. Total downtime 10 minutes; everything is back online.

Date - 15/12/2017 22:30 - 15/12/2017 22:45
Last Updated - 16/04/2018 19:16

Emergency Maintenance on the Evoice System (Resolved)

Affecting Server - Evoice VoIP Service | Priority - Critical

There will be some interruption to the Evoice system to resolve the issue we had last Friday, 13/04, at 16:00.

Engineers will be working on the cloud for approximately 30 minutes between 21:00 and 22:00. There could be some interruption to incoming and outgoing calls, which should not last more than 10 minutes.

We apologise for the inconvenience caused.

Date - 16/04/2018 21:00 - 23/04/2018 14:52
Last Updated - 16/04/2018 19:15

Packet Loss on our links (Resolved)

Affecting Server - Business Service | Priority - Critical

We are seeing random packet loss on our links and have reported it to the data centre. They advised that they are working to mitigate the issue and eliminate the random packet loss.

More updates to come

UPDATE 16:29 - The data center is trying to advertise our IP range out of Vocus

UPDATE 12/03/2018 17:00 - The issue is now fully resolved. The problem was random packet loss on AS 45671 via Vocus, occurring at intervals of approximately 2 hours for 2-3 minutes at a time. AS 45671 has now been routed via Equinix; you will only see Vocus if your provider is peering within Brisbane, and even then no packet loss has been seen via Vocus peers in the last 24 hours.

Date - 08/03/2018 15:00 - 12/03/2018 17:00
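Intermittent loss like this (short bursts at long intervals) is easy to miss with a single ping run; it shows up when probes are grouped into fixed windows and each window's loss ratio is checked. The sketch below is a minimal illustration of that idea, not the data centre's actual monitoring tooling — the `loss_per_window` helper and the threshold are invented for the example:

```python
# Minimal sketch (not the data centre's actual tooling) of detecting bursty
# packet loss: group probe results into fixed-size windows and flag any
# window whose loss ratio exceeds a threshold.
def loss_per_window(probes, window, threshold=0.05):
    """probes: list of booleans (True = reply received).
    Returns {window_start_index: loss_ratio} for each flagged window."""
    flagged = {}
    for start in range(0, len(probes), window):
        chunk = probes[start:start + window]
        loss = chunk.count(False) / len(chunk)
        if loss > threshold:
            flagged[start] = loss
    return flagged

# 60 probes with a burst of loss in the middle, like the 2-3 minute
# bursts reported on the Vocus path.
probes = [True] * 20 + [False] * 10 + [True] * 30
print(loss_per_window(probes, window=10))  # only the lossy window is flagged
```

Averaging over the whole run would report under 17% loss and hide the burst; per-window flagging makes the 2-3 minute outages visible as individual events.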
Last Updated - 12/03/2018 17:07

Migration of Prefixes to a New Switch (Resolved)

Affecting Server - MOR | Priority - Critical

Tonight we will be moving our prefixes over to a new switch where the voice cloud servers are, and away from a core device we are looking to decommission in the future.

This will take the below prefixes offline for up to two minutes while they are dropped and picked back up in OSPF on the new switch:

Date - 05/12/2017 21:00 - 14/12/2017 15:06
Last Updated - 05/12/2017 16:15


Copyright © WorldDialPoint 2002 - All Rights Reserved
