DBAss Indian FTrash CEO Fingers IBM!

makapaaa · Alfrescian (Inf) · Joined: Jul 24, 2008 · Messages: 33,627
Published July 14, 2010

DBS blames IBM, braces for backlash from MAS
Botched hardware repair job caused outage; MAS to look at action against bank

By WINSTON CHAI

(SINGAPORE) DBS Bank has squarely blamed a botched hardware repair job for causing last week's widespread service outage. But the explanation may not shield the bank from a backlash, with Singapore's financial regulator saying that it would look at what action to take.

The Monetary Authority of Singapore (MAS) will review DBS's investigation report on the incident and assess 'the extent to which the bank has failed to meet the recommended standards set out in the Internet Banking and Technology Risk Management Guidelines before determining the appropriate regulatory action to take'.
While DBS Group chief executive Piyush Gupta again appeared to lay the blame for the outage on IBM - which handles its network and mainframe functions - DBS may still have to take the rap.
'A bank's responsibilities and accountabilities are not diminished or relieved by outsourcing its operations to third parties or joint-venture partners,' the MAS guidelines clearly state.
Even so, Mr Gupta tried to shed some light yesterday on the mystery behind the malfunction on July 5 that hit the bank's ATMs, Nets and credit card payment services, and online banking for more than seven hours.
In a letter to the bank's customers yesterday, Mr Gupta said: 'The outage last week was triggered during a routine repair job on a component within the disk storage sub-system connected to our mainframe.'
This confirms a BT report last Friday that a mainframe-related glitch crippled key DBS services.
Mr Gupta says IBM made a 'procedural error' while replacing a defective storage component at 3am on a Monday, the off-peak period when maintenance is usually carried out.
The company's service crew relied on outdated procedures to carry out the repairs, he says.
DBS outsourced its network and mainframe functions to IBM under a 10-year, $1.2 billion deal in 2002.
Due to an oversight, what would typically have been a routine replacement eventually escalated to become a complete systems outage, Mr Gupta explains in his letter to customers.
'On hindsight, our internal escalation process could have been more immediate. We could also have done more to mobilise broadcast channels to inform customers of the disruption in services first thing in the morning.'
Questions remain as to why DBS's system redundancy and other fail-safe mechanisms apparently did not kick in to cushion the fallout.
Large corporations - especially banks and government departments - typically design their key technology systems with redundancy features to spread the workload to other machines in case one fails. Some servers and mainframes even boast 'hot swap' components that can be replaced on the fly without having to restart machines.
These features are usually complemented by a disaster recovery plan to ensure important data can be recovered and operations can resume in the event of an emergency.
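(As an aside from the report: a minimal, hypothetical sketch in Python of the failover idea described above. The node names, polling interval and thresholds are illustrative assumptions only, not DBS's or IBM's actual design.)

# Minimal, hypothetical sketch of the redundancy idea described above:
# a workload is served by a primary node, with a health check that fails
# over to a standby node instead of taking the whole service down.
# Names, timings and thresholds are illustrative assumptions only.

import time

class StorageNode:
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def check_health(self):
        # A real system would probe the hardware/firmware; here we just
        # report the flag set on the object.
        return self.healthy

def serve_with_failover(primary, standby, poll_seconds=5, max_polls=3):
    """Return the node that should serve traffic, preferring the primary."""
    for _ in range(max_polls):
        if primary.check_health():
            return primary          # normal path: primary keeps serving
        time.sleep(poll_seconds)    # brief grace period before failing over
    # Primary stayed unhealthy: shift the workload to the standby node
    # rather than halting service (the "redundancy" the article refers to).
    return standby

if __name__ == "__main__":
    primary = StorageNode("disk-subsystem-A")
    standby = StorageNode("disk-subsystem-B")
    primary.healthy = False         # simulate the failed component
    active = serve_with_failover(primary, standby, poll_seconds=0)
    print(f"Traffic now served by: {active.name}")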
According to Mr Gupta, the bank's disaster recovery command centre was activated at 6.30am after several hiccups in the repair process were experienced.
'By 8.30am, we knew that our branch and ATM systems could be restored from 10am onwards and therefore did not need to invoke full disaster recovery measures,' his letter says. 'All other services were progressively restored through the morning and virtually everything was back on track by lunchtime.'
DBS was instructed by MAS to explain the outage to consumers and outline the action it would take to avoid a repeat.
'MAS has informed DBS Bank's senior management that we are seriously concerned with the wide disruption of the bank's services on Monday, July 5, 2010,' the authority said in a statement issued on the heels of DBS's letter to customers.
While the bank braces for a regulatory backlash, IBM could be staring at a reduction in the maintenance fees it receives from DBS.
'In mainstream outsourcing contracts, Gartner recommends that clients identify key service levels which have a material impact on the business performance of the bank,' Jim Longwood, Gartner's research vice-president for IT sourcing, told BT.
'Against this small number of service levels, we normally recommend a monthly rebate of fees of the order of 10-20 per cent for the affected service area, should the service provider fall well below the expected service level.'
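(For context, a rough back-of-envelope sketch of what that rebate band could mean against the reported $1.2 billion, 10-year contract. It assumes fees are spread evenly across months and that the affected service area accounts for an assumed share of the monthly fee; both assumptions are purely illustrative.)

# Back-of-envelope sketch of the SLA rebate range Gartner describes.
# Assumes the reported $1.2b / 10-year contract value is spread evenly
# across months, and that the affected service area accounts for an
# assumed share of the monthly fee -- both are illustrative assumptions.

def monthly_fee(total_contract_value, years):
    return total_contract_value / (years * 12)

def rebate_range(monthly_fee_for_area, low=0.10, high=0.20):
    """10-20% rebate band on the monthly fee for the affected service area."""
    return monthly_fee_for_area * low, monthly_fee_for_area * high

if __name__ == "__main__":
    fee = monthly_fee(1_200_000_000, 10)   # roughly $10m per month on average
    affected_share = 0.5                    # assumed share for storage/mainframe
    lo, hi = rebate_range(fee * affected_share)
    print(f"Average monthly fee: ${fee:,.0f}")
    print(f"Illustrative rebate band: ${lo:,.0f} - ${hi:,.0f}")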
Phil Hassey, a vice-president with Springboard Research, said: 'They (penalties) can vary from a literal slap on the wrist to contract termination, depending upon the intensity and impact of the SLA (service level agreement) breach or violation.'
He added: 'The focus in most mature relationships is on the cure rather than the cause . . . Having said that, in the case of DBS, the breach is significant, and IBM is the primary service provider in this instance.'

 
Published July 14, 2010

MAS response

THE following is the response from MAS to DBS chief executive's letter to customers:

'MAS has informed DBS Bank's senior management that we are seriously concerned with the wide disruption of the bank's services on Monday, 5 July 2010. As soon as we were aware of the problem, we instructed DBS to conduct a thorough investigation to identify the cause of the failure, as well as to ensure that adequate measures were implemented to rectify the failure and mitigate the inconvenience to customers. We have also instructed DBS to give a full account of the incident to the public, including the actions it will take to prevent future recurrence.
'MAS has noted DBS's explanation for the failure and will review its investigation report on the incident. MAS will also assess the outcome of this investigation and the extent to which the bank has failed to meet the recommended standards set out in the Internet Banking and Technology Risk Management Guidelines before determining the appropriate regulatory action to take.'

 
THIS CB Bank of POSB and DBS has failed its people by recruiting so many Indians in all its departments to replace Singaporeans, and lost so much money in the process through failed investment funds. Do not expect this to pass without repercussions.
 
Top Print Edition Stories
Published July 14, 2010

LETTER FROM DBS CHIEF EXECUTIVE
So sorry for the inconvenience - here's what happened


THE following is the text of the letter from DBS chief executive Piyush Gupta:

Dear DBS and POSB Customers,
I am writing to personally apologise to you for the inconvenience caused by the sudden disruption in our banking and ATM services from 3am to 10am, Monday, July 5, 2010. You have every right to expect uninterrupted services 24/7, 365 days a year from us, and I am sorry we have failed you on that count. This is the first time we have experienced a system outage and service disruption of such magnitude, so please allow me to explain what happened.
The outage last week was triggered during a routine repair job on a component within the disk storage sub-system connected to our mainframe. This component was emitting alert messages, indicating that there could be an intermittent problem. As our IT environment is highly resilient, and as the banking system was still fully functional, the problem was classified as 'low severity'.
A component replacement was scheduled for 3am, a quiet period, which is standard operating procedure. Unfortunately, while IBM was conducting this routine replacement, under the guidance of their Asia Pacific team, a procedural error inadvertently triggered a malfunction in the multiple layers of systems redundancies, which led to the outage. The IBM Asia Pacific team is the central support unit for all IBM storage systems in the region, irrespective of whether the installation has been outsourced or is being managed in-house.
I am treating this matter with utmost priority and the full-scale investigation that we initiated last week is still under way. This investigation is being done with the support of IBM's labs in the US and their engineering teams in Asia. So far, we understand from IBM that an outdated procedure was used to carry out the repair. In short, a procedural error in what was to have been a routine maintenance operation subsequently caused a complete system outage.
We take full responsibility for this incident. The matter is obviously of grave concern to us and we are working closely with IBM to ensure that such lapses do not recur or cause such significant impact. In fact, 12 months ago DBS commenced work on a major two-year program to further strengthen the resiliency of our system and minimise the risk of service disruptions.
Please rest assured that all payments and transactions that were scheduled to be made on July 5 were completed. Nothing was held over and full data integrity was maintained at all times. When the system was down, our priority was to minimise customer inconvenience. We:
Allowed all cheque encashments up to $500. In fact, we encashed almost 1,700 cheques worth over $500K between 8.30am and 10am;

Provided frequent situation updates to the media, posted updates on our website and also alerted our staff accordingly so that they could inform their customers; ----> REALLY???

Contacted over 10,000 customers via phone/SMS to inform them when services were restored;

Kept our branches open for an additional two hours on the evening of July 5.
IBM informed us of the system outage at 3am and a technical command function comprising DBS and IBM staff was activated by 3.40am. A restart of the systems was initiated at 5.20am. Following complications during the machine restart, at 6.30am we activated our bank-wide disaster recovery command centre. By 8.30am, we knew that our branch and ATM systems could be restored from 10am onwards and therefore did not need to invoke full disaster recovery measures. All other services were progressively restored through the morning and virtually everything was back on track by lunchtime.
On hindsight, our internal escalation process could have been more immediate. We could also have done more to mobilise broadcast channels to inform customers of the disruption in services first thing in the morning. Once again, please accept my apologies and know that we take full responsibility for this incident. There have been valuable lessons learned (---> sounds familiar???) and I assure you that this matter will continue to remain a top priority for me. My colleagues and I are doing everything we can to prevent an incident of this scale from happening again.
Yours sincerely,
Piyush
 

It is true that the fault lies at IBM's doorstep, but the bigger, grandmother of all faults lies with DBS senior management, which in its complete folly previously decided to outsource its entire IT operations to an external provider. Now, for every additional layer of backup or failsafe measure IBM has to implement, DBS will have to pay through the nose. Worse, it cannot easily migrate to another service provider without jeopardising continuity of operations!
 
Knn....typical of chao neh to point finger when things go wrong, but take credit for good results!
 
Everyone in the banking industry knows that you can NEVER say that it's a system problem or, worse, push the blame to your vendor when you reply to auditors and regulators. All new systems/upgrades are supposed to pass all UAT before signing off, and banks are supposed to monitor their systems 24/7. This is a show stopper which should never have happened. If you engage a vendor, it's your responsibility to manage them and not push the blame to them, especially when the problem occurs in your core system.

This is a typical ah neh who is trying to pull a fast one.
 
First we have the Karate Kid, next we have the Taiji Master...



Piyush Gupta
Chief Executive Officer
DBS Group Holdings & DBS Bank

Mr Piyush Gupta was appointed Chief Executive Officer of DBS Group Holdings and DBS Bank Ltd on 9 November 2009.

Prior to joining DBS, Piyush was Citigroup's Chief Executive Officer for South East Asia Pacific, covering Australia, New Zealand, Guam and the ASEAN countries - Singapore, Malaysia, Philippines, Indonesia, Thailand, Vietnam and Brunei.

Piyush began his career with Citibank in India in 1982 and over the years, has held various senior management roles across Citi's corporate and consumer banking businesses, including Chief of Staff for Asia Pacific Corporate Bank, Head of Strategic Planning for Emerging Markets and Regional Director for Global Transaction Services for Asia Pacific. He has also served as Citi's Country Officer for Indonesia, Malaysia and Singapore as well as the ASEAN Head of the Institutional Clients Group.

Piyush has served as a member of the Indonesian Government’s Debt Restructuring Committee, Chairman of the Foreign Banks' Association in Indonesia and on the Board of Kuala Lumpur Business Roundtable, as well as on the Boards of AMCHAM Malaysia and Singapore. He is a past Chairman of the Financial Services Committee of the US-ASEAN Business Council, and currently serves on the Group of Experts to the ASEAN Capital Markets Forum.

Married with two children, Piyush has a Bachelor of Arts (Honours) Degree in Economics from St Stephen's College, Delhi University, India and an MBA from IIM, Ahmedabad.
 
Everyone in the banking industry knows that you can NEVER say that it's a system problem or, worse, push the blame to your vendor when you reply to auditors and regulators. All new systems/upgrades are supposed to pass all UAT before signing off, and banks are supposed to monitor their systems 24/7. This is a show stopper which should never have happened. If you engage a vendor, it's your responsibility to manage them and not push the blame to them, especially when the problem occurs in your core system.

This is a typical ah neh who is trying to pull a fast one.

During my time, of IBM, NCR, DEC, HP... IBM stood for quality & reliability; we users always joked that NCR stands for Not Completely Reliable system...

I think IBM now stands for Indians Big Mouth... ha ha ha ha...

Dbass should dock a few million off that ah neh CEO's salary, or are they going to reward him more... like the norm here?? Make grave mistakes & still be around to collect fat pay...

Didn't know ah neh also practises Taichi, must have learnt from the best... I thought they do YOGA! ;)
 
If any of you ever go to Tampines interchange, you will see a lot of fat ass foreign trash ah neh having lunch during lunch time. All will eat a big pile of rice and curry!!! After that, go back to the office. All employed by DBAss, most probably IT dept. I was thinking, with that kind of heavy lunch, how can you have a clear mind to do the job??? Especially IT? Everyday chobo lan??? Too good life??? My friend works in IT, getting 4 to 5k a month, works so hard!!! Mind so sharp and active. But look at those ah neh, all look so retarded and slow!!! No wonder got major cock up!!! Hope DBAss kena jialat jialat from MAAss and wake up the fucking idea!!! Sack the ah neh CEO and those fat ass IT ah neh!!! Get some responsible local IT people to do the job!!!
 
If any of you ever go to Tampines interchange, you will see a lot of fat ass foreign trash ah neh having lunch during lunch time. All will eat a big pile of rice and curry!!! After that, go back to the office. All employed by DBAss, most probably IT dept. I was thinking, with that kind of heavy lunch, how can you have a clear mind to do the job??? Especially IT? Everyday chobo lan??? Too good life??? My friend works in IT, getting 4 to 5k a month, works so hard!!! Mind so sharp and active. But look at those ah neh, all look so retarded and slow!!! No wonder got major cock up!!! Hope DBAss kena jialat jialat from MAAss and wake up the fucking idea!!! Sack the ah neh CEO and those fat ass IT ah neh!!! Get some responsible local IT people to do the job!!!

Back in my days at Citi, we always looked for the Chinese IT guys as the ah nehs are hopeless. Throughout the day, you would always find ah nehs at the pantry. They take tea breaks as and when they feel like it. They will order lunch and put it on the pantry tables to chope seats. The whole farking pantry is full of curry smell and we never got to eat in the pantry. It got so bad that my boss had to build us a little pantry at a corner of our dept.

Btw, if the bank has outsourced the system to the vendors, the local IT will only be responsible for simple tasks like setting up your PCs, resetting passwords etc. They know absolutely nothing about the system and are totally at the mercy of the vendors once things screw up.
 
If any of you ever go to Tampines interchange, you will see a lot of fat ass foreign trash ah neh having lunch during lunch time. All will eat a big pile of rice and curry!!! After that, go back to the office. All employed by DBAss, most probably IT dept. I was thinking, with that kind of heavy lunch, how can you have a clear mind to do the job??? Especially IT? Everyday chobo lan??? Too good life??? My friend works in IT, getting 4 to 5k a month, works so hard!!! Mind so sharp and active. But look at those ah neh, all look so retarded and slow!!! No wonder got major cock up!!! Hope DBAss kena jialat jialat from MAAss and wake up the fucking idea!!! Sack the ah neh CEO and those fat ass IT ah neh!!! Get some responsible local IT people to do the job!!!

This is what happens when u wan to get nehs who can talk but cannot do work!
 
Back in my days at Citi, we always looked for the Chinese IT guys as the ah nehs are hopeless. Throughout the day, you would always find ah nehs at the pantry. They take tea breaks as and when they feel like it. They will order lunch and put it on the pantry tables to chope seats. The whole farking pantry is full of curry smell and we never got to eat in the pantry. It got so bad that my boss had to build us a little pantry at a corner of our dept.

Btw, if the bank has outsourced the system to the vendors, the local IT will only be responsible for simple tasks like setting up your PCs, resetting passwords etc. They know absolutely nothing about the system and are totally at the mercy of the vendors once things screw up.

Bro, thanks for the clarification. But how come DBAss purposely set up a branch within the HR dept to settle PR and relocation of staff in SG??? Of course, mostly useless and hopeless ah neh IT fellas. Why does DBAss need so many IT ah neh for???
 
........DBS will have to pay through the nose.....!

Agree with all your points except for DBS having to pay more.
It's the customers who have to pay, not DBS.
Just wait for the next round of increases in charges.

This is something uniquely Singapore - just look at the recent increase in electricity prices (due to higher admin costs at PUB), the increase in MRT/bus fares (although trains are more packed), the more flats HDB sells, the bigger its losses, etc etc.

Pay more for poorer service.

Yeah, workers have to be cheaper and faster BUT bosses get better profit...
 
The beauty of outsourcing: when a cock-up happens, point the finger at them.
 
Bro, thanks for the clarification. But how come DBAss purposely set up a branch within the HR dept to settle PR and relocation of staff in SG??? Of course, mostly useless and hopeless ah neh IT fellas. Why does DBAss need so many IT ah neh for???

For this question, you have to ask the ah neh CEO as to why he needs to import when local poly grads would have done a better job. I have been pondering over this for donkey years.........
 
This will be a classic IT management case-study in textbooks. Yes IBM fucked up the maintenance job. Then they fucked up even more when trying to repair it.

But the real blame should be on DBS management for not activating the IT disaster recovery to resume operations. The bank has probably spent millions of dollars building contingency plans. But what's the use if they don't make the timely decision to use it?

It's like carrying a condom in your wallet but not using it when you go call chicken at Geylang. After you kenna Aids, do you blame Durex?
 
Back in my days at Citi, we always looked for the Chinese IT guys as the ah nehs are hopeless. Throughout the day, you would always find ah nehs at the pantry. They take tea breaks as and when they feel like it. They will order lunch and put it on the pantry tables to chope seats. The whole farking pantry is full of curry smell and we never got to eat in the pantry. It got so bad that my boss had to build us a little pantry at a corner of our dept.

Btw, if the bank has outsourced the system to the vendors, the local IT will only be responsible for simple tasks like setting up your PCs, resetting passwords etc. They know absolutely nothing about the system and are totally at the mercy of the vendors once things screw up.

Ha ha ha ha, I used to know where the Citi pantry was when it was along Shenton Way, near the old Conference Hall. Whenever I needed a coffee along that stretch of road, instead of heading for the overhead bridge where the shops are... I'd head for the pantry... make my own coffee... with creamer & sugar... no questions asked... security was so easy...

That was before it became an ah neh bank......:D
 
This will be a classic IT management case-study in textbooks. Yes IBM fucked up the maintenance job. Then they fucked up even more when trying to repair it.

But the real blame should be on DBS management for not activating the IT disaster recovery to resume operations. The bank has probably spent millions of dollars building contingency plans. But what's the use if they don't make the timely decision to use it?

It's like carrying a condom in your wallet but not using it when you go call chicken at Geylang. After you kenna Aids, do you blame Durex?

I have been caught in the blame-the-vendor situation before, when the backup, which was supposedly checked on a Sunday when transactions are low, did not work after the system failed... I had to spend the next week clocking more than 15 hrs of work (plus OT etc) to help restore the system..

That was during the transition from in-house to outsourced & from locals to the ah nehs...
 