So, if you are like me and dealing with a very large environment, you probably feel like you just finished patching with the January Critical Patch Update. It is April already, and the April CPU was released last week. However, since we all have our plan and process in place, it is a piece of cake, right? OK, so we might not all have a complete process in place, and some of this feels like we are just constantly patching databases, but maintaining a secure environment is important.
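One quick way to confirm which CPU a database has already received is to look at the patch history the database records about itself. Here is a minimal sketch against the standard DBA_REGISTRY_HISTORY view (the exact columns returned vary a bit by version, and on some older releases the view can be empty even though the binary patch is applied):

    -- Sketch: list the patch/CPU actions recorded in this database.
    SELECT action_time,
           action,
           namespace,
           version,
           comments
      FROM dba_registry_history
     ORDER BY action_time;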
In reviewing the release notes, there are some important patches to apply; there are new exploits on the database side. The affected components are listed in the documentation as well, which allows testing and validation to focus on those areas without having to worry about the rest. This is also where installing only the Oracle components you actually use pays off: the patches can still be applied, but testing becomes much simpler when only one or two of the installed components are affected.
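To narrow the testing scope, it helps to check which components are actually installed in a given database and compare that against the affected-components list in the CPU documentation. A simple sketch using the standard DBA_REGISTRY view:

    -- Sketch: show which database components are installed and their status,
    -- so CPU testing can focus on the ones that are both installed and affected.
    SELECT comp_name, version, status
      FROM dba_registry
     ORDER BY comp_name;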
Having a policy from the security team in place has really helped with the deployment of patching. It isn't just the DBAs saying we need to patch; the overall security policy requires it, which brings additional support for testing and for getting the needed downtime windows. An overall security patching policy also helps coordinate the different levels of patching, from the OS up through the application layers. Any application team that cannot allow the patching then has to file an exception, which pushes back on the vendors of those applications and, I believe, gets them working on standards around patching and security fixes. I think that would even help the overall security posture of these systems.
So, policies, processes, and patching are all good things for those of us supporting these important business applications and environments.
Wednesday, April 15, 2009
Monday, April 13, 2009
Backup Strategies
I really should say recovery strategies instead of backup strategies. Every time I set up a new database or learn what an application really does, in the back of my mind I am wondering: if something were to happen to this database, is the current recovery strategy going to work? Sure, I can use RMAN and even exports to take backups of the system. I can also verify that backups run every night and the tapes are good, but will the application be in a state that I can recover, and is it really going to be as simple as "recover database"?
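Verifying that backups exist is one thing; verifying they could actually be restored is another. As a rough sketch, RMAN can check both without touching the running database (channel and destination settings are environment-specific and omitted here):

    # Sketch of an RMAN command-file: confirm the catalog matches what is on disk/tape,
    # see what backups exist, and test-read the backup pieces without restoring anything.
    CROSSCHECK BACKUP;
    LIST BACKUP SUMMARY;
    RESTORE DATABASE VALIDATE;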
In moving to an even more highly available system with RAC, I wonder if backup strategies are considered less important because you can fail over to another node. But there are so many other things that can go wrong. What if a security patch isn't applied correctly, or a hotfix for the application is rolled out and leaves a table incorrect as a result? Or even better, because you and I know that applications have places for ad-hoc queries, someone runs an update or changes a table structure; what is going to be the best way to recover now?
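For the bad hotfix or ad-hoc update case, flashback features are often the fastest path back, provided undo retention covers the window. A hypothetical sketch (the ORDERS table and the 30-minute window are just examples):

    -- Sketch: see what the data looked like before the bad change (flashback query).
    SELECT *
      FROM orders AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '30' MINUTE);

    -- Sketch: rewind the whole table; requires row movement and enough undo
    -- to cover the 30-minute window.
    ALTER TABLE orders ENABLE ROW MOVEMENT;
    FLASHBACK TABLE orders TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '30' MINUTE);

    -- A dropped table can sometimes come back from the recycle bin.
    FLASHBACK TABLE orders TO BEFORE DROP;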
I think the best-thought-out backup strategies are the ones that include these considerations. Thinking about the end result of actually recovering a database gives insight into what needs to be backed up and how frequently, as well as an understanding of which pieces are the most important and the most customized. In a large environment it is very difficult to implement several different strategies, but at least consider: if I have RMAN, flashback, and exports implemented, which one am I going to use first to recover? Can I just flash back a query or a table, and how big does that flashback area really need to be to let me get the data back quickly? Import might take too long to run, but can I use that information in a test database to reconstruct what is needed without having the production system down? With the high availability, can I fail over quickly, or do I have a place to run a restore from RMAN in a real disaster?
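On the sizing question, the database itself reports how the flash recovery area is being consumed, which is a reasonable starting point for deciding how big it really needs to be. A sketch against the standard views (names as of 10g/11g):

    -- Sketch: how full is the flash recovery area, and what is using the space?
    SELECT name, space_limit, space_used, space_reclaimable
      FROM v$recovery_file_dest;

    SELECT file_type, percent_space_used, percent_space_reclaimable
      FROM v$flash_recovery_area_usage;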
So, think recovery and think about what is in place to restore a database. If you want to have more discussions about this, join me at Collaborate09 - IOUG Forum, which will be a great place to discuss recovery techniques as well as learn other things near and dear to Oracle technology professionals.