Oracle's Management Hosts Managing Oracle Database 12c Conference Call (Transcript)

Seeking Alpha

Oracle Corporation (ORCL) Managing Oracle Database 12c Conference Call August 22, 2013 1:00 PM ET

Executives

Scott McNeil - Principal Product Marketing Director

Mughees Minhas - Vice President, Product Management for Oracle Database Manageability

Jagan Athreya - Senior Director of Product Management

Analysts

Scott McNeil

Hello and welcome to this Oracle webcast. I am Scott McNeil. Today, we will be discussing all the new and exciting capabilities for managing Oracle Database 12c using Enterprise Manager. With the latest Database 12c, we provide a ton of new features and capabilities that not only let you get the job done faster, but also make it easier to do with less effort.

With me is Mughees Minhas, Vice President of Product Management for Oracle Database Manageability. Welcome, Mughees.

Mughees Minhas

Thank you, Scott.

Scott McNeil

In this session, we will be taking a look at some of the new features for managing database performance, consolidation and service levels, and looking at ways of delivering self-service IT through the cloud. At the end of the webcast, we will be doing a Q&A with our product experts. So if you have a question, please feel free to enter it into the console and we will try to get to as many as we can. Now, with that in mind, I would like to turn it over to Mughees, who will take us through the presentation.

Mughees Minhas

Thank you again, Scott. So as you can see, I have divided the presentation into four sections. First, we will talk about embedded management, because that continues to be a key focus area for us: making sure that manageability is embedded along with the database. Then we will talk about some of the enhancements we have in the areas of consolidation, quality of service management and self-service IT.

The reason I picked these three areas is because that is where the current trend is. There is a lot of focus on consolidation. There is a lot of focus on quality of service; it has been a focus in the past and it continues to be one. And, of course, no presentation can be complete unless we talk about how we enable our users to deploy databases in private and public clouds. We have done a number of enhancements in that area as well and I would like to outline those.

So let me begin by talking about embedded management and what we have in this area. The first thing I would like to say is that we have a new product called Oracle Enterprise Manager Database Express 12c. It's a mouthful, so for short we refer to it as EM Express. This is an integrated management tool that comes with each database. The life cycle of this product, its installation and its deployment, is completely tied to the database itself. As soon as you bring up Database 12c, you have EM Express along with it.

Some things to mention about EM Express in terms of how it's different from Database Control. Those of you who have been using Oracle Database for a while will know that, in the past, we used to have Enterprise Manager Database Control. We are replacing that with EM Express. So, a few things to note about EM Express.

First, it is completely integrated with the database itself. It is preconfigured and installed with the database, and it runs inside the database; there is no process outside the database managing it, and there are no extra middleware components. It is a web tool, a tool you bring up in the browser, but it has no external middleware components: it uses XDB inside the database to serve as the mid-tier. It supports both single instance and RAC databases, and of course Enterprise Edition databases.

It has a very small footprint, only 20 megabytes on disk. The runtime footprint, in terms of memory and CPU consumption, is zero when it is idle. So if you have EM Express running but are not using it actively, it has no runtime overhead. The only overhead you will see is on the client side; on the database side, it basically only serves SQL. So however many resources the SQL consumes to render information on your web browser, that is the overhead, but in an idle situation the overhead is zero.

As I just mentioned, all of the rendering is done in the browser, so the only impact on the server is running SQL. All the UI elements and all of that rendering are handled client-side in the browser, so they have no impact on the database server.

Now, in terms of the capabilities it supports, we have tried to be as comprehensive as we can. In this first release of EM Express we have basic support for storage management, security management and configuration management, but for performance diagnostics and tuning, pretty much everything that you could get from Enterprise Manager Cloud Control, you can get from EM Express as well.

So this is what the page looks like. We have tried to maintain, to the extent possible, similarity with EM Cloud Control. But of course, because the stack is different and it is a different product, there will be some differences. The other thing is, we have tried to make sure that the interface is very intuitive and very easy to use and navigate.

In terms of capabilities, we have configuration management, where you can manipulate any of the parameters, including the memory settings. You can also look at database feature usage. I am not sure how many people know this, but since Oracle 10g we have had the ability to see which features inside the database are being used. You can see that as well, and also what the current database properties are.

For storage, you can manage your tablespaces and the space inside them. You can manage your undo tablespaces and the settings for undo management, such as the undo retention you keep, as well as redo logs, archive logs and control files.

In the area of security, you can manage your users, your roles, your privileges. You can assign them to users and so on and so forth.

Finally, in the area of performance, we have something new called Performance Hub, which I will talk about a little later in the presentation. You can also run the SQL Tuning Advisor. Basically, most of the capabilities that you have been used to in the past with Enterprise Manager Cloud Control, or Grid Control before that, are available through EM Express as well.

Just a few words on the architecture. I mentioned some of this already, but I wanted to highlight how it is architecturally different from Cloud Control or even Database Control. As you can see, everything in EM Express runs inside the database; you see that box there with the red boundaries. The EM Express Servlet runs inside the Oracle database, and as a user makes a request, it goes through the listener and connects to the database. The request is served by the EM Express Servlet, which generates the SQL and runs it inside the database.

All the rendering is done in the browser, which you see on the right side. So all the UI elements and the graphs that show up are rendered inside the browser, and the database is not impacted by it.

The EM Express Servlet is the one additional thing you will see running inside the database, but, as I said, the overhead of this process is zero when EM Express is idle. It does the authentication and validation, serves requests by executing queries inside the database, and writes the output to the response stream, which is then shown inside the browser. So that's all I am going to say about EM Express; if you deploy Oracle Database 12c, please play with it and give us your feedback.
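The request path just described, where the servlet only authenticates and runs SQL while all rendering happens in the browser, can be sketched as a toy model. Everything here is illustrative Python, not any Oracle API; the function names, the canned query and the stub "database" are all invented for the example.

```python
import json

def run_sql(database, sql):
    """Stand-in for executing SQL inside the database (a dict lookup here)."""
    return database[sql]

def em_express_servlet(database, request):
    """Toy servlet: authenticate, run the SQL, write raw data to the response."""
    if request.get("user") != "system":          # authentication/validation
        return {"status": 403, "body": ""}
    rows = run_sql(database, request["sql"])     # the only server-side cost
    # The servlet serializes raw results; charts and UI are rendered
    # client-side in the browser, so the server does no rendering work.
    return {"status": 200, "body": json.dumps(rows)}

db = {"SELECT name FROM v$tablespace": [["SYSTEM"], ["SYSAUX"], ["USERS"]]}
resp = em_express_servlet(db, {"user": "system",
                               "sql": "SELECT name FROM v$tablespace"})
print(resp["status"])  # 200
```

The point of the sketch is the division of labor: when no request arrives, nothing runs, which is why the idle overhead described above is zero.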

Next, let's talk about consolidation. For some time now we have had full support for consolidation, and of course we have enhanced that for Database 12c. We have the ability to do consolidation planning: we introduced a feature called Consolidation Planner, or Consolidation Advisor, back in Enterprise Manager 12c, roughly a year and a half to two years ago, which helps you identify the right candidates for consolidation.

Our testing tool is Real Application Testing, which allows you to validate the recommendations coming out of the Consolidation Planner, to make sure you can consolidate all those databases onto a particular server. In order to support schema-level consolidation, we have introduced what we now call consolidated testing, where you can test whether you should do consolidation using pluggable databases or schema-level consolidation. We now support that as well, and I will talk about it in more detail.

For migrating to consolidated environments: once you have validated your consolidation plan, you actually have to move those databases into the consolidated environment. We support Oracle pluggable databases as the platform through which you can migrate all these singleton databases to the new consolidated pluggable architecture.

Finally, there is the ability to manage and tune the databases in a consolidated environment: if you are using a container database or pluggable databases, the Oracle Multitenant option, how do you do diagnostics and tuning in this new environment, and what challenges does it introduce?

So we have support for all of this. We have had the basic elements for some time, but we have made enhancements in a number of these areas with respect to Database 12c, and now I will talk about those enhancements.

The first thing I will talk about is the fact that we can manage your container databases through Enterprise Manager 12c. As you know, in the new Oracle Multitenant architecture you have a container database, and inside that you have pluggable databases. Each of these pluggable databases has its own schemas, its own users, roles and services. It has its own data dictionary and its own tablespaces.

Then you have the notion of a new kind of DBA, which we call a CDBA. This is the guy or gal who manages the entire container database and all the pluggable databases within it. You also have an additional role, new in this release, which we call a PDBA. This is the guy or gal who manages the individual pluggable databases within a container database.

So if you have an environment with a container database holding six pluggable databases, and it so happens that those six pluggable databases are managed by two or three different DBAs, then you have a new kind of role, a PDBA, who manages things inside a pluggable database and needs access and control over things within that pluggable container, but not outside of it.

So, we used to have the notion of a DBA; now, with this new architecture, we have the notion of a CDBA, which is a container database DBA, and a PDBA, which is a pluggable database DBA.

We support the migration to this new multitenant model via two methods. If you want to go from a singleton database to the Oracle Multitenant architecture, the first method is called the Plug as a PDB Method. This is the fastest way to convert your database into the pluggable architecture, and we support it in Enterprise Manager 12c. It really is as fast as copying files, because that is essentially how it works. The restriction, of course, is that you can only use the Plug as a PDB Method if the database is already on Database 12c; it only works off that release.

Now, if you have older releases, which I presume most of you do, then you have to use the Data Pump Method, and we support that as well. Or, if you have a combination, a mix of some 12c databases and some 11g databases, and you want to move them all to the pluggable architecture, you can use the Data Pump Method, because it supports not only the 12c version of the database but also version 11.2.0.3 and above.

So if you want to move to this new consolidated architecture, we give you the ability, within the product, to migrate to it using either of these methods, depending on which one is more appropriate.
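The version-based choice between the two migration methods can be summarized in a short sketch. This is illustrative only: the function is invented for the example, and the version cutoffs (12c for Plug as a PDB, 11.2.0.3 and above for Data Pump) simply restate what the talk says.

```python
def pick_migration_method(version):
    """Choose a migration path to the multitenant architecture by source
    database version, per the rules described in the talk (illustrative)."""
    parts = tuple(int(p) for p in version.split("."))
    if parts >= (12, 1):
        return "plug as a PDB"       # fastest: essentially copying files
    if parts >= (11, 2, 0, 3):
        return "data pump"           # supported for 11.2.0.3 and above
    raise ValueError("source must be at least 11.2.0.3 to migrate directly")

print(pick_migration_method("12.1.0.1"))   # plug as a PDB
print(pick_migration_method("11.2.0.3"))   # data pump
```

A mixed estate of 12c and 11g databases would simply use Data Pump for the older ones, since it covers both version ranges.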

Now I will come back to what I talked about a couple of slides ago: we have a new role called a PDBA alongside the role of a CDBA. So, inside Oracle Enterprise Manager, what can a CDBA do and what can a PDBA do? Basically, the person who is managing a PDB, or pluggable database, has restricted access to four areas: storage management, user and role (or security) management, SQL and session management, and resource management.

Okay, so a PDBA will not be able to change system-level parameters, will not be able to do things like backup and recovery, and will not be able to do things like migration, et cetera. Those tools are only available to a CDBA. A person who has access to the whole container, the CDBA, can do everything that a PDBA does, plus all the other things you see in the grey block, like administration of the entire container, migrations, system-level management, system-level diagnostics, performance tuning, and backup and recovery. So there is a separation of duties between these roles in the product.

Next, I wanted to mention briefly that we can manage the full lifecycle of the Oracle Multitenant database, if that is the architecture you have deployed for your databases. You can migrate a regular, singleton database, which could be Database 12c, 11g or 10g, as I mentioned earlier, to the Database 12c multitenant architecture. You can create and plug new PDBs into a CDB, or unplug existing PDBs. You can clone a PDB either from a gold copy in the EM software library, or you can just point to a reference target and we can clone a PDB from that as well.

Finally, something that is not represented here but worth mentioning is that the rich configuration management capabilities of Enterprise Manager become even more relevant in the context of multitenant databases, because they can help in tracking and preventing sprawl, and in easily providing association-driven impact analysis. For example, if you want to find out which PDBs will be affected if the container database is taken down for patching, we can provide you that information.

So, we just discussed what we have for managing Oracle Multitenant. Now the question, coming back to the consolidation story, is: how do you validate your consolidation environment? As you know, in Real Application Testing we have two features: one is called SQL Performance Analyzer and one is called Database Replay.

With SQL Performance Analyzer, let's say you are going to consolidate four different databases into this new architecture: a sales database, an HR database, an ERP database and a CRM database. We allow you to capture a SQL Tuning Set from each of those databases, sales, HR, ERP and CRM, and replay them concurrently on your consolidated database. You can then figure out exactly how your SQL response times will change when you go from a non-consolidated architecture to a consolidated database environment.

If there are any regressions, any slowdowns, et cetera, SQL Performance Analyzer will identify them and allow you to fix them before you actually take your application into production on the new consolidated database. So that is how you can use SQL Performance Analyzer.
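The comparison SQL Performance Analyzer performs can be pictured with a small sketch: per-SQL response times from the source databases versus a trial run on the consolidated target, flagging anything that got meaningfully slower. This is not the SPA API; the function, the SQL ids and the 1.2x threshold are invented for illustration.

```python
def find_regressions(before, after, threshold=1.2):
    """Return ids of SQLs whose response time grew by more than threshold-x
    between the pre-consolidation capture and the consolidated trial run."""
    return sorted(sql for sql, t in before.items()
                  if after[sql] > t * threshold)

# Response times in seconds, one entry per captured SQL (made-up data).
before = {"sales_q1": 0.10, "hr_q7": 0.30, "crm_q2": 0.05}
after  = {"sales_q1": 0.11, "hr_q7": 0.90, "crm_q2": 0.05}

print(find_regressions(before, after))  # ['hr_q7']
```

In the real feature, the regressed statements would then be tuned (for example via SQL Tuning Advisor) before the application goes to production on the consolidated database.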

For Database Replay, the issue was more challenging. As you very well know, I am sure, SQL Performance Analyzer is for response time testing, whereas Database Replay is for throughput testing. So the question is: how would my throughput change if, instead of running one database at a time, I have four databases consolidated into a single container database? What will my throughput be, how many transactions can I do in a minute, and so on and so forth.

So here we have enhanced Database Replay so that you can now replay the workloads from each of those databases concurrently inside a single container database. We will capture the workloads from your sales, HR, ERP and CRM databases, and then you can replay them all at the same time. Then you know exactly what the behavior of your system will be with these four different workloads running inside your container database at the same time, because, as you well know, in the container database architecture you still have the same background processes being shared and the memory being shared. You want to understand the impact of this sharing, and this enhancement to Database Replay allows you to identify those things.

I would also like to mention that we have backported this Consolidated Database Replay feature to database releases 11.2.0.2 and 11.2.0.3. So as long as you are on database version 11.2.0.2 onwards, you can use it, and certainly, I think, it will be very handy with Database 12c.

Some other things worth mentioning with respect to Database Replay in general: we now have what I refer to as smart capacity planning. Let's say you decide to consolidate four databases into a single database, and you expect the workload of one or two of those databases to grow over time. So when you are doing testing, you not only want to know what kind of performance you will get if you consolidate a whole bunch of databases into a container database.

You also want to know, as the workload grows in these databases, whether your system can handle it. If you are interested in doing capacity planning, how do you do that? We support different models for this. The first thing we support is what we call time shifting. Here we can take multiple workloads, once again using the same examples of sales, HR and ERP, and align their peaks together. Now you will know what your system behavior will be if, after consolidation, all your applications reach their peaks at the same time. This is one way of doing capacity planning: you get an understanding of the application and database behavior under maximum load.
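The time shifting idea can be sketched with toy numbers: rotate each captured per-interval load series so every workload's peak lands in the same interval, then sum to see the combined load at simultaneous peak. This is an illustration of the concept, not Real Application Testing itself; the workloads and interval counts are made up.

```python
def shift_to_peak(load, peak_slot):
    """Rotate a per-interval load series so its maximum lands at peak_slot."""
    k = (load.index(max(load)) - peak_slot) % len(load)
    return load[k:] + load[:k]

# Active sessions per interval for three captured workloads (made-up data).
workloads = {
    "sales": [1, 9, 2, 2],   # peaks in slot 1
    "hr":    [6, 1, 1, 1],   # peaks in slot 0
    "erp":   [2, 2, 2, 8],   # peaks in slot 3
}

aligned = [shift_to_peak(w, 0) for w in workloads.values()]
combined = [sum(col) for col in zip(*aligned)]
print(combined)  # [23, 5, 5, 4]: slot 0 now carries all three peaks (9+6+8)
```

Replaying the shifted workloads together then exercises the consolidated system at its worst-case combined peak.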

The next thing we have done is what we call workload folding. Workload folding is used to test what would happen to your system if your workload doubles. The way it works, and this is what I am trying to show on the slide, is that we take a particular workload and slice it up. Let's say you want to double it: you slice it into two parts and then fold those parts together, so we replay both parts at the same time.

So if you were running, let's say, 100,000 calls in the entire capture, we are going to split this up: the first part will replay 50,000 calls, and the second part of 50,000 calls will be replayed along with the first. If you had run 100,000 calls in an hour, now you will run 100,000 calls in half an hour. So we have doubled the workload, because you have doubled your throughput.

You could also split the capture into four parts, and then you would be testing for a 4x workload, or 8x, whatever the number might be. So workload folding allows you to test what would happen if your workload doubles or triples or more: you just split the capture into as many parts as you need and replay those pieces at the same time.
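The folding arithmetic above can be made concrete with a toy sketch: slice one captured workload into k parts and replay the slices concurrently, so the same number of calls runs in 1/k of the wall time, multiplying throughput by k. The call list and counts are made up for illustration; this is the folding idea, not the Database Replay tooling.

```python
def fold(calls, k):
    """Split a list of captured calls into k equal slices to replay together."""
    n = len(calls) // k
    return [calls[i * n:(i + 1) * n] for i in range(k)]

capture = list(range(100_000))          # 100,000 calls captured in one hour
slices = fold(capture, 2)               # two 50,000-call slices, replayed
                                        # concurrently: same calls, half the
                                        # wall time, so 2x throughput
print(len(slices), len(slices[0]))      # 2 50000
print(sum(len(s) for s in slices))      # 100000: no calls lost or invented
```

Folding into four parts instead of two would test a 4x workload the same way.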

The third method we support is what we call schema duplication. There are pros and cons to when you would use workload folding versus schema duplication, and this overview presentation is not the proper place for me to describe each and which is more appropriate when. But schema duplication is basically where you create another schema for, say, the same HR database, and then replay the workload against HR schema one and HR schema two at the same time. Once again, your container database is seeing an extra workload, although it is going against two different schemas. So that is another way of scaling up your workload.

So these are the different ways in which we support workload scale-up tests. Of course, the old method of simply scaling up your query workload, which has been in Real Application Testing since Oracle Database 11g, continues to be there. If you include that, you really have four ways to do capacity planning, whether in general or when you are moving to a consolidated database.

Since we are talking about testing, it is probably the right time to talk about data masking as well. We have enhanced our data masking solution, and we now support something called At-Source Data Masking. To explain what At-Source Data Masking allows you to do: in our prior releases, when doing data masking, production data was first copied to a staging or nonproduction environment for the restricted purpose of obfuscating sensitive data before sharing it with nonproduction users.

That method left sensitive data vulnerable until it was masked, because the data sat in the staging environment for some time before masking, and it also required storage resources up to as much as those of the production database. With At-Source Masking, sensitive data never leaves the production environment unmasked: as the data is being read out of production, it is masked on-the-fly, before it ever hits the disk.

This uses the Data Pump method: we read the data out of production and create Data Pump files, but as soon as the files are created, the data inside them is already masked. This gives you the most secure form of masking, because it ensures that sensitive information never leaves production unmasked. For businesses that have very strict security requirements, like banks and healthcare institutions, this is something that will be of interest.
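The at-source idea, masking applied in the export stream itself so nothing sensitive is ever written to disk unmasked, can be sketched with plain Python generators. The real feature uses Oracle Data Pump; the rows, column layout and masking function here are invented purely for illustration.

```python
import hashlib

def mask(value):
    """Deterministic, irreversible stand-in for a masking format rule."""
    return hashlib.sha256(value.encode()).hexdigest()[:10]

def read_production():
    """Stand-in for reading rows out of the production database."""
    yield ("alice", "123-45-6789")       # (name, SSN)
    yield ("bob",   "987-65-4321")

def export_masked(rows, sensitive_cols):
    """Mask sensitive columns in-stream, before anything is written out."""
    for row in rows:
        yield tuple(mask(v) if i in sensitive_cols else v
                    for i, v in enumerate(row))

# Only already-masked rows ever reach the "dump file" on disk.
dump_file = list(export_masked(read_production(), sensitive_cols={1}))
print(all("-" not in row[1] for row in dump_file))  # True: no raw SSNs wrote
```

The design point is that masking sits between the read and the write, so there is never an unmasked staging copy to protect.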

We have also done some other optimizations. We have combined our data subsetting and data masking solutions. Let's say you are interested in doing both: you want to first subset your production database, which is maybe one terabyte, down to a smaller size so that you can do some development and testing.

So you would typically subset first, and once you have done the subset, you then want to mask it, to make sure, once again, that sensitive information is not exposed to developers. In the past you had to do this in two separate steps. Now we have combined them, so you can do one step where we subset and mask at the same time. That improves your efficiency if you are planning on using both features together, and it also gives you maximum compliance with PCI as well.

Here I just want to share some benchmark information we have around data masking and subsetting. We masked a 600 billion row table in 33 minutes; I want to mention that this was done on an Exadata X2-2 Full Rack. We were also able to subset 1% of a 100 terabyte table in 6.5 hours, going down from 100 terabytes to basically just one terabyte. And we were able to do the combined subset-and-mask operation I referred to earlier on a 72 terabyte table in a little less than six hours. These examples show the performance of our data masking and subsetting solutions: you can mask and subset very large volumes of data in a short amount of time.

Okay, that brings me to the conclusion of the second section, which was consolidation. Third, we will talk about quality of service, because that continues to be a key area of interest for all our DBAs and users.

We have had what I almost jokingly refer to as the Addams Family. We have had the feature called ADDM, or "Adam" as we pronounce it, since Oracle Database 10g. ADDM does automatic performance diagnostics and makes recommendations on how you can fix problems and bottlenecks in your system. The flavor of ADDM introduced in 10g is ideal for diagnosing persistent or systemic performance issues, issues that impact the whole database. It is very good at identifying them and giving you recommendations on how to fix those particular problems. It is based on AWR data analysis and runs every hour, each time an AWR snapshot is taken. So we have had this for many years now.

Then, in Enterprise Manager 12c, we introduced an enhancement of this that we called Compare Period ADDM. This is where we help you answer questions like: why is the performance of my system slow today compared to yesterday? This flavor of ADDM does comparative performance analysis and tells you why the performance is different. Maybe you are running extra workload, or maybe you have changed some configuration parameters, or maybe you have had some execution plan changes, et cetera.

There could be a number of reasons why the performance is different, and Compare Period ADDM tells you exactly why it differs from the base period you are comparing it to. This was introduced about a year and a half to two years ago, when Enterprise Manager 12c came out.

Around the same time, we introduced another flavor of ADDM that we called Real-Time ADDM, which is particularly useful for identifying unresponsive databases. If your database is hung, or so slow that even a DBA cannot connect to it to identify the problem, Real-Time ADDM has the ability, using the EM agent, to create a diagnostic connection. The diagnostic connection is our secret sauce here: it allows us to connect to a database even while it is hung and a normal JDBC connection is impossible.

It connects in this diagnostic mode and then allows you to diagnose the problem. It will do some analysis and give you recommendations: if something is causing a hang, it will do a hang analysis, and if there are blocking sessions, it will identify those, so there are things you can do to fix the problem. It pulls that data out, presents it to you, and you can then take action. So when Real-Time ADDM was introduced back in Enterprise Manager 12c, the goal was to help you diagnose problems with unresponsive databases, databases to which you cannot even connect the normal way. Real-Time ADDM allowed you to solve that problem.

Now, with Database 12c, we have further enhanced Real-Time ADDM. The distinguishing feature of this enhanced version is that it is proactive: it is designed to identify transient and short-term problems, performance spikes, in other words. Our original ADDM was good for systemic performance analysis: if I have a problem that is impacting the whole system and persists over some period of time, ADDM will catch that.

But if you have a short spike that is only impacting maybe a few users or a few modules, your regular ADDM may not detect it, because, as far as the whole database is concerned, things are not that much different. The enhanced Real-Time ADDM is very good at these kinds of performance spikes and transient issues. It can detect them because it runs automatically every three seconds, so if the problem lasts for at least three seconds, it will catch it. If it lasts less than three seconds, then maybe you don't care about that problem.

So that is what the new Enhanced Real-Time ADDM is: proactive performance inspection and analysis. As I just mentioned, it wakes up every three seconds and looks for certain triggers to see whether there is a problem in the system. If none of those conditions are met, it goes back to sleep. But if any of its conditions are met, it is as if we are asking the database, "is there a problem?", and if the answer is yes, it runs a more detailed analysis, analyzing high CPU, I/O spikes, memory issues, interconnect problems, hangs, deadlocks, et cetera. If you are seeing a problem, you can always manually trigger this analysis as well.

These are the triggering conditions. The analysis runs when any of the set of nine conditions I have highlighted here occurs. This covers, I think, most of the triggering conditions for Real-Time ADDM, although we will continue to enhance and extend them with each release. For example, high CPU load: if the average number of active sessions is greater than three times the number of CPU cores, it will run the analysis.

Of course, we are also smart enough to know that, because it wakes up every three seconds, if the problem persists it should not keep redoing the analysis every three seconds. It has what we call flood control: if an analysis has already happened, (inaudible) it is smart enough to know that the condition is no different, so it will not run another analysis and waste valuable system resources generating a report that would look identical to the last one. So it has a flood control mechanism so that it does not just keep running analyses and generating reports for no good reason.

So it looks at high load; at whether the system is I/O bound, based on single-block read performance and its impact on active sessions; at whether the system is CPU bound; and at whether memory is over-allocated, for example if more than 95% of physical memory is in use, et cetera. Those are its triggering conditions. Now, these triggering conditions cannot be configured by the end user. They are defined by us, in product development, and they are static, although, as I said, we will continue to modify them over time. But they are not user controlled.
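The trigger-plus-flood-control behavior just described can be sketched as one inspection cycle. The thresholds (active sessions above three times the CPU cores, memory above 95%) restate the talk; the checker itself, its function name and its metric dictionary are invented for illustration and are not the ADDM implementation.

```python
def should_analyze(metrics, last_trigger):
    """One three-second inspection: return (run_analysis, trigger_fired)."""
    trigger = None
    if metrics["active_sessions"] > 3 * metrics["cpu_cores"]:
        trigger = "high load"
    elif metrics["memory_used_pct"] > 95:
        trigger = "memory over-allocated"
    if trigger is None:
        return False, None              # nothing fired: go back to sleep
    if trigger == last_trigger:
        return False, trigger           # flood control: already reported
    return True, trigger                # new condition: run deep analysis

spike = {"active_sessions": 40, "cpu_cores": 8, "memory_used_pct": 50}
run1, t1 = should_analyze(spike, last_trigger=None)   # 40 > 24: fires
run2, t2 = should_analyze(spike, last_trigger=t1)     # same condition: held
print(run1, run2)  # True False -- a persisting condition is analyzed once
```

A real implementation would also expire the flood-control state so a condition that clears and returns later gets re-analyzed.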

So that is the enhancement to Real-Time ADDM that we have done in 12c. Another thing that we have done is very interesting from the point of view of managing complex database operations. As you know, we have a feature called Real-Time SQL Monitoring, which allows you to see what is happening while a SQL statement is running. It gives you insight into a particular SQL statement: you know exactly what the SQL is doing, which part of the execution plan it is on, and how long it will take before it finishes.

What's new in Database 12c is that you can now do the same thing for composite SQL and PL/SQL. If you are interested in knowing how your entire batch job is performing, not just a particular SQL statement but the whole batch job, which would consist of many SQL statements, you can now monitor the whole thing as a single entity, what we call a database operation. So what Real-Time SQL Monitoring did for a single SQL statement, Database Operations Monitoring does for a group of SQL statements. And if you want to do session tuning, if there is a particular session you want to tune, then instead of using SQL*Trace, the antiquated old method, you can use Database Operations Monitoring.

First of all, the way it displays information is far more intuitive. It is easy to use. Secondly, it doesn't have the overhead that SQL*Trace has, because this thing is on all the time. It has minimal overhead, just like Real-Time SQL Monitoring has minimal overhead, because the underlying technology is the same.

Then we give you the ability to tag the particular session. In order for us to monitor an operation, we need to know the beginning and end point of the thing that you want to monitor, and we give you different methods by which you can tag it. You could use PL/SQL, OCI or JDBC to tag when we should start monitoring and when we should end monitoring a particular operation. By the way, Oracle Data Pump jobs are automatically monitored anytime you run them, because that's a well known operation for us.
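On the PL/SQL side, the 12c tagging entry points live in the DBMS_SQL_MONITOR package (BEGIN_OPERATION and END_OPERATION). As a language-neutral sketch of the begin/end tagging idea only, the class and method names below are illustrative, not an Oracle API:

```python
class OperationMonitor:
    """Toy model of begin/end tagging for a composite database operation."""

    def __init__(self):
        self.active = set()
        self.stats = {}  # operation name -> {sql_id: accumulated db time, seconds}

    def begin_operation(self, name):
        # Analogous in spirit to tagging the start point via PL/SQL, OCI or JDBC.
        self.active.add(name)
        self.stats.setdefault(name, {})

    def end_operation(self, name):
        # After the end tag, work is no longer attributed to this operation.
        self.active.discard(name)

    def record(self, name, sql_id, seconds):
        if name in self.active:  # only attribute work while the operation is tagged
            self.stats[name][sql_id] = self.stats[name].get(sql_id, 0.0) + seconds

    def longest_running_sql(self, name):
        return max(self.stats[name], key=self.stats[name].get)
```

Everything recorded between the begin and end tags is attributed to the named operation, which is what lets the report break the batch job down by the database time each SQL consumed.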

Here is an example of what the Database Operations Monitoring report looks like. So here you can actually see, I hope, that in the graph you see an operation; the different colors represent different SQLs, and the size of each band reflects the database time consumed by each. So you can clearly see that in this particular operation, the longest running SQL is the one in yellow, which is this particular SQL on the top right of the legend there.

So if you are interested in tuning this particular database operation, you will really focus on this particular SQL. In the actual product, you can click on this and it drills down into that actual SQL and tells you why this SQL is taking so long. It shows you the Real-Time SQL Monitoring view of it.

One thing you can see right off the bat, of course, is that the longest running SQL here, the yellow one, is serialized, whereas all the other ones seem to be parallelized; they are running, I think, at least eight-way parallel, whereas the yellow one is not. So the first question that came to my mind when I looked at this was, why is the yellow SQL not running in parallel when everything else in this particular operation was being parallelized? There could be other reasons too, but I am just giving that as an example of something where you can quickly see what the problem might be in this particular operation.

Another enhancement that's worth mentioning: we now automatically persist our Real-Time SQL Monitoring and Real-Time ADDM reports in AWR. In the past, they were in memory, so when things got flushed out, the reports were no longer there. Now we put them in AWR and they have the same retention as AWR. So you can always go back and see a problem that happened in the past.

Let's say you come to work on a Monday and somebody complained of a problem on Saturday or Sunday. You will have the Real-Time ADDM report in AWR, which you can go back to and see what happened in the system. So the goal really here is that the first time a problem happens, there is enough information stored inside the database so that when you come back a day or two or three later, you have enough information to diagnose the problem and fix it before it happens again. You should never have to say, okay, next time it happens, call me and I will trace something. We trace automatically and we store the information so it's always accessible to you.
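Giving the reports "the same retention as AWR" amounts to time-stamped storage plus retention-based pruning. A minimal sketch, assuming a single retention window (eight days mirrors AWR's default retention; the class and its interface are illustrative):

```python
class ReportStore:
    """Keep diagnostic reports for a fixed retention window, like AWR snapshots."""

    def __init__(self, retention_days=8):  # 8 days mirrors AWR's default retention
        self.retention_days = retention_days
        self.reports = []  # list of (day, report_text)

    def save(self, day, report):
        self.reports.append((day, report))

    def purge(self, today):
        # Drop anything older than the retention window, as AWR's purge job does.
        cutoff = today - self.retention_days
        self.reports = [(d, r) for d, r in self.reports if d >= cutoff]

    def reports_for(self, day):
        return [r for d, r in self.reports if d == day]
```

So a report saved on Saturday is still there on Monday, but anything older than the window is gone, which is exactly the trade-off of tying report lifetime to AWR retention.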

Database Performance Hub: this enhancement, inside EM Express, is really a different way of laying out all the different performance information. So I just mentioned a few minutes ago the flavors of ADDM: ADDM, Compare Period ADDM, Real-Time ADDM and the enhancements to it. We talked about Database Operations Monitoring. We have SQL Tuning Advisor. We have SQL Performance Analyzer. We have SQL Access Advisor. We have various performance tuning tools, and the question is when to use which one, which is the most appropriate.

So what we have done in the Performance Hub is rationalize which tools are appropriate for each kind of problem; it really is a different way of laying out the different performance solutions we have for each kind of problem to solve. So, as an end user, you will not have to decide upfront, oh, I am going to use the ADDM tool for this problem, or I am going to use Real-Time ADDM, or I am going to use SQL Monitoring.

The first thing this does is show you the data, and then, depending on the performance data and the kind of problems you see, you can drill down and use the appropriate tool for that. So Performance Hub really is a way for us to present our different performance solutions in a more coherent, more systematic way, so that you can make the right choices when you are trying to use them to identify and diagnose performance problems inside the database.

Another thing that's new in Enterprise Manager 12c that I would like to mention is the Change Activity Planner. Enterprise Manager 12c has traditionally been used for executing labor intensive tasks such as patching. But what we have noticed is that, for a large part of our enterprise customer base, these tasks happen over long periods of time and involve multiple administrators. For example, a CPU patching cycle over hundreds of databases typically runs over a few months and involves security officers, lead DBAs and application DBAs.

In order to facilitate the planning, execution and tracking of these processes, Enterprise Manager 12c introduces a new feature called the Change Activity Planner. Though initially intended for activities like patching, the Change Activity Planner workflow can be used to track any long-running activity, such as a compliance rule rollout or a major version upgrade across multiple databases. Using the Change Activity Planner, managers can create change activity plans for various projects and allocate the resources, targets and groups affected by them.

Upon activation of the plan, tasks are created and automatically assigned to individual administrators based on target ownership. So DBAs can identify their tasks and understand the context, schedules and priorities. They can complete tasks using Enterprise Manager Cloud Control automation features, such as deployment procedures, or in some cases may use manual methods outside Enterprise Manager. Upon completion, compliance is evaluated for validation and the status of the tasks and plans is updated. So that's the new Change Activity Planner.

So these are the things that you can now find in Enterprise Manager that really facilitate your management of Database 12c from a quality of service perspective. Lastly, I will talk about what we have introduced with Database 12c to enable self-service IT through cloud services.

This is something I am sure you know: if you are looking at consolidation, the maximum ROI you get is from schema level consolidation. Consolidation at the server level gives you the smallest consolidation ratio, and the maximum consolidation ratio you get is at the schema level. Now, the higher the consolidation ratio, the less isolation there is. So, as with anything else, there are pros and cons to whichever one you adopt.

But the bottom line is, what we see among our customers is that they do things at the server level, the OS level, the database level and the schema level. The good news from our side is that we support all of them. We have an Infrastructure-as-a-Service solution in Enterprise Manager that allows you to provision VMs for each database. We support shared server consolidation if you have a shared OS cluster. And if you are doing instance level consolidation, you are sharing a database ORACLE_HOME, off of which multiple instances are based. We support that too.

The newest thing is Schema-as-a-Service, where we can offer a user a new schema within an existing database on request. Of course, the further to the right you go, the higher the consolidation ratio you get.

So, as I mentioned, we have Infrastructure-as-a-Service, and within that we have Database-as-a-Service. We have Database-Instance-as-a-Service, and we also now have Schema-as-a-Service. For all of these, it is self-service IT for a private cloud, where an end user can go and request a database. Depending on how the cloud admin has configured things, the database could mean an instance, in which case the user gets an instance, or it could be defined to mean a schema, in which case the person gets a schema; or you may want to offer both, depending on the kind of use somebody has. We can set quotas and privileges on how much resource they can use, et cetera, so that people don't request things beyond what they have a quota for.

We also have metering and chargeback for all these things. So at the end of the day, you know exactly who was consuming how much compute resource on your system, because if you are running a private cloud, you really want to know who is using what. We have complete metering and chargeback capabilities to hold your IT accountable for the resources being consumed.
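Mechanically, metering and chargeback boil down to recording per-consumer usage samples and rolling them up against a rate card. A hypothetical sketch of that roll-up; the rates, metric names and consumer names are made up purely for illustration:

```python
# Hypothetical rate card: cost per unit of each metered resource.
RATES = {"cpu_hours": 0.50, "storage_gb_days": 0.01}

def chargeback(usage_records):
    """Roll up metered usage records into a per-consumer bill.

    usage_records: iterable of (consumer, metric, amount) tuples.
    """
    bills = {}
    for consumer, metric, amount in usage_records:
        bills.setdefault(consumer, 0.0)
        bills[consumer] += amount * RATES[metric]
    return bills
```

The real product meters many more dimensions, but the accountability story is the same: every unit of consumption is attributed to a named consumer and priced.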

So these are the different flavors we support in the context of Database-as-a-Service. You can get a dedicated database instance for a new project; as I mentioned earlier, we have had this for some time. What's new is that we now have something called Database Instance Cloning using Copy-on-Write technology. This is the way you want to create a database for things like functional testing, for example. Here we are able to clone a large database literally within a matter of minutes using Copy-on-Write technology.

When we make a clone, we are basically not copying the actual storage blocks. The clone still points to the same storage blocks as the original master copy, and only when your data diverges from the master is a separate copy made. So you can literally make a copy of a 100 terabyte database, and the copy will only take a few gigabytes of storage and a few minutes to provision. It's a very efficient way of creating many different copies of a database that consume a small amount of storage space, and you can do it very quickly.
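The copy-on-write mechanism described above can be sketched in a few lines: the clone shares the master's blocks and only materializes a private copy of a block on the first write to it. This is a toy model (the class and block representation are assumptions, not the actual storage implementation):

```python
class SnapClone:
    """Toy copy-on-write clone: share the master's blocks until a block is written."""

    def __init__(self, master_blocks):
        self.master = master_blocks   # shared, read-only master copy
        self.diverged = {}            # block number -> private copy, only on write

    def read(self, block_no):
        # Reads fall through to the master unless this block has diverged.
        return self.diverged.get(block_no, self.master[block_no])

    def write(self, block_no, data):
        self.diverged[block_no] = data  # first write creates the private copy

    def private_block_count(self):
        return len(self.diverged)  # storage the clone actually consumes
```

This is why a clone of a huge database costs almost nothing upfront: its storage footprint grows only with the blocks that have actually diverged from the master.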

So if your goal is to provision something for development and functional testing purposes, for example, this is a very appropriate and very useful technology. This has been introduced recently and it supports Database 12c.

We have also introduced the ability to clone a full copy of the database using an RMAN backup. This is another flavor of Database-as-a-Service that we support, and it would be more appropriate for load testing as opposed to functional testing. You don't want to do performance testing on a snap clone, or a Copy-on-Write clone, because anytime you are making a change, there is an overhead: you have to make a copy of the block and then write it. So it is relatively slow if you are going to do performance testing on a snap clone; for those kinds of purposes, you want a full clone, and an RMAN backup is more appropriate.

Lastly, Schema-as-a-Service. This is for when you are just a developer and all you care about is a particular schema; you don't want a full blown database. Now you can provision a schema just like we could provision a database instance before. It has basically the same capabilities and the interface looks exactly the same. The only difference is that at the end of this process, once the request is fulfilled, you get a schema and a connect string to connect to that schema. Our plan is that we will support pluggable databases very soon using this technology as well. So very soon, hopefully, we will be talking about Pluggable-Database-as-a-Service.

Okay, and so that brings me to the conclusion of my presentation. I went relatively quickly but I just wanted to give you a big picture of where we are with respect to Oracle Database 12c management. We have made a number of enhancements and I just wanted to talk about some of them that I thought would be of interest to you in the areas of embedded management, consolidation, quality of service, as well as in self-service IT.

So with that, I will turn it back over to my colleague, Scott McNeil.

Scott McNeil

Okay, thanks, Mughees. Great presentation. Now at this point, we would like to open it up for questions and answers. So if you have a question, please enter it into the web console and we will try to go through as many as we can.
