Search Results

Search found 31038 results on 1242 pages for 'michael best'.

  • Extending Database-as-a-Service to Provision Databases with Application Data

    - by Nilesh A
    Oracle Enterprise Manager 12c Database as a Service (DBaaS) empowers Self Service/SSA users to rapidly spawn databases on demand in the cloud. The configuration and structure of a provisioned database depend on the service template the Self Service user selects when requesting the database. In EM12c, the DBaaS Self Service/SSA Administrator has the option of hosting various service templates in the service catalog, each based on an underlying DBCA template. Provisioned databases often need production-scale data for UAT, testing, or development, and managing DBCA templates that contain data can be unwieldy. So we need to populate the database using the post deployment script option, without any additional work for the SSA users. The SSA Administrator can automate this task in a few easy steps. For details on how to set up the DBaaS Self Service Portal, refer to the DBaaS Cookbook.

    In this article, I will list the steps required to enable EM 12c DBaaS to provision databases with application data in two distinct ways, using: 1) Data Pump, 2) transportable tablespaces (TTS). The steps listed below are just examples of how to extend EM 12c DBaaS; you can even plug in your own method as part of the post deployment script option.

    Using Data Pump to populate databases

    These are the steps to follow to extend DBaaS using the Data Pump methodology:

    1. The production DBA runs a Data Pump export on the production database and makes the dump file available to all the servers participating in the database zone [sample shown in Fig. 1]:

         -- Full export
         expdp FULL=y DUMPFILE=data_pump_dir:dpfull1%U.dmp,data_pump_dir:dpfull2%U.dmp PARALLEL=4 LOGFILE=data_pump_dir:dpexpfull.log JOB_NAME=dpexpfull

       Figure-1: Full export of the database using Data Pump

    2. Create a post deployment SQL script [sample shown in Fig. 2]. The SSA Administrator can either upload this script into the software library or make it available on a shared location accessible from the servers where databases are likely to be provisioned:

         -- Full import
         declare
             h1 NUMBER;
         begin
             -- Create the directory object where the source database dump is backed up.
             execute immediate 'create directory DEST_LOC as ''/scratch/nagrawal/OracleHomes/oradata/INITCHNG/datafile''';
             -- Run the import.
             h1 := dbms_datapump.open (operation => 'IMPORT', job_mode => 'FULL', job_name => 'DB_IMPORT10');
             dbms_datapump.set_parallel(handle => h1, degree => 1);
             dbms_datapump.add_file(handle => h1, filename => 'IMP_GRIDDB_FULL.LOG', directory => 'DATA_PUMP_DIR', filetype => 3);
             dbms_datapump.add_file(handle => h1, filename => 'EXP_GRIDDB_FULL_%U.DMP', directory => 'DEST_LOC', filetype => 1);
             dbms_datapump.start_job(handle => h1);
             dbms_datapump.detach(handle => h1);
         end;
         /

       Figure-2: Importing using Data Pump PL/SQL procedures

    3. Using DBCA, create a template for the production database – include all the init.ora parameters, tablespaces, datafiles and their sizes.

    4. The SSA Administrator customizes the “Create Database Deployment Procedure” and provides the DBCA template created in the previous step. In the “Additional Configuration Options” step of the Customize “Create Database Deployment Procedure” flow, provide the name of the SQL script in the Custom Script section and lock the input (shown in Fig. 3). Continue saving the deployment procedure.

       Figure-3: Using the Custom Script option for calling the import SQL

    Now, an SSA user can log in to the Self Service Portal and use the flow to provision a database that will also populate the data using the post deployment step.
    Using Transportable tablespaces to populate databases

    A copy of all user/application tablespaces enables this method of populating databases. These are the steps required to extend DBaaS using transportable tablespaces:

    1. The production DBA needs to create a backup of the tablespaces. The datafiles may need conversion [such as from big endian to little endian or vice versa] depending on the platforms of production and of the destination where DBaaS creates the test database. Here is a sample backup script that shows how to find out whether any conversion is required, describes the steps needed to convert the datafiles, and backs up the tablespaces (a minimal self-containment check is also sketched at the end of this item).

    2. The SSA Administrator should copy the database (tablespace) backup datafiles and export dumps to a backup location accessible from the hosts participating in the database zone(s).

    3. Create a post deployment SQL script. The SSA Administrator can either upload this script into the software library or make it available on a shared location accessible from the servers where databases are likely to be provisioned. Here is a sample post deployment SQL script using transportable tablespaces.

    4. Using DBCA, create a template for the production database – all the init.ora parameters should be included. NOTE: DO NOT choose to bring the tablespace data into this template, as the tablespaces will be created by the post deployment script.

    5. The SSA Administrator should customize the “Create Database Deployment Procedure” and provide the DBCA template created in the previous step. In the “Additional Configuration Options” step of the flow, provide the name of the SQL script in the Custom Script section and lock the input. Continue saving the deployment procedure.

    Now, an SSA user can log in to the Self Service Portal and use the flow to provision a database that will also populate the data using the post deployment step.

    More Information:
    Database-as-a-Service on Exadata Cloud
    Podcast on Database as a Service using Oracle Enterprise Manager 12c
    Oracle Enterprise Manager 12c Installation and Administration guide, Cloud Administration guide
    DBaaS Cookbook
    Screenwatch: Private Database Cloud: Set Up the Cloud Self-Service Portal
    Screenwatch: Private Database Cloud: Use the Cloud Self-Service Portal
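    Before taking the tablespace backups in step 1 above, the production DBA can verify that the tablespace set is self-contained. Below is a minimal sketch, assuming two hypothetical application tablespaces named APPS_DATA and APPS_IDX (substitute your own tablespace names and run with appropriate privileges):

         -- Check that the tablespace set is self-contained (no references to objects outside the set)
         EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('APPS_DATA,APPS_IDX', TRUE);
         -- Any rows returned here describe objects that would prevent the transport
         SELECT * FROM transport_set_violations;

         -- Make the tablespaces read only before copying the datafiles and exporting the metadata
         ALTER TABLESPACE apps_data READ ONLY;
         ALTER TABLESPACE apps_idx READ ONLY;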

    Read the article

  • Epsilon : An Oracle Customer Profile

    - by Anand Akela
    ZDNet published an article today based on an interview with Jeff White, vice president, technology, strategic database services at Epsilon. Jeff discussed Oracle Exadata Database Machine and Oracle Enterprise Manager with ZDNet writer Dan Kusnetzky. Read the article Epsilon: An Oracle Customer Profile. Jeff White, Epsilon VP, was honored with Oracle’s Data Warehouse Leader of the Year award for Innovative Data Warehouse Deployment of Oracle Exadata and Oracle Enterprise Manager earlier this year. In one of the videos earlier this year, Jeff mentioned that Epsilon has streamlined IT administration, monitoring, and engineered systems maintenance with Oracle Enterprise Manager. Having gained operational efficiencies, Epsilon is now providing greater efficiencies to its customers. For more information, please go to the Oracle Enterprise Manager web page.

    Read the article

  • Windows Azure Virtual Machines - Make Sure You Follow the Documentation

    - by BuckWoody
    To create a Windows Azure Infrastructure-as-a-Service Virtual Machine you have several options. You can simply select an image from a “Gallery”, which includes Windows or Linux operating systems, or even a Windows Server with pre-installed software like SQL Server. One of the advantages of Windows Azure Virtual Machines is that each machine is stored in a standard Hyper-V format – with the base hard disk as a VHD. That means you can move a Virtual Machine from on-premises to Windows Azure, and then move it back again. You can even use a simple series of PowerShell scripts to do the move, or automate it with other methods. And this leads to another very interesting option for deploying systems: you can create a server VHD, configure it with the software you want, and then run the “SYSPREP” process on it. SYSPREP is a Windows utility that essentially strips the identity from a system; when you restart that system it asks for a few details, such as what you want to call it. By doing this, you can essentially create your own gallery of systems, for testing, development servers, demo systems and more. You can learn more about how to do that here: http://msdn.microsoft.com/en-us/library/windowsazure/gg465407.aspx

    But there is a small issue you can run into that I wanted to make you aware of. Whenever you deploy a system to Windows Azure Virtual Machines, you must meet certain password complexity requirements. However, when you build the machine locally and SYSPREP it, you might not choose a strong password for the account you use to Remote Desktop to the machine. In that case, you might not be able to reach the system after you deploy it. Once again, the key here is reading through the instructions before you start. Check out the link I showed above, and this link: http://technet.microsoft.com/en-us/library/cc264456.aspx to make sure you understand what you want to deploy.

    Read the article

  • Is Oracle certified to run on VMWare?

    - by Mike Dietrich
    This question, in similar variations, gets asked during every Upgrade Workshop at least once. People would like to know if they can run an Oracle Database or Oracle Real Application Clusters or Oracle Grid Control or Oracle Fusion Middleware or ... in a VM environment with VMWare's virtualisation products. And the answer is: Yes, you can!! But ... there's some fine print you should take note of before setting up virtual environments with a different solution than XEN-based Oracle VM. Please read Note:942852.1 - VMWare Certification for Oracle Products and Note:249212.1 - Support Position for Oracle Products Running on VMWare Virtualized Environments for further details:

    Support Status for VMware Virtualized Environments
    Oracle has not certified any of its products on VMware virtualized environments. Oracle Support will assist customers running Oracle products on VMware in the following manner: Oracle will only provide support for issues that either are known to occur on the native OS, or can be demonstrated not to be a result of running on VMware. If a problem is a known Oracle issue, Oracle Support will recommend the appropriate solution on the native OS. If that solution does not work in the VMware virtualized environment, the customer will be referred to VMware for support. When the customer can demonstrate that the Oracle solution does not work when running on the native OS, Oracle will resume support, including logging a bug with Oracle Development for investigation if required. If the problem is determined not to be a known Oracle issue, we will refer the customer to VMware for support. When the customer can demonstrate that the issue occurs when running on the native OS, Oracle will resume support, including logging a bug with Oracle Development for investigation if required. NOTE: Oracle has not certified any of its products on VMware. For Oracle RAC, Oracle will only accept Service Requests as described in this note on Oracle RAC 11.2.0.2 and later releases.

    Read the article

  • Reminder: Totally Awesome and Totally Free SQL Server Training

    - by KKline
    One of the things that I enjoy about working for Quest Software is that we give back copiously to the community, from activities and offerings like SQLServerPedia to our free posters mailed anywhere in North America (and don't forget the free hi-res PDFs for the rest of the world). Don't forget that free DVDs of our virtual conferences featuring me, along with Buck Woody (blog | twitter) and Brent Ozar (blog | twitter), will be mailed anywhere in North America free of charge, now available...(read more)

    Read the article

  • 10gR2 Transportable Tablespaces Certified for EBS 11i

    - by Steven Chan
    Database migration across platforms of different "endian" (byte ordering) formats using the Cross Platform Transportable Tablespaces (XTTS) process is now certified for Oracle E-Business Suite Release 11i (11.5.10.2) with Oracle Database 10g Release 2. This process is sometimes also referred to as transportable tablespaces (TTS).

    What is the Cross-Platform Transportable Tablespace Feature?
    The Cross-Platform Transportable Tablespace feature allows users to move a user tablespace across Oracle databases. It's an efficient way to move bulk data between databases. If the source platform and the target platform are of different endianness, then an additional conversion step must be done on either the source or target platform to convert the tablespace being transported to the target format. If they are of the same endianness, then no conversion is necessary and tablespaces can be transported as if they were on the same platform.

    Moving data using transportable tablespaces can be much faster than performing either an export/import or unload/load of the same data. This is because transporting a tablespace only requires the copying of datafiles from source to the destination and then integrating the tablespace structural information. You can also use transportable tablespaces to move both table and index data, thereby avoiding the index rebuilds you would have to perform when importing or loading table data.
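    As a quick illustration of the endianness check involved, the following generic sketch (not specific to the EBS 11i certification) can be run on both the source and the target database; if the two endian formats differ, an additional conversion step (for example with RMAN) is required:

         -- Endian format of the current database's platform
         SELECT d.platform_name, tp.endian_format
           FROM v$transportable_platform tp, v$database d
          WHERE tp.platform_name = d.platform_name;

         -- All platforms supported for cross-platform transport, with their endian formats
         SELECT platform_id, platform_name, endian_format
           FROM v$transportable_platform
          ORDER BY platform_id;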

    Read the article

  • Transition from maintenance programming to design

    - by andrew wang
    What do people do to develop a design for software for a given set of requirements? I, like many people, joined a semiconductor MNC and got stuck in maintenance for quite a few years. My work was usually changing a few lines of code in Windows drivers supplied by my company, or writing a couple of small script-like C programs for validating hardware. As a result I developed the bad habit of 'programming by coincidence'. I have not developed the ability to design tools/programs from scratch. I was the only software member of the local team, and so some grunt work from the company's well-established other site came to be done by me. Now I have moved to a different company and am finding developing from scratch very difficult. How do I unlearn my bad habit and develop this ability to design software and then code it?

    Read the article

  • How to structure a project that supports multiple versions of a service?

    - by Nick Canzoneri
    I'm hoping for some tips on creating a project (ASP.NET MVC, but I guess it doesn't really matter) against multiple versions of a service (in this case, actually multiple sets of WCF services). Right now, the web app uses only some of the services, but the eventual goal would be to use the features of all of the services. The code used to implement a service feature would likely be very similar between versions in most cases (but, of course, everything varies). So, how would you structure a project like this? Separate source control branches for each different version? I'm kind of shying away from this because I don't feel that branch merging should be something we're going to be doing really often. Different project/solution files in the same branch? This could link the same shared projects easily. Or build some type of abstraction layer on top of the services, so that no matter which service is being used, it looks the same to the web application?

    Read the article

  • What should every programmer know about web development?

    - by Joel Coehoorn
    What things should a programmer implementing the technical details of a web application consider before making the site public? If Jeff Atwood can forget about HttpOnly cookies, sitemaps, and cross-site request forgeries all in the same site, what important thing could I be forgetting as well? I'm thinking about this from a web developer's perspective, such that someone else is creating the actual design and content for the site. So while usability and content may be more important than the platform, you the programmer have little say in that. What you do need to worry about is that your implementation of the platform is stable, performs well, is secure, and meets any other business goals (like not costing too much, not taking too long to build, and ranking as well with Google as the content supports). Think of this from the perspective of a developer who's done some work for intranet-type applications in a fairly trusted environment, and is about to have his first shot at putting out a potentially popular site for the entire big bad world wide web. Also, I'm looking for something more specific than just a vague "web standards" response. I mean, HTML, JavaScript, and CSS over HTTP are pretty much a given, especially when I've already specified that you're a professional web developer. So going beyond that: which standards? In what circumstances, and why? Provide a link to the standard's specification.

    Read the article

  • Interesting sessions/tips from RMOUG

    - by jean-pierre.dijcks
    One of the sessions I attended at last week's RMOUG was on Temp Tablespace Groups. I had a look because I had no experience with this, and it seemed to help with parallel processing and the allocation/usage of temp. You can read the excellent write-up at Kellyn Pedersen's blog - she did the session and all the work - here. So for all of those who may be seeing lots of waits like enq: TS - Contention when you are doing hash joins and sorts, do have a look at the above blog post. I also had the chance to listen in at Stewart Bryson's session on Restartability (he had 3 R-s), where he gave very useful tips about how to deal with your data warehouse loads. Questions like archive log mode - should I or shouldn't I - were well covered. Flashback archives were also nice to hear about. Very nice talk, very interesting. Unfortunately he hasn't blogged about it yet, so no pointers to that one. I got to see a couple of other interesting sessions, and as conferences go, got to meet some interesting Oracle folks from the region. As usual RMOUG was useful and fun. Off to the drawing boards to design next year's session!
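    For anyone who wants to try Temp Tablespace Groups, here is a minimal sketch; the tablespace names, file paths and sizes are made up for illustration:

         -- Two temporary tablespaces assigned to the same group
         CREATE TEMPORARY TABLESPACE temp1
           TEMPFILE '/u01/oradata/ORCL/temp01.dbf' SIZE 2G
           TABLESPACE GROUP temp_grp;

         CREATE TEMPORARY TABLESPACE temp2
           TEMPFILE '/u01/oradata/ORCL/temp02.dbf' SIZE 2G
           TABLESPACE GROUP temp_grp;

         -- Use the group as the database default; sessions can then spread sort/hash work
         -- across both tablespaces, which helps with the enq: TS - Contention waits mentioned above
         ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp_grp;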

    Read the article

  • Why not AJAX'ify entire websites?

    - by Anonymous -
    Is there any solid reasoning as to why sites shouldn't be developed with AJAX functionality that loads major parts of each page (assuming there are elements like the header, navigation, etc. that remain the same)? Surely it would be less resource-intensive, since the server wouldn't have to serve content that appears on every page, benefiting both the host and the end-user. Answer the question taking into consideration: the site's JavaScript behaviour degrades gracefully in every instance. For my question I'm talking about new sites where this behaviour could be implemented right from the off, so it doesn't technically cost any money - we're not returning to a finished product to implement it.

    Read the article

  • Pain of the Week/Expert's Perspective: Performance Tuning for Backups and Restores

    - by KKline
    First off - the Pain of the Week webcast series has been renamed. It's now known as The Expert's Perspective. Please join us for future webcasts and, if you're interested in speaking, drop me a note to see if we can get you on the roster! The bigger your databases get, the longer backups take. That doesn't really seem like a huge problem — until disaster strikes and you need to restore your databases as fast as possible. Join my buddy Brent Ozar (blog | twitter), a Microsoft Certified Master of...(read more)
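    As a taste of the kind of tuning the webcast covers, a common approach is to stripe the backup across several files and adjust the I/O buffer settings. The sketch below uses a made-up database name and paths, and the COMPRESSION option assumes an edition of SQL Server that supports backup compression:

         -- Stripe the backup across two drives and tune the I/O buffers
         BACKUP DATABASE BigDB
           TO DISK = 'D:\Backups\BigDB_1.bak',
              DISK = 'E:\Backups\BigDB_2.bak'
           WITH COMPRESSION,               -- smaller files, usually faster backups and restores
                BUFFERCOUNT = 64,          -- number of I/O buffers
                MAXTRANSFERSIZE = 4194304, -- 4 MB per transfer
                STATS = 5;                 -- progress messages every 5 percent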

    Read the article

  • What information must never appear in logs?

    - by MainMa
    I'm about to write the company guidelines about what must never appear in logs (the trace of an application). In fact, some developers try to include as much information as possible in the trace, making it risky to store those logs, and extremely dangerous to submit them, especially when the customer doesn't know this information is stored, because she never cared about this and never read the documentation and/or warning messages. For example, when dealing with files, some developers are tempted to trace the names of the files. If we trace everything on error, it will be easy to notice, for example, that a file name appended to a directory is too long, and that the bug in the code was forgetting to check the length of the concatenated string. That is helpful, but it is sensitive data, and must never appear in logs. In the same way: passwords, IP addresses and network information (MAC address, host name, etc.)¹, database accesses, direct input from the user, and stored business data must never appear in the trace. So what other types of information must be banished from the logs? Are there any guidelines already written which I can use? ¹ Obviously, I'm not talking about things like IIS or Apache logs. What I'm talking about is the sort of information which is collected with the only intent to debug the application itself, not to trace the activity of untrusted entities.

    Edit: Thank you for your answers and your comments. Since my question is not too precise, I'll try to answer the questions asked in the comments:

    What am I doing with the logs? The logs of the application may be stored in memory, in plain text on the hard disk of the localhost, in a database (again in plain text), or in Windows Events. In every case, the concern is that those sources may not be safe enough. For example, when a customer runs an application and this application stores logs in a plain text file in the temp directory, anybody who has physical access to the PC can read those logs. The logs of the application may also be sent over the internet. For example, if a customer has an issue with an application, we can ask her to run this application in full-trace mode and to send us the log file. Also, some applications may automatically send the crash report to us (and even if there are warnings about sensitive data, in most cases customers don't read them).

    Am I talking about specific fields? No. I'm working on general business applications only, so the only sensitive data is business data. There is nothing related to health or other fields covered by specific regulations. But thank you for mentioning that; I probably should take a look at those fields for some clues about what I can include in the guidelines.

    Isn't it easier to encrypt the data? No. It would make every application much more complicated, especially if we want to use C# diagnostics and TraceSource. It would also require managing authorizations, which is not the easiest thing to do. Finally, if we are talking about the logs submitted to us from a customer, we must be able to read the logs, but without having access to sensitive data. So technically, it's easier to never include sensitive information in logs at all and to never care about how and where those logs are stored.

    Read the article

  • Good practice about Javascript referencing

    - by AngeloBad
    I am fighting with script optimization for a web application. I have an ASP.NET web app that references jQuery in the master page, and every child page can reference other libraries or JavaScript extensions. I would like to optimize the application with YUI for .NET. The question is: should I put all the library references in the master page, compress all the JavaScript code into a single file, or create a file for every page that contains only the code useful to that page? Is there any guidance to follow? Thanks!

    Read the article

  • Best tools to build an auction website

    - by Daniel Loureiro
    Can I get your feedback on the best tools to build an auction website with the following features:
    - The site takes a commission (like 5%) on each transaction.
    - Each user can assign a rating (like 4.5 stars) to his completed transaction, and comment on the seller's profile.
    - Accepts payments in PayPal and credit card.
    I've been looking into Joomla! and JomSocial, but they haven't convinced me much so far. I have some programming experience in C, Python and Java. If no CMS tools are of use, I'd appreciate it if you could tell me the best route to take in programming to get the auction site done.

    Read the article

  • Workflow Overview & Best Practices - EMEA

    - by Annemarie Provisero
    ADVISOR WEBCAST: Workflow Overview & Best Practices - EMEA
    PRODUCT FAMILY: EBS - ATG - Workflow
    February 16, 2011 at 10:00 am CET, 02:30 pm India, 06:00 pm Japan, 08:00 pm Australia

    This 1.5-hour session is recommended for technical and functional users who are interested in getting a generic overview about the tools and utilities available to get a closer look into the Java Virtual Machine used in an E-Business Suite environment and how to tune it.

    TOPICS WILL INCLUDE:
    - Introduction of Workflow
    - Useful Utilities and Tools
    - Best Practices
    - Q&A

    A short, live demonstration (only if applicable) and question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. Click here to register for this session.

    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • Best place to request Ubuntu for a minor improvement (In Unity dash search)

    - by mac
    What is the best place to request a minor improvement in Ubuntu? My feature request is this: in the Ubuntu dash, when I search for "Upd" it gives me Update Manager and some other files. When I press Enter, the first entry is selected by default. Can we make this a slightly better experience by highlighting the first item in the search results, the one that will be selected by default if we press Enter - just like in GNOME Shell? (Screenshots: searching for "upd" in the Unity dash and in GNOME Shell.) If you notice, Update Manager is highlighted by default in GNOME Shell and appears more intuitive. Can we implement the same in Unity? Sorry for posting this on Ask Ubuntu; I just wanted to know which is the best place to discuss this. Thanks

    Read the article

  • What is the best way to restrict adult content on 12.04 LTS

    - by Stephen Myall
    I bought my kids a PC and installed 12.04 (Unity) on it. The bottom line is, I want my children to use the computer unsupervised, while I have confidence they cannot access anything inappropriate. What I have looked at: I was looking at Scrubit, which allows me to configure my wifi router, but this solution would also affect my other PC and mobile devices. This is not feasible, as I just want the solution to work on one PC. I also did some Google searches and came across Nanny (it seems to look the part). My experience of OSS is that the best solutions frequently never appear first in a Google search list, so my question is very specific. I want to leverage your knowledge and experience to understand “What is the best way to restrict adult content on 12.04 LTS”, as this is important to me. Please don't just answer this question with "try this or that" and then give me some PPA; I am looking for knowledge and experience from someone in my situation. Thanks in advance

    Read the article

  • Why are we as an industry not more technically critical of our peers? [closed]

    - by Jarrod Roberson
    For example: I still see people in 2011 writing blog posts and tutorials that promote setting the Java CLASSPATH at the OS environment level. I see people writing C and C++ tutorials dated 2009 and newer where the first line of code is void main(). These are examples; I am not looking for specific answers to the above questions, but for why the culture of accepting sub-par knowledge in the industry is so rampant. I see people posting these same types of empirically wrong suggestions as answers on www.stackoverflow.com and they get lots of up votes and practically no down votes! The ones that get lots of down votes are usually answers to a question that wasn't asked, because of a lack of reading-comprehension skills, not incorrect answers per se. Is our industry that ignorant as a whole? I can understand the internet in general being lazy, apathetic and un-informed, but our industry should be more on top of things like this and way more critical of people who are promoting bad habits and out-dated techniques and information. If we are really an engineering discipline, why aren't people held to a higher standard as they are in other engineering disciplines? I want to know why people accept bad advice and poor practices as the norm and are not more critical of their peers in the software industry.

    Read the article

  • Do most programmers cut & paste code?

    - by John MacIntyre
    I learned very early on that cutting & pasting somebody else's code takes longer in the long run than writing it yourself. In my opinion, unless you really understand it, cut & paste code will probably have issues which will be a nightmare to resolve. Don't get me wrong, I mean finding other people's code and learning from it is essential, but we don't just paste it into our app. We rewrite the concepts into our app. But I'm constantly hearing about people who cut & paste, and they talk about it like it's common practice. I also see comments by others which indicate it's common practice. So, do most programmers cut & paste code?

    Read the article

  • New Slides - and a discussion about Dictionary Statistics

    - by Mike Dietrich
    First of all, we have just uploaded a new version of the Upgrade and Migration Workshop slides with some added information, so please feel free to download them from here. The slides contain one interesting new piece of information, which led to a discussion I've had in the past days with a very large customer regarding their upgrades - and internally on the mailing list targeting an EBS database upgrade from Oracle 10.2 to Oracle 11.2.

    Why are we creating dictionary statistics during upgrade? I believe this forced dictionary statistics creation got introduced with the desupport of the Rule Based Optimizer in Oracle 10g. The goal: as the RBO is not supported anymore, we have to make sure that the data dictionary has fresh, non-stale statistics. Actually, in Oracle 9i that could lead to strange behaviour in some databases - so in Oracle 9i this was strongly discouraged. The upgrade scripts got hardcoded to create these stats. But during tests we had the following findings:

    It's important to create dictionary statistics the night before the upgrade. Not two weeks before, not 60 minutes before your downtime begins, but very close to the upgrade. From Oracle 10g onwards you'd just say:

        SQL> execute DBMS_STATS.GATHER_DICTIONARY_STATS;

    This is important to make sure you have fresh dictionary statistics during the upgrade, for performance reasons. Tests have shown that running an upgrade without valid dictionary statistics might slow down the whole upgrade by a factor of 2x-3x. And it would also be a great idea to create fresh dictionary statistics again after the upgrade if you did suppress the stats creation during the upgrade process.

    Suppress? Yes, you could set this underscore parameter in the init.ora to suppress the forced dictionary statistics collection during an upgrade:

        _optim_dict_stats_at_db_cr_upg=FALSE

    We strongly believe that (a) people use the default statistics creation process, which will create dictionary statistics by default, and (b) people create fresh stats on the dictionary before the upgrade. Therefore we consider it safe, once you have followed our advice, to use the underscore parameter during the upgrade. And we've taken out that forced statistics collection during upgrade in the next release of the database.

    Please note: if you are using the DBUA for the upgrade, it will remove underscore parameters for the upgrade run to improve performance - which is generally a good idea. So you'll have to start the DBUA with this call:

        $ dbua -initParam "_optim_dict_stats_at_db_cr_upg"=FALSE

    -Mike

    Read the article

  • Backup those keys, citizen

    - by BuckWoody
    Periodically I back up the keys within my servers and databases, and when I do, I blog a reminder here. This should be part of your standard backup rotation – the keys should be backed up often enough to have at hand and again when they change. The first key you need to back up is the Service Master Key, which each Instance already has built-in. You do that with the BACKUP SERVICE MASTER KEY command, which you can read more about here. The second set of keys are the Database Master Keys, stored per database, if you’ve created one. You can back those up with the BACKUP MASTER KEY command, which you can read more about here. Finally, you can use the keys to create certificates and other keys – those should also be backed up. Read more about those here. Anyway, the important part here is the backup. Make sure you keep those keys safe!
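    For reference, here is a minimal sketch of the three backups mentioned above; the file paths, passwords, database name and certificate name are placeholders:

         -- 1. Service Master Key (one per instance)
         BACKUP SERVICE MASTER KEY TO FILE = 'D:\KeyBackups\ServiceMasterKey.smk'
             ENCRYPTION BY PASSWORD = 'UseAStrongPasswordHere!';

         -- 2. Database Master Key (per database, if one was created)
         USE MyDatabase;
         BACKUP MASTER KEY TO FILE = 'D:\KeyBackups\MyDatabase_MasterKey.dmk'
             ENCRYPTION BY PASSWORD = 'AnotherStrongPassword!';

         -- 3. A certificate created from those keys, together with its private key
         BACKUP CERTIFICATE MyCert TO FILE = 'D:\KeyBackups\MyCert.cer'
             WITH PRIVATE KEY (FILE = 'D:\KeyBackups\MyCert.pvk',
                               ENCRYPTION BY PASSWORD = 'YetAnotherStrongPassword!');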

    Read the article

  • Is '@' Error Suppression a Valid Technique for Testing for an Optional Array Key?

    - by MikeSchinkel
    Rarst and I were debating offline about the use of the '@' error suppression operator in PHP, specifically for testing for the existence of "optional" array keys, i.e. array keys that are used as a switch, where their absence from the array is functionally equivalent to the array having the key with a value of false. Here is pseudo-code for this scenario:

        function do_something( $args = array() ) {
            if ( @$args['switch'] ) {
                // Do something with this switch
            }
            // continue on...
        }

    vs. this approach:

        function do_something( $args = array() ) {
            if ( ! empty( $args['switch'] ) && $args['switch'] ) {
                // Do something with this switch
            }
            // continue on...
        }

    Of course in most use-cases, suppressing errors would not be A Good Thing(tm). However, in this use-case where an array is passed with an optional element, it seems to me that it is actually a very good technique, but I could be wrong and would like to hear others' opinions on the subject before I make up my mind. I do know that there are alleged performance hits for the former approach, but I'd like to know how they compare with the alternative and whether the performance hits really matter in real-world scenarios?

    P.S. I decided to post this because, after debating this offline with Rarst, he asked a more general question here on Programmers but didn't actually give a detailed example of the specific use-case we were debating. And since I'm pretty sure he'll want to use the out-of-context answers on that other question as justification for why the above is "bad", I decided I needed to get opinions on this specific use-case.

    Read the article
