Search Results

Search found 11703 results on 469 pages for 'dev to production'.

Page 342/469 | < Previous Page | 338 339 340 341 342 343 344 345 346 347 348 349  | Next Page >

  • Data Source Use of Oracle Edition Based Redefinition (EBR)

    - by Steve Felts
    Edition-based redefinition is a new feature in the 11gR2 release of the Oracle database. It enables you to upgrade the database component of an application while it is in use, thereby minimizing or eliminating downtime. It works by allowing a pre-upgrade and a post-upgrade view of the data to exist at the same time, providing a hot upgrade capability. You can then specify which view you want for a particular session. See the Oracle Database Advanced Application Developer's Guide for further information. There is also a good white paper on Edition-Based Redefinition. Using this feature of the Oracle database does not require any new WebLogic Server functionality. It is set for each connection in the pool automatically by simply specifying SQL ALTER SESSION SET EDITION = edition_name in the Init SQL parameter in the data source configuration. This can be configured either via the console or via WLST (setInitSQL on the JDBCConnectionPoolParams). This SQL statement is executed for each newly created physical database connection. Note that we are assuming that a data source references only one edition of the database. To make use of this feature, you would have an earlier version of the application with a data source that references the earlier EDITION and a later version of the application with a data source that references the later EDITION. Once you start talking about multiple versions of a WLS application, you should be using the WLS "side-by-side" or "versioned" deployment feature. See Developing Applications for Production Redeployment for more information. By combining Oracle database EBR and WLS versioned deployment, the application can be failed over with no downtime, making the combination of features more powerful than either independently. There is a catch: you need to be running with a versioned database and a versioned application initially so that you can then switch versions. The recommended way to version a WLS application is to simply add the "Weblogic-Application-Version" property in the MANIFEST.MF file (you can also specify it at deployment time). The recommended way to configure the data source is to use a packaged data source descriptor that's stored in the ear or war so that everything is self-contained. There are some restrictions. You can't use a packaged data source with Logging Last Resource (LLR) - you need to use a system resource. You can't use an application-scoped packaged data source with EmulateTwoPhaseCommit for the global-transactions-protocol with a versioned application - use a global scope. See Configuring JDBC Application Modules for Deployment for more details. There's one known problem - it doesn't work correctly with an XA data source (a patch is available with bug 14075837).
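    For a concrete picture, here is a minimal SQL sketch of the two pieces involved: the edition itself and the statement that the Init SQL setting runs on each new physical connection. The edition name app_v2 is hypothetical; see the Advanced Application Developer's Guide for the real workflow.

        -- Hypothetical post-upgrade edition (requires the CREATE ANY EDITION
        -- privilege; ora$base is the default root edition).
        CREATE EDITION app_v2 AS CHILD OF ora$base;

        -- What WebLogic executes on each new physical connection when Init SQL
        -- is set to: SQL ALTER SESSION SET EDITION = app_v2
        ALTER SESSION SET EDITION = app_v2;

        -- Sanity check: which edition is this session actually using?
        SELECT SYS_CONTEXT('USERENV', 'CURRENT_EDITION_NAME') AS current_edition
        FROM dual;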

    Read the article

  • Upgrades from Beta or CTP SQL Server Software is NOT Supported

    - by BuckWoody
    As of this writing, SQL Server 2008 R2 has been released, and just like every release, I get e-mails and calls from folks with this question: “Can I upgrade from Customer Technical Preview (CTP) x or Beta #x or Release Candidate (RC) to the “Released to Manufacturing” (RTM) version?” No. Right up until the last minute, things are changing in the code – and you want that to happen. Our internal testing runs right up until the second we lock down for release, and we watch the CTP/RC/Beta reports to make sure there are no show-stoppers, and fix what we find. And it’s not just “big” changes you need to worry about – a simple change in one line of code can have a massive effect. I know, I know – you’ve possibly upgraded an RC or CTP to the RTM version and it worked “just fine”. But hear this tale: I’ve dealt with someone who faced this exact situation in SQL Server 2008. They upgraded (which is clearly prohibited in the documentation) from a CTP to the RTM version over a year ago. Everything was working fine. But then… one day they had an issue. Couldn’t fix it themselves, we took a look, days went by, and we finally had to call in the big guns for support. Turns out, the upgrade was the problem. So we had to come up with some elaborate schemes to get the system migrated over while they were in production. This was painful for everyone involved. So the answer is still no. Just don’t do it. There is one caveat to this story – if you are a “TAP” customer (you’ll know if you are), we help you move from the CTP products to RTM, but that’s a special case that we track carefully and send along special instructions and tools to help you along. That level of effort isn’t possible on a large scale, so it’s not just a magic tool that we run to upgrade from CTP to RTM. So again, unless you’re a TAP customer, it’s a no-no.
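    If you are not sure whether an instance is pre-release, one quick check is to query its server properties; this is a hedged sketch, and the exact ProductLevel strings vary by build:

        -- ProductLevel returns 'RTM', a service-pack level such as 'SP1',
        -- or a pre-release designation such as 'CTP6', which makes it a
        -- quick way to spot a beta/CTP instance before building on it.
        SELECT SERVERPROPERTY('ProductVersion') AS product_version,
               SERVERPROPERTY('ProductLevel')   AS product_level,
               SERVERPROPERTY('Edition')        AS edition;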

    Read the article

  • TechEd 2010 Day Three: The Database Designer (Isn't)

    - by BuckWoody
    Yesterday at TechEd 2010 here in New Orleans I worked the front booth, answering general SQL Server questions for the masses. I was actually a little surprised to find most of the questions I got were from folks that wanted to know more about StreamInsight and Master Data Services. In past conferences I've been asked a lot of "free consulting" questions about problems folks have had with older products. I don't mind that a bit - in fact, I'm always happy to help in any way I can. But this time people are really interested in the new features in the product, and I like that they are thinking ahead, not just having to solve problems in production. My presentation was on "Database Design in an Hour". We had the usual fun, and Sideshow Bob made an appearance - I kid you not. The guy in the back of the room looked just like Sideshow Bob, so I quickly held a "best hair" contest, and he won. During the presentation, I explained the tools you can use to design databases. I also explained that the "Database Designer" tool in SQL Server Management Studio (SSMS) isn't truly a designer - it uses non-standard notation, doesn't have a metadata dictionary, and worst of all, it works at the physical level. In other words, whatever you do in SSMS will automatically change the field/table/relationship structures in the database. We fixed this in SSMS 2008 and higher by adding an option to block that, but it still isn't a good design tool. To be fair, no one I know of at Microsoft recommends it as one - but I was shocked to hear so many developers in the room defending it as a good tool. I think the main issue for someone who doesn't have to work with relational systems a great deal is that it can be difficult to figure out foreign keys. The syntax makes them look "backwards", so it's just easier to grab a field and place it on the table you want to point to. There are options. You can download a couple of free tools (CA has a community edition of ERwin, Quest has one, and Embarcadero also has one), and if you design more than one or two databases a year, it may be worth buying a true design tool. For years I used Visio, but we changed it so that it doesn't forward-engineer (create the DDL) any more, so it isn't a true design tool either. So investigate those free and not-so-free tools. You'll find they help you in your job - but stay away from the Database Designer in SSMS. Or I'll send Sideshow Bob over there to straighten you out.
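    To see why the syntax reads "backwards", here is a minimal sketch with two hypothetical tables: the foreign key is declared on the referencing (child) table, even though conceptually you think of the parent as the one being pointed to:

        CREATE TABLE Customer (
            CustomerID int NOT NULL PRIMARY KEY
        );

        -- The constraint lives on CustomerOrder, the child table, and
        -- "points back" at Customer, the parent.
        CREATE TABLE CustomerOrder (
            OrderID    int NOT NULL PRIMARY KEY,
            CustomerID int NOT NULL,
            CONSTRAINT FK_CustomerOrder_Customer
                FOREIGN KEY (CustomerID) REFERENCES Customer (CustomerID)
        );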

    Read the article

  • E-Business Maintenance Wizard

    - by cwarticki
    Seriously folks, you'd be amazed by the power and functionality of this tool. If you're an EBus customer, you must use the Maintenance Wizard. I know customers that have logged 2000+ SRs doing EBus upgrades the hard way, and others that have used the Maintenance Wizard and performed production upgrades on 7 global instances with only a handful of SRs. You decide which is better. Oh, btw... it's part of your Premier Support investment. No additional cost necessary. -Chris Warticki
    The Maintenance Wizard is an E-Business Suite upgrade tool that can guide you through the code line upgrade process from 11.5.10.2 to 12.1.3 with an 11gR2 database. Additionally, it includes maintenance features for most releases of E-Business Suite applications. The Tool:
    * Presents step-by-step upgrade and maintenance processes
    * Enables validation of each step, tracks the completion of the steps, and maintains a log and status
    * Is a multi-user tool that enables the System Administrator to give different users assignments based on any combination of category, product family or task
    * Automatically installs many required patches
    * Provides project management utilities to record the time taken for each task, completion status and project reporting
    For More Information:
    * Review Note 215527.1 for additional information on the Maintenance Wizard
    * See Note 430732.1 to download the new Patch
    Sincerely, Oracle Proactive Support Center

    Read the article

  • ClearTrace Performance on 170GB of Trace Files

    - by Bill Graziano
    I’ve always worked to make ClearTrace perform well.  That’s probably because I spend so much time watching it work.  I’m often going through two or three gigabytes of trace files but I rarely get the chance to run it on a really large set of files. One of my clients wanted to run a full trace for a week and then analyze the results.  At the end of that week we had 847 200MB trace files for a total of nearly 170GB. I regularly use 200MB trace files when I monitor production systems.  I usually get around 300,000 statements in a file that size if it’s mostly stored procedures.  So those 847 trace files contained roughly 250 million statements.  (That’s 730 bytes per statement if you’re keeping track.  Newer trace files have some compression in them but I’m not exactly sure what they’re doing.)  On a system running 1,000 statements per second I get a new file every five minutes or so. It took 27 hours to process these files on an older development box.  That works out to 1.77MB/second.  That means ClearTrace processed about 2,654 statements per second. You can query the data while you’re loading it but I’ve found it works better to use a second instance of ClearTrace to do this.  I’m not sure why yet but I think there’s still some dependency between the two processes. ClearTrace is almost always CPU bound.  It’s really just a huge, ugly collection of regular expressions.  It only writes a summary to its database at the end of each trace file so that usually isn’t a bottleneck.  At the end of this process, the executable was using roughly 435MB of RAM.  Certainly more than when it started but I think that’s acceptable. The database where all this is stored started out at 100MB.  After processing 170GB of trace files the database had grown to 203MB.  The space savings are due to the “datawarehouse-ish” design and only storing a summary of each trace file. You can download ClearTrace for SQL Server 2008 or test out the beta version for SQL Server 2012.  Happy Tuning!
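    If you want to spot-check a trace file server-side rather than through a parser, SQL Server can read one directly; a rough sketch (the path is hypothetical, and loading this way is generally much slower than a purpose-built parser like ClearTrace):

        -- DEFAULT tells the function to read any rollover files in the set.
        SELECT TOP (100) TextData, Duration, CPU, Reads, Writes, StartTime
        FROM sys.fn_trace_gettable(N'C:\traces\mytrace.trc', DEFAULT)
        ORDER BY Duration DESC;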

    Read the article

  • Why are MVC & TDD not employed more in game architecture?

    - by secoif
    I will preface this by saying I haven't looked at a huge amount of game source, nor built much in the way of games. But coming from trying to employ 'enterprise' coding practices in web apps, looking at game source code seriously hurts my head: "What is this view logic doing in with business logic? This needs refactoring... so does this, refactor, refactorrr." This worries me as I'm about to start a game project, and I'm not sure whether trying to MVC/TDD the dev process is going to hinder us or help us, as I don't see many game examples that use this, or much push for better architectural practices in the community. The following is an extract from a great article on prototyping games, though to me it seemed exactly the attitude many game devs use when writing production game code: Mistake #4: Building a system, not a game ...if you ever find yourself working on something that isn't directly moving you forward, stop right there. As programmers, we have a tendency to try to generalize our code, and make it elegant and be able to handle every situation. We find that an itch terribly hard not to scratch, but we need to learn how. It took me many years to realize that it's not about the code, it's about the game you ship in the end. Don't write an elegant game component system, skip the editor completely and hardwire the state in code, avoid the data-driven, self-parsing, XML craziness, and just code the damned thing. ... Just get stuff on the screen as quickly as you can. And don't ever, ever, use the argument "if we take some extra time and do this the right way, we can reuse it in the game". EVER. Is it because games are (mostly) visually oriented, so it makes sense that the code will be weighted heavily in the view, and thus any benefit from moving stuff out to models/controllers is fairly minimal, so why bother? I've heard the argument that MVC introduces a performance overhead, but this seems to me to be a premature optimisation; there'd be more important performance issues to tackle before you worry about MVC overheads (e.g. render pipeline, AI algorithms, data structure traversal, etc.). Same thing regarding TDD. It's not often I see games employing test cases, but perhaps this is due to the design issues above (mixed view/business logic) and the fact that it's difficult to test visual components, or components that rely on probabilistic results (e.g. operating within physics simulations). Perhaps I'm just looking at the wrong source code, but why do we not see more of these 'enterprise' practices employed in game design? Are games really so different in their requirements, or is it a people/culture issue (i.e. game devs come from a different background and thus have different coding habits)?

    Read the article

  • Developing a Support Plan for Cloud Applications

    - by BuckWoody
    Last week I blogged about developing a High-Availability plan. The specifics of a given plan aren't as simple as "Step 1, then Step 2", because in a hybrid environment (which most of us have) the situation changes the requirements. There are those that look for simple "template" solutions, but unless you settle on a single vendor and a single way of doing things, that's not really viable. The same holds true for support. As I've mentioned before, I'm not fond of the term "cloud", and would rather use the term "Distributed Computing". That being said, more people understand the former, so I'll just use that for now. What I mean by Distributed Computing is leveraging another system or setup to perform all or some of a computing function. If this definition holds true, then you're essentially creating a partnership with a vendor to run some of your IT - whether that be IaaS, PaaS or SaaS, or more often, a mix. In your on-premises systems, you're the first and sometimes only line of support. That changes when you bring in a Cloud vendor. For Windows Azure, we have plans for support that you can pay for if you like. http://www.windowsazure.com/en-us/support/plans/ You're not off the hook entirely, however. You still need to create a plan to support your users in their applications, especially for the parts you control. The last thing they want to hear is "That's vendor X's problem - you'll have to call them." I find that this is often the last thing the architects think about in a solution. It's fine to put off the support question prior to deployment, but I would hold off on calling it "production" until you have that plan in place. There are lots of examples, like this one: http://www.va-interactive.com/inbusiness/editorial/sales/ibt/customer.html some of which are technology-specific. Once again, this is an "it depends" kind of approach. While it would be nice if there was just something in a box we could buy, it just doesn't work that way in a hybrid system. You have to know your options and apply them appropriately.

    Read the article

  • WebLogic Partner Community Newsletter October 2012

    - by JuergenKress
    Dear WebLogic partner community member, Oracle OpenWorld and JavaOne are just over, with lots of product updates and highlights. In this newsletter you will find the key information on many new products and launches. Make sure you download the presentations from our WebLogic Community Workspace (WebLogic Community membership required), to train yourself and for your next customer meeting. Thanks for all the tweets #WebLogicCommunity, the pictures at our Facebook page and the nice blog posts from Guido & Lucas & Jan. JavaOne was a super success - JavaOne 2012: Strategy and Technical Keynote - Java 2.5 years after the acquisition - IDC report - make the future Java! If you want to become a Java expert, make sure you attend one of our WebLogic 12c Bootcamps or our first ExaLogic Hackers Night - November 19th, Nürnberg, Germany. All developers can use WebLogic free of charge! For developers, there is lots of ADF news on Oracle ADF Essentials & ADF training material now on the iPad by Grant Ronald & GlassFish Extension for Oracle JDeveloper & Installing, Configuring, and Testing WebLogic Server 12c Developer Zip Distribution in NetBeans. If you want to become a certified WebLogic company, WebLogic Server 12c Specialization is now available for you. You just need to go to the Knowledge Zone section, select the "Specialization" tab and click on "Apply Now". Now available: WebLogic Server 12c Implementation Specialist Boot Camp LVT. Now in production: Oracle WebLogic Server 12c Implementation Specialist certification (1Z0-599). In our specialization benefit series we highlight this month the opportunity to promote your WebLogic services via Google ads. Torsten Winterberg, OFM ACE Director, published Mobile Web Applications – A guide for professional development. Please feel free to let us know if you publish a book or article! Hope to see you at the Middleware Day at UK Oracle User Group Conference 2012 in Birmingham. Jürgen Kress, Oracle WebLogic Partner Adoption EMEA. To read the newsletter please visit http://tinyurl.com/WebLogicnewsOctober2012 (OPN account required). To become a member of the WebLogic Partner Community please register at http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • SSMS Tools Pack 3.0 is out. Full SSMS 2014 support and improved features.

    - by Mladen Prajdic
    With version 3.0, SSMS 2014 is fully supported. Since this is a new major version you'll eventually need a new license; please check the EULA to see when. As a thank-you for your patience with this release, everyone who bought the SSMS Tools Pack after April 1st, the release date of SQL Server 2014, will receive a free upgrade. You won't have to do anything for this to take effect. The first thing you'll notice is that the UI has been completely changed. It's more in line with SSMS and looks less web-like. The core has also been updated and rewritten in some places to be better suited for future features. Major improvements for this release are: Window Connection Coloring. Something a lot of people have asked me over the last 2 years is if there's a way to color the tab of the window itself. I'm very glad to say that now there is. In SSMS 2012 and higher the actual query window tab is also colored at the top border with the same color as the already existing strip, making it much easier to see which server your query window is connected to even when the window is not focused. To make it even better, you can now also specify the desired color based on the database name and not just the server name. This makes it useful for production environments where you need to be careful about which database you run your queries in. Format SQL. The Format SQL core was rewritten so it'll be easier to improve in future versions. A new improvement is the ability to terminate SQL statements with semicolons. This is available only in SSMS 2012 and up. Execution Plan Analyzer. A big request was to implement the Problems and Solutions tooltip as a window that you can copy the text from. This is now available. You can move the window around and copy text from it. It's a small improvement but better stuff will come. SQL History. Current Window History has been improved with faster search and now also shows the color of the server/database it was run against. This is very helpful if you change your connection in the same query window, making it clear which server/database you ran the query on. The option to force-save the history has been added; this is a menu item that flushes the execution and tab content history save buffers to disk. SQL Snippets. Added an option to generate a snippet from selected SQL text via the right-click menu. Run script on multiple databases. Configurable database groups that you can save and reuse were added. You can create groups of preselected databases to choose from for each server. This makes repetitive tasks much easier. New small-team licensing option. A lot of requests came in for a 1-computer, unlimited-VMs option, so now it's here. Hope it serves you well.

    Read the article

  • Normal Redundancy (Double Mirroring) Option Available

    - by TammyBednar
    The Oracle Database Appliance 2.4 Patch was released last week and provides an option for ASM normal redundancy (double mirroring) during the initial deployment of the Database Appliance. The default deployment of the Oracle Database Appliance is high redundancy for the +DATA and +RECO disk groups. While there is 12TB of raw shared storage available, the Database Backup Location and Disk Group Redundancy govern how much usable storage is presented after the initial deployment is completed. The Database Backup Location options are Local or External. When the Local Backup Option is selected, 60% of the available shared storage will be allocated for the Fast Recovery Area, which contains database backups and archive logs. The External Backup Option will allocate 20% of the available shared storage to the Fast Recovery Area. So, let's look at an example of High Redundancy and External Backups.
    Disk Group Redundancy - High --> Triple mirroring provides ~4TB of available storage
    Database Backup Location - External --> 20% of available shared storage allocated to +RECO
    +DATA = 3.2TB of usable storage, +RECO = 0.8TB of usable storage
    What about Normal Redundancy with External Backups?
    Disk Group Redundancy - Normal --> Double mirroring provides ~6TB of available storage
    Database Backup Location - External --> 20% of available shared storage allocated to +RECO
    +DATA = 4.8TB of usable storage, +RECO = 1.2TB of usable storage
    As a best practice, we would recommend using Normal Redundancy for your test and/or development Oracle Database Appliances and High Redundancy for production.

    Read the article

  • How are software projects 'typically' managed/deployed

    - by rguilbault
    My company is evaluating adopting off-the-shelf ALM products to aid in our development lifecycle; we currently use our own homegrown solutions to manage requirements gathering, specification documentation, testing, etc. One of the issues I am having is that we have what we call a pipeline, which consists of particular stops: [Source] - [QC] - [Production]. At the first stop, the developer works out a solution to some requested change and performs individual testing. When that process is complete (and peer review has been performed), our ALM system physically moves the affected programs from the [Source] runtime environment to the [QC] runtime environment. You can think of this as analogous to moving some web pages from the 'test' server to the 'live' server, where QC personnel can bang on the system and complain that the developer has it all wrong ;-) Once QC signs off that the changes are working, the system again moves the code along to the next stage, where additional testing is performed, etc. I have been searching the internet for a few days trying to find how the process is accomplished anywhere else -- I have read a bit about builds, automated testing, various ALM products, etc., but nowhere does any of this state how builds interact with initial change requests, what the triggers are, how dependencies are managed, how the various forms of testing are accommodated (e.g. unit testing, integration testing, regression testing), etc. Can anyone point me to any resources or attempt to explain (generically) how a change could/should be tracked and moved through the development lifecycle? I'd be very appreciative. To keep things consistent, let's say that we have a project called Calculator, to which we want to add support for the basic trigonometric functions: sine, cosine and tangent. I'm open to reorganizing the company however we need to in order to accomplish due diligence testing, and we can suppose that any tools are available for use (if that helps to illustrate the process). To start things off, I think I understand this much:
    * we document the requirements, e.g.: support sine, cosine and tangent functions
    * we create some type of change request/work order to assign to programming
    * coding takes place, commits are made to version control
    * peer review commences
    * programmer marks the work order as completed?
    ... now what? How does QC do their thing? Would they perform testing before closing the 'work order'?

    Read the article

  • Oracle Data Protection: How Do You Measure Up? - Part 1

    - by tichien
    This is the first installment in a blog series, which examines the results of a recent database protection survey conducted by Database Trends and Applications (DBTA) Magazine. All Oracle IT professionals know that a sound, well-tested backup and recovery strategy plays a foundational role in protecting their Oracle database investments, which in many cases represent the lifeblood of business operations. But just how common are the data protection strategies used and the challenges faced across various enterprises? In January 2014, Database Trends and Applications Magazine (DBTA), in partnership with Oracle, released the results of its “Oracle Database Management and Data Protection Survey”. Two hundred Oracle IT professionals were interviewed on various aspects of their database backup and recovery strategies, in order to identify the top organizational and operational challenges for protecting Oracle assets. Here are some of the key findings from the survey:
    * The majority of respondents manage backups for tens to hundreds of databases, representing total data volume of 5 to 50TB (14% manage 50 to 200 TB and some up to 5 PB or more).
    * About half of the respondents (48%) use HA technologies such as RAC, Data Guard, or storage mirroring; however, these technologies are deployed on only 25% of their databases (or less). This indicates that backups are still the predominant method for database protection among enterprises.
    * Weekly full and daily incremental backups to disk were the most popular strategy, used by 27% of respondents, followed by daily full backups, which are used by 17%.
    * Interestingly, over half of the respondents reported that 10% or less of their databases undergo regular backup testing.
    A few key backup and recovery challenges resonated across many of the respondents:
    * Poor performance and impact on productivity (see Figure 1): 38% of respondents indicated that backups are too slow, resulting in prolonged backup windows. In a similar vein, 23% complained that backups degrade the performance of production systems.
    * Lack of continuous protection (see Figure 2): 35% revealed that less than 5% of Oracle data is protected in real-time.
    * Management complexity: 25% stated that recovery operations are too complex (see Figure 1). 31% reported that backups need constant management (see Figure 1). 45% changed their backup tools as a result of growing data volumes, while 29% changed tools due to the complexity of the tools themselves.
    Figure 1: Current Challenges with Database Backup and Recovery
    Figure 2: Percentage of Organization’s Data Backed Up in Real-Time or Near Real-Time
    In future blogs, we will discuss each of these challenges in more detail and bring insight into how the backup technology industry has attempted to resolve them.
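    Given how few databases undergo regular backup testing, even a cheap status check is a useful first step. One hedged example of reviewing recent RMAN job outcomes from SQL*Plus (column meanings as documented for the v$ view):

        -- Recent RMAN backup jobs, newest first.
        SELECT start_time,
               end_time,
               status,      -- e.g. COMPLETED, FAILED, COMPLETED WITH WARNINGS
               input_type   -- e.g. DB FULL, DB INCR, ARCHIVELOG
        FROM   v$rman_backup_job_details
        ORDER  BY start_time DESC;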

    Read the article

  • How to use tscon on Windows7?

    - by Radek
    I need to run overnight automation testing using RFT and IE on a Windows 7 virtual machine. I found that restarting the Windows box before the testing starts helps. I am moving the production environment from Windows XP to Windows 7. RFT used to complain when running RFT scripts that CRFCN0557E: Activation failed when running under a Terminal Services environment. This may be caused by using a minimized terminal window - try playing back without minimizing the terminal window (it does not need to be full-screen). Running tscon.exe 0 /dest:console prior to starting any RFT script fixes the error on Windows XP. But not on Windows 7. I did some research and tried for hours to fix it, but nothing helped. There is no screen saver turned on on Windows 7. I tried to run both of the following, but neither helped: tscon.exe 0 /dest:console and tscon.exe 1 /dest:console. On Windows 7, tscon returns {ErrorPrintf(): LoadString failed, Error 15105, (0x00003B01)} Error [15105]: The resource loader cache doesn't have loaded MUI entry. Error [0]: The operation completed successfully. On Windows XP, tscon returns Could not connect sessionID 0 to sessionname console, Error code 7045 Error [7045]: The requested session access is denied. I just double-checked that running tscon.exe 0 /dest:console on Windows XP solves the issue, so I cannot understand the output of the tscon command. Any idea how I can run RFT scripts after I restart the Windows box automatically? Preferably without involving any other computer. I was even thinking of using the old Windows XP box to make a remote desktop session to keep RFT happy. I hope there is a better solution than that.

    Read the article

  • Cannot reactivate RAID-5 volume: The size of the plex member is invalid

    - by Ian Boyd
    We had a 3-drive Windows Server 2008 R2 RAID-5 fail (operating in redundancy mode): WDC 1 TB, WDC 1 TB, WDC 1 TB. We removed the failed hard drive and put a WDC 1 TB drive (that we had standing by) into the machine. When launched, Disk Manager asked permission to "initialize" the disk as either: Master Boot Record (MBR) or GUID Partition Table (GPT). We initialized the disk as GPT, converted it to dynamic, and tried to use the Repair Volume command - except it was greyed out (which is a terrifying thing on a failed production server hosting 3 virtual servers). I tried from the diskpart command line tool. First we look for our RAID-5 volume that is in Failed Rd mode:
    DISKPART> list volume
      Volume ###  Ltr  Label        Fs    Type       Size     Status     Info
      ----------  ---  -----------  ----  ---------  -------  ---------  ------
      Volume 0    E    VMs (Raid5)  NTFS  RAID-5     1863 GB  Failed Rd
      Volume 1    D                       DVD-ROM    0 B      No Media
      Volume 2         System Rese  NTFS  Partition  100 MB   Healthy    System
      Volume 3    C                 NTFS  Partition  1862 GB  Healthy    Boot
    There, Volume 0. Make that our active context:
    DISKPART> select volume 0
    Volume 0 is the selected volume.
    Now we need to find the disk we will be repairing the volume with:
    DISKPART> list disk
      Disk ###  Status   Size     Free     Dyn  Gpt
      --------  -------  -------  -------  ---  ---
      Disk 0    Online   931 GB   0 B      *
      Disk 1    Online   931 GB   931 GB   *
      Disk 2    Online   1863 GB  0 B
      Disk 3    Online   931 GB   0 B      *
      Disk M0   Missing  0 B      0 B      *
    The disk with 931 GB free, Disk 1. Now we just need to repair the volume:
    DISKPART> repair disk=1
    Virtual Disk Service error: The size of the plex member is invalid.

    Read the article

  • Configure IIS7.5 to allow calls to asmx web services.

    - by goodeye
    Hi, I migrated a site from IIS6 to Windows Server 2008 R2 IIS7.5. It has an asmx web service, which is working fine locally but returns this 500 error when called from another machine: Request format is unrecognized for URL unexpectedly ending in /myMethodName. The solution in previous versions is to add this to the web.config for the protocols needed (typically omitting HttpGet for production):
    <system.web>
      <webServices>
        <protocols>
          <add name="HttpGet" />
          <add name="HttpPost" />
          <add name="HttpSoap" />
        </protocols>
      </webServices>
    </system.web>
    This is posted everywhere, including http://stackoverflow.com/questions/657313/request-format-is-unrecognized-for-url-unexpectedly-ending-in For IIS7.5, this throws a configuration error; I understand this section doesn't belong, but I tried it anyway. I also boiled down the asmx call to a simple hello world. I tested with POST also, just to eliminate any issues with GET. What is the equivalent for IIS7.5? Either the web.config format or the UI button to push would be really helpful. Thanks, Bob

    Read the article

  • Windows 2008, IIS7 and virtual directories

    - by Thomas
    I created a virtual directory called test (C:\test) under the Default Web Site and added two simple test files (one html and one aspx). I thought I had to add the IUSR and NetworkService (for application pools) accounts to C:\test and grant them appropriate rights in order for IIS7 to serve the content. It appears that is not the case at all, as I can view any files in the virtual directory (even if I convert it to an application) without changing or adding any security settings on the C:\test folder. I just installed IIS7 with ASP.NET on Windows 2008 without changing any settings besides adding the virtual directory. Am I missing something? Even my book on IIS7 states that the user accounts should be added and appropriate rights granted. I added the following to answer the comments: I am referencing the file using a public IP, http://xxx.xxx.xxx.xxx/test/one.html, and neither the IP nor localhost is in my trusted sites. I am not signed in on the server at all, as I am accessing the content from my home machine and the content is on my production server. The following users/groups have access to C:\test on the server (Creator Owner, System, Administrators, Users), and the app pool is running under the default NetworkService account. I basically installed Win2008, added the IIS role with ASP.NET, opened IIS7, added a virtual directory, and copied two files to the directory to test. It works, which is great, but I want to understand why it works. How is it that IIS7 can access files in the C:\test folder without any permissions set?

    Read the article

  • SQL Server: How to shrink FileStream files?

    - by J4N
    For a project I'm using SQL Server 2008 R2. One table has a filestream column. I've run some load tests, and now the database has ~20GB used. I've emptied the tables, except several configuration tables, but my database was still using a lot of space. So I used Task -> Shrink -> Database / Files, but my database is still using something like 16GB. I found that the filestream file is still using a lot of space. The problem is that I need to back up this database to export it to the final production server, and even if I indicate to compress the backup, I get a file of more than 3.5GB. Not convenient to store and upload. And I'm planning much bigger tests, so I want to know how to shrink that empty space. When I try, I get this exception: The properties SIZE, MAXSIZE, or FILEGROWTH cannot be specified for the FILESTREAM data file 'FileStreamFile'. (Microsoft SQL Server, Error: 5509) So what should I do? I found several topics with this error but they were about removing the filestream column.
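    For context, FILESTREAM containers are not shrunk the way ordinary data files are: deleted blobs are removed by a background garbage collector once the transaction log no longer references them. A hedged sketch of the usual nudge, with a hypothetical database name and paths; under the FULL recovery model it often takes more than one pass:

        USE MyDb;
        -- 1. Back up the log so the truncated portion no longer pins the
        --    deleted FILESTREAM blobs.
        BACKUP LOG MyDb TO DISK = N'E:\backup\MyDb_log1.trn';
        -- 2. A checkpoint gives the FILESTREAM garbage collector a chance to run.
        CHECKPOINT;
        -- 3. A second log backup plus checkpoint is often needed before the
        --    files under the container are actually deleted from disk.
        BACKUP LOG MyDb TO DISK = N'E:\backup\MyDb_log2.trn';
        CHECKPOINT;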

    Read the article

  • My SMTP's outgoing mail gets bounced

    - by BloodPhilia
    I've got an ISPconfig 3 production server set up, running Ubuntu Server 9.04. My e-mail gets delivered OK to almost every other server I send mail to, except for one (smtp.chello.nl, which bounces my e-mail). In my /var/log/mail.err I found the below error. Sep 23 08:59:33 <MYHOSTNAME> postfix/smtp[26944]: 3DB2B1456149: to=<<RECIPIENT>@chello.nl>, relay=smtp.chello.nl[213.46.255.2]:25, delay=2, delays=0.02/0.01/1.9/0.04, dsn=5.1.0, status=bounced (host smtp.chello.nl[213.46.255.2] said: 550 5.1.0 Dynamic/Generic hostnames are blocked. Please contact your Email Provider. Your IP was <MY IP>. Your hostname was ??. (in reply to MAIL FROM command)) What could be the cause of this? I did an SMTP check on mxtools.com and got the following:
    OK - Not an open relay
    OK - 0 seconds - Good on Connection time
    OK - 1.482 seconds - Good on Transaction time
    OK - 83.161.xx.xx resolves to a83-161-xx-xx.xxx.xxx.nl
    WARNING - Reverse DNS does not match SMTP Banner

    Read the article

  • Virtual SMTP not sending mails

    - by DoStuffZ
    Hi, I have been googling for the better part of the last two hours without reaching any conclusion. My mails are not being sent from the production webserver. If I stop/start the Virtual SMTP I get this in the event log: No usable TLS server certificate for SMTP virtual server instance '1' could be found. TLS will be disabled for this virtual-server. We recently updated the web application, and I assume something went amiss during that. Googling the message straight up gave me a list that might just as well have been in Greek. I found a security certificate on the server; installing it gave no change. I basically played Russian roulette with the certificate file (.cer), though I was somewhat certain it would not have a negative effect (Russian roulette with a 6-chamber gun and 2 bullets). I found a .pfx in our local documentation folder, though I'm far from certain it would have a positive effect (6 chambers and 5 bullets). I found a site describing how Virtual SMTP - Properties - Access should have a button saying Certificates; I have a text saying "Did not find any TLS certificates" and a grayed-out tick box saying "Require TLS certificate". I found that TLS is SSL ver 3.1+ (3.1-3.3). So the question is: how do I get the SMTP server to once again send e-mails, like before?

    Read the article

  • apache Client Certificate Authentication errors: Certificate Verification: Error (18): self signed certificate

    - by decoy
    So I have been following instructions on setting up Client Certificate Authentication in Apache2 w/ mod_ssl. This is solely for the purpose of testing an application against CAA, not for any sort of production use. So far I've followed http://www.impetus.us/~rjmooney/projects/misc/clientcertauth.html for advice on generating my CA, server, and client encryption information. I've put all three of them into /etc/ssl/ca/private. I've set up the following additional directives in my default_ssl site file:
    <IfModule mod_ssl.c>
      <VirtualHost _default_:443>
        ...
        SSLEngine on
        SSLCertificateFile /etc/ssl/ca/private/server.crt
        SSLCertificateKeyFile /etc/ssl/ca/private/server.key
        SSLVerifyClient require
        SSLVerifyDepth 2
        SSLCACertificatePath /etc/ssl/ca/private
        SSLCACertificateFile /etc/ssl/ca/private/ca.crt
        <Location />
          SSLRequireSSL
          SSLVerifyClient require
          SSLVerifyDepth 2
        </Location>
        <FilesMatch "\.(cgi|shtml|phtml|php)$">
          SSLOptions +StdEnvVars
        </FilesMatch>
        <Directory /usr/lib/cgi-bin>
          SSLOptions +StdEnvVars
        </Directory>
        ...
      </VirtualHost>
    </IfModule>
    I've installed the p12 file into Chrome, but when I go to visit https://localhost, I get the following errors. Chrome: Error 107 (net::ERR_SSL_PROTOCOL_ERROR): SSL protocol error. Apache: Certificate Verification: Error (18): self signed certificate. If I had to guess, one of my directives is not set up right to load and verify the p12 with my self-created CA, but I can't for the life of me figure out which it is. Would anyone with more experience here point me in the right direction?

    Read the article

  • How do I upgrade PHP on CentOS and Kloxo?

    - by Emerson
    I need to upgrade PHP so that I can upgrade Joomla on my dedicated server. I have: Kloxo 6.1.6, php-5.2.17-1, Linux CentOS-55-64-minimal 2.6.18-194.32.1.el5 x86_64 x86_64 x86_64 GNU/Linux. I searched everywhere and could only find that PHP 5.3 isn't compatible with Zend. I would like to upgrade to at least 5.2.4, which is the minimum for Joomla 1.6 and 1.7. I tried to run yum update php.x86_64, which is the installed PHP package, but it didn't work. This is a production server with quite a few users across many sites, so I wanted to do it as safely as possible. Is it safe to run "yum update"? It showed me 6 packages to install and 125 packages to update, including a kernel. Is that safe? I haven't touched Kloxo's yum repositories. Update: I just successfully ran "yum update". Now I think I need to know how to add a new repository that has 5.2.4 or later and how to update to that specific version. Any ideas?

    Read the article

  • How to mount a LOFS in Solaris that doesn’t cross mountpoints

    - by jcea
    I need to access my "root" ZFS dataset to delete a file under "/var", but "/var" is overlaid by another ZFS dataset. Since these are system datasets I can't "umount" them while the machine is running, and I want to avoid rebooting the system in "failsafe" mode, since this is a production machine. Theoretically ZFS would refuse to mount the "/var" dataset over the underlying "/var", because it is not empty. But it works, possibly because they are system datasets mounted early in the boot process. Having the underlying "/var" not empty is preventing me from creating an ABE (Alternate Boot Environment), so patching is risky, and I can't upgrade my system using Live Upgrade. The machine is remote. I have an IP KVM, but I would rather avoid booting this machine in "failsafe" mode, if I can. I know there is a file in "/var" because I can snapshot the "root" dataset and check it. But snapshots are read-only, so I can't get rid of the file. I tried "mkdir /tmp/zzz; mount -F lofs / /tmp/zzz", but when I go to "/tmp/zzz/var", I see the "/var" dataset, not the underlying "root" dataset. That is, the LOFS is crossing mountpoints. I would usually like that, but not this time! Any suggestion, besides rebooting the machine in "failsafe" mode and messing with it through the IP KVM?

    Read the article

  • WebLogic embedded LDAP crashes

    - by Spiff
    Our production admin server (WebLogic 10.3.5 running on Solaris 10) crashes from time to time. Logs show tons of these errors (several each minute):
    <1-Jun-2012 2:28:34 o'clock AM EDT> <Critical> <EmbeddedLDAP> <BEA-000000> <java.lang.NullPointerException
      at weblogic.socket.DevPollSocketMuxer.cleanupSocket(DevPollSocketMuxer.java:150)
      at weblogic.socket.DevPollSocketMuxer.cancelIo(DevPollSocketMuxer.java:166)
      at weblogic.socket.SocketMuxer.deliverExceptionAndCleanup(SocketMuxer.java:836)
      at weblogic.socket.SocketMuxer.deliverEndOfStream(SocketMuxer.java:760)
      at weblogic.ldap.MuxableSocketLDAP$LDAPSocket.close(MuxableSocketLDAP.java:128)
      at com.octetstring.vde.Connection.close(Connection.java:166)
      at com.octetstring.vde.WorkThread.executeWorkQueueItem(WorkThread.java:89)
      at weblogic.ldap.LDAPExecuteRequest.run(LDAPExecuteRequest.java:50)
      at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:528)
      at weblogic.work.ExecuteThread.execute(ExecuteThread.java:209)
      at weblogic.work.ExecuteThread.run(ExecuteThread.java:178)
    Eventually, the admin server runs out of memory:
    <1-Jun-2012 12:29:59 o'clock PM EDT> <Error> <Kernel> <BEA-000802> <ExecuteRequest failed java.lang.OutOfMemoryError: GC overhead limit exceeded.
    One does not necessarily cause the other, but it seems like a pretty good fit. When inspecting the WebLogic code, we see this:
    void cleanupSocket(MuxableSocket paramMuxableSocket, SocketInfo paramSocketInfo) {
        this.sockRecords[paramSocketInfo.getFD()] = null; // DevPollSocketMuxer.java:150
        super.cleanupSocket(paramMuxableSocket, paramSocketInfo);
    }
    protected void cancelIo(MuxableSocket paramMuxableSocket) {
        super.cancelIo(paramMuxableSocket);
        cleanupSocket(paramMuxableSocket, paramMuxableSocket.getSocketInfo()); // DevPollSocketMuxer.java:166
    }
    So paramMuxableSocket.getSocketInfo() would be null. I'm at a loss for explaining this... Anyone have an idea? Thanks!

    Read the article

  • Can I change the file system on the OS partition on Server 2008 R2?

    - by KCotreau
    I have a client using R1Soft Continuous Data Protection backup, and two of the Server 2008 R2 boxes were erroring out with these errors: Unable to obtain NTFS volume data for device '\\?\Volume{f612849e-7125-11e0-8772-806e6f6e6963}': Incorrect function. Unable to discover information for filesystem volume '\\?\Volume{f612849e-7125-11e0-8772-806e6f6e6963}'; Unable to obtain NTFS volume. So I backed up all the registry entries with {f612849e-7125-11e0-8772-806e6f6e6963} in them and deleted them, based on some VERY sparse info from R1Soft. I then decided to restore them before I rebooted, and to do a system state backup first using MS backup - and even it errored out, saying that there were FAT32 partitions. This was a major clue, as the only two computers with problems had these FAT32 partitions. I figured if MS backup can't back up something, any other program is likely to have problems. Also, now that I've realized the servers have FAT32 partitions on them, the error referencing NTFS takes on more weight. The partitions on both servers have the label "OS"; on one of the computers the partition is assigned a letter, but on the other it is not. So I am thinking that if I just convert the file systems from FAT32 to NTFS, it may solve the backup problem. So the question is this: can I just convert those partitions, and does anyone have any concrete knowledge of any major downsides, like the servers not coming back up? (Of course, I would do one at a time.) My thinking is that the answer is probably at least 95% "no downsides", but they are production servers, so I wanted to get some second opinions.

    Read the article

< Previous Page | 338 339 340 341 342 343 344 345 346 347 348 349  | Next Page >