Search Results

Search found 8589 results on 344 pages for 'pre production'.

Page 244/344

  • Developing a Support Plan for Cloud Applications

    - by BuckWoody
    Last week I blogged about developing a High-Availability plan. The specifics of a given plan aren't as simple as "Step 1, then Step 2" because in a hybrid environment (which most of us have) the situation changes the requirements. There are those who look for simple "template" solutions, but unless you settle on a single vendor and a single way of doing things, that's not really viable. The same holds true for support. As I've mentioned before, I'm not fond of the term "cloud", and would rather use the term "Distributed Computing". That being said, more people understand the former, so I'll just use that for now. What I mean by Distributed Computing is leveraging another system or setup to perform all or some of a computing function. If this definition holds true, then you're essentially creating a partnership with a vendor to run some of your IT - whether that be IaaS, PaaS or SaaS, or more often, a mix. In your on-premises systems, you're the first and sometimes only line of support. That changes when you bring in a Cloud vendor. For Windows Azure, we have plans for support that you can pay for if you like: http://www.windowsazure.com/en-us/support/plans/ You're not off the hook entirely, however. You still need to create a plan to support your users in their applications, especially for the parts you control. The last thing they want to hear is "That's vendor X's problem - you'll have to call them." I find that this is often the last thing the architects think about in a solution. It's fine to put off the support question prior to deployment, but I would hold off on calling it "production" until you have that plan in place. There are lots of examples, like this one: http://www.va-interactive.com/inbusiness/editorial/sales/ibt/customer.html some of which are technology-specific. Once again, this is an "it depends" kind of approach. While it would be nice if there was just something in a box we could buy, it just doesn't work that way in a hybrid system. You have to know your options and apply them appropriately.

    Read the article

  • WebLogic Partner Community Newsletter October 2012

    - by JuergenKress
    Dear WebLogic partner community member, Oracle OpenWorld and JavaOne are just over, with lots of product updates and highlights. In this newsletter you will find the key information on many new products and launches. Make sure you download the presentation from our WebLogic Community Workspace (WebLogic Community membership required), to train yourself and for your next customer meeting. Thanks for all the tweets #WebLogicCommunity, the pictures at our Facebook page and the nice blog posts from Guido & Lucas & Jan. JavaOne was a super success - JavaOne 2012: Strategy and Technical Keynote - Java 2.5 years after the acquisition - IDC report - make the future Java! If you want to become a Java Expert, make sure you attend one of our WebLogic 12c Bootcamps or our first ExaLogic Hackers Night - November 19th, Nürnberg, Germany. All developers can use WebLogic free of charge! For developers, there is lots of ADF news on Oracle ADF Essentials & ADF training material now on the iPad by Grant Ronald & GlassFish Extension for Oracle JDeveloper & Installing, Configuring, and Testing WebLogic Server 12c Developer Zip Distribution in NetBeans. If you want to become a certified WebLogic company, WebLogic Server 12c Specialization is now available for you. You just need to go to the Knowledge Zone section, select the “Specialization” tab and click on “Apply Now”. Now available: WebLogic Server 12c Implementation Specialist Boot Camp LVT. Now in Production: Oracle WebLogic Server 12c Implementation Specialist certification (1Z0-599). In our specialization benefit series we highlight this month the opportunity to promote your WebLogic services by Google ads. Torsten Winterberg, OFM ACE Director, published Mobile Web Applications – A guide for professional development. Please feel free to let us know if you publish a book or article! Hope to see you at the Middleware Day at the UK Oracle User Group Conference 2012 in Birmingham. Jürgen Kress, Oracle WebLogic Partner Adoption EMEA. To read the newsletter please visit http://tinyurl.com/WebLogicnewsOctober2012 (OPN account required). To become a member of the WebLogic Partner Community please register at http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: WebLogic Community newsletter,newsletter,WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • How best to look up objects by label?

    - by dsollen
    I am writing the server backed by a pre-written API. I'm going to get a number of strings representing ports, signals, paths, etc. I need to look up the object associated with a given label; these objects are all in memory (no SQL magic to do this for me). My question is, how best do I associate a given unique label with the mutable object it represents? I have enough objects that looking through every signal or every port to find the one that matches is possible, but may be slightly too slow. To be honest the direct 'look at every object' method is probably good enough for so small a body of objects and anything else is premature optimization, but I am still curious what the proper solution would be if I thought my signals were going to grow a bit larger. As I see it there are two options available. The first would be to create a 'store' that is a simple map between object and label. I could have it so that every time I call addObject the object is automatically saved into a hashmap or the like. This works, but relies on my properly adding and deleting each object so the map doesn't grow indefinitely. The biggest issue to me is that this involves having some hidden static map in my ModelObject class that just feels... wrong somehow. The other option is to have some method that can interpret the labels. All of these labels are derived from the underlying objects. So I can look at the signal label, for instance, and say "these 20 characters are the port" to figure out what port I need. This would allow me to quickly figure out what I need. However, if the label method is changed, the translateLabelToObject method needs to be updated as well or everything breaks. Which solution is cleaner, or is there a cleaner solution than either of the above? For the record I'm working with a sufficient number of variables to make direct comparison a little slow, but not enough to be concerned about memory overhead, written in Java. All objects that have labels I need to look up extend the same parent class.
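
    As a rough illustration of the first "store" option, here is a minimal Java sketch of a label-keyed registry. ModelObject and getLabel() are hypothetical stand-ins for the real classes rather than names taken from the question.

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        // Hypothetical interface standing in for the real labeled parent class.
        interface ModelObject {
            String getLabel();
        }

        // Option 1: a registry keyed by label, kept in sync by addObject/deleteObject.
        final class LabelRegistry {
            private final Map<String, ModelObject> byLabel = new ConcurrentHashMap<>();

            // Called whenever an object is added, so the map stays current.
            void add(ModelObject obj) {
                byLabel.put(obj.getLabel(), obj);
            }

            // Called whenever an object is deleted, so the map cannot grow indefinitely.
            void remove(ModelObject obj) {
                byLabel.remove(obj.getLabel());
            }

            // O(1) lookup instead of scanning every signal or port.
            ModelObject lookup(String label) {
                return byLabel.get(label);
            }
        }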

    Read the article

  • Notes - Part I - Say Hello from Java

    - by Silviu Turuga
    Sometimes we need to take small notes to remember things; one way to do this is to use sticky notes and have them all around our desktop. But what happens if you have a lot of notes and a small office? You'll need a piece of software that will sort things for you and also provide a quick way to retrieve the notes when needed. Did I mention that this will keep your desktop clean and also reduce paper waste? Over the next few days we're going to create an application that will let you manage your notes, put them in different categories, etc. I'll show you step by step what you need to do, and finally you'll have the application running on multiple systems, such as Mac, Windows, Linux, etc. The only pre-requisite for this lesson is to have JDK 7 with JavaFX installed and an IDE, preferably NetBeans. I'll call this application Notes…. Part I - Say Hello from Java From NetBeans go to File->New Project Choose JavaFX->JavaFX FXML Application Project Name: Notes FXML name: NotesUI Check Create Application Class and name it Main After this the project is created and you'll see the following structure As a best practice I advise you to have your code in your own package instead of the default one. right click on Source Packages and choose New->Java Package name it something like this: com.turuga.notes and click Next after the package is created, select all the 3 files from step #3 and drag them over the new package choose Refactor, as this will make sure all the references are correctly moved inside the new package now you should have the following structure if you try to run the project you'll get an error: Unable to find class: Main right click on the project name Notes and click Properties go to Run and you'll see Application Class set to Main, but because we have defined our own package, this location has been changed, so click on Browse and the correct one appears: com.turuga.notes.Main the last modification before running the project is to right click on NotesUI.fxml and choose Edit (if you double-click it will open in JavaFX Scene Builder) look around line 9 and change fx:controller="NotesUIController" to fx:controller="com.turuga.notes.NotesUIController" now you are ready to run it and you should see the application window. In the next lesson we'll continue to play with NetBeans and start working on the interface of our project
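
    For reference, the Main class that NetBeans generates for a JavaFX FXML Application looks roughly like the sketch below once it has been moved into the com.turuga.notes package; the exact generated code and the window title are assumptions and may differ from what the wizard produces.

        package com.turuga.notes;

        import javafx.application.Application;
        import javafx.fxml.FXMLLoader;
        import javafx.scene.Parent;
        import javafx.scene.Scene;
        import javafx.stage.Stage;

        public class Main extends Application {

            @Override
            public void start(Stage stage) throws Exception {
                // Load the UI defined in NotesUI.fxml; its fx:controller attribute
                // must point at com.turuga.notes.NotesUIController after the refactor.
                Parent root = FXMLLoader.load(getClass().getResource("NotesUI.fxml"));
                stage.setScene(new Scene(root));
                stage.setTitle("Notes");
                stage.show();
            }

            public static void main(String[] args) {
                launch(args);
            }
        }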

    Read the article

  • SSMS Tools Pack 3.0 is out. Full SSMS 2014 support and improved features.

    - by Mladen Prajdic
    With version 3.0, SSMS 2014 is fully supported. Since this is a new major version you'll eventually need a new license. Please check the EULA to see when. As a thank you for your patience with this release, everyone who bought the SSMS Tools Pack after April 1st, the release date of SQL Server 2014, will receive a free upgrade. You won't have to do anything for this to take effect. The first thing you'll notice is that the UI has been completely changed. It's more in line with SSMS and looks less web-like. Also the core has been updated and rewritten in some places to be better suited for future features. Major improvements for this release are: Window Connection Coloring Something a lot of people have asked me over the last 2 years is if there's a way to color the tab of the window itself. I'm very glad to say that now there is. In SSMS 2012 and higher the actual query window tab is also colored at the top border with the same color as the already existing strip, making it much easier to see which server your query window is connected to even when a window is not focused. To make it even better, you can now also specify the desired color based on the database name and not just the server name. This makes it useful for production environments where you need to be careful about which database you run your queries in. Format SQL The Format SQL core was rewritten so it'll be easier to improve it in future versions. A new improvement is the ability to terminate SQL statements with semicolons. This is available only in SSMS 2012 and up. Execution Plan Analyzer A big request was to implement the Problems and Solutions tooltip as a window that you can copy the text from. This is now available. You can move the window around and copy text from it. It's a small improvement but better stuff will come. SQL History Current Window History has been improved with faster search and now also shows the color of the server/database it was run against. This is very helpful if you change your connection in the same query window, making it clear which server/database you ran the query on. The option to Force Save the history has been added. This is a menu item that flushes the execution and tab content history save buffers to disk. SQL Snippets Added an option to generate a snippet from selected SQL text on the right-click menu. Run script on multiple databases Configurable database groups that you can save and reuse were added. You can create groups of preselected databases to choose from for each server. This makes repetitive tasks much easier. New small team licensing option A lot of requests came in for a 1 computer, unlimited VMs option, so now it's here. Hope it serves you well.

    Read the article

  • Spotlight on an office - Moscow

    - by Maria Sandu
    Probably the most famous place in Moscow, after Red Square, is the city centre. Here you can find beautiful buildings that seem to touch the sky, located on the banks of the river. In one of these high towers you can find the Oracle offices, friendly and modern. The stunning view will capture your attention for a couple of minutes and then you can enjoy a delicious coffee and take a seat at your desk, starting a new day. My name is Dmitry and I can tell you that we’re enjoying every minute spent in the office, and that’s because of the pleasant atmosphere. As soon as you enter the offices, the friendly environment will make you feel more relaxed. Even though the space is split between the different departments, we interact and communicate a lot. We take our cup of coffee or tea together and discuss our achievements and all sorts of subjects in the kitchen or in the open space. One of my favorite parts is the festive events when we celebrate with cakes and goodies. Any birthday or new arrival is a good reason for a tea party! We have some work-related traditions that help us as employees. One of them is the monthly Tech Hour when experts from the Pre-sales team discuss technical topics and the most recent innovations within the company. Lunch is another good opportunity to interact and chat. We have a variety of options, such as the two kitchens or the vast number of restaurants where you can find anything you want. As we are right in the centre of Moscow, you can choose between sushi, Italian pasta and all sorts of food. We usually go with our colleagues to have lunch. If you care about your health, I have very good news for you, as nearby there are two first-class fitness centres with swimming pools, yoga and various sport classes that you can attend. My suggestion would be to either start or end your day with a visit to the swimming pool for a well-deserved hour of relaxation. As I mentioned before, we’re right in the heart of Moscow, so after work you can spend some time in the large shopping centres where you can choose between many different entertainment options. We often go bowling or to the cinema. I hope I have given you a glimpse into working life at the Oracle offices in Moscow, a really great and pleasant place to work, so follow us on http://campus.oracle.com for our latest vacancies and internships.

    Read the article

  • Misused mke2fs and cannot boot into system

    - by surlogics
    I installed Ubuntu with WUBI in Windows 7 64bit, and I had installed Mandriva 2011 with a disk. I tried to learn Linux with Ubuntu and misused mke2fs; after I rebooted my computer, both Windows 7 and Ubuntu had crashed. As I have Mandriva, I booted into Mandriva and found # df -h /dev/sda7 12G 9.8G 1.5G 88% / /dev/sda2 15G 165M 14G 2% /media/logical /dev/sda6 119G 88G 32G 74% /media/2C9E85319E84F51C /dev/sda5 118G 59G 60G 50% /media/D25A6DDE5A6DBFB9 /dev/sda9 100G 188M 100G 1% /media/ae69134a-a65e-488f-ae7f-150d1b5e36a6 /dev/sda1 100M 122K 100M 1% /media/DELLUTILITY /dev/sda3 98G 81G 17G 83% /media/OS # fdisk /dev/sda Command (m for help): p Disk /dev/sda: 500.1 GB, 500107862016 bytes 255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xd24f801e Device Boot Start End Blocks Id System /dev/sda1 2048 206847 102400 6 FAT16 /dev/sda2 * 206848 30926847 15360000 7 HPFS/NTFS/exFAT /dev/sda3 30926848 235726847 102400000 7 HPFS/NTFS/exFAT /dev/sda4 235728864 976771071 370521104 f W95 Ext'd (LBA) /dev/sda5 235728896 481488895 122880000 7 HPFS/NTFS/exFAT /dev/sda6 727252992 976771071 124759040 7 HPFS/NTFS/exFAT /dev/sda7 481500243 506674034 12586896 83 Linux /dev/sda8 506674098 514851119 4088511 82 Linux swap / Solaris /dev/sda9 514851183 727246484 106197651 83 Linux Partition table entries are not in disk order I think I may have used the following command: mke2fs -j -L "logical" /dev/sda2 but I have forgotten what kind of partition it was before I transferred it to ext3 - perhaps NTFS. Data was not lost, and I can view my files as I could in Windows. In Mandriva, there are the following disks: 117.2 GB hard disk, the files in it are the same as my Windows D:, and Ubuntu was installed in it; 119.0 GB hard disk is my G:, with my personal files in it; 12.0 GB is the Mandriva / (which means root), 101.3 GB hard disk with nothing but lost+found; DELLUTILITY should be Dell computer utilities pre-installed in my computer; logical is the disk which I had spoiled, I can view nothing but lost+found; and OS is the C: in my Windows. After I boot, GRUB lets me choose Mandriva or Windows. I chose Windows and it tells me: FILE system type unknown, partition type 0x7 Error 13: Invalid or unsupported executable format I suspect something is wrong with the Windows MBR or something # cat /boot/grub/menu.lst timeout 5 color black/cyan yellow/cyan gfxmenu (hd0,6)/boot/gfxmenu default 0 title linux kernel (hd0,6)/boot/vmlinuz BOOT_IMAGE=linux root=UUID=199581b7-ac7e-4c5f-9888-24c4f213cad8 nokmsboot logo.nologo quiet resume=UUID=34c546e4-9c42-4526-aa64-bbdc0e9d64fd splash=silent vga=788 initrd (hd0,6)/boot/initrd.img title linux-nonfb kernel (hd0,6)/boot/vmlinuz BOOT_IMAGE=linux-nonfb root=UUID=199581b7-ac7e-4c5f-9888-24c4f213cad8 nokmsboot resume=UUID=34c546e4-9c42-4526-aa64-bbdc0e9d64fd initrd (hd0,6)/boot/initrd.img title failsafe kernel (hd0,6)/boot/vmlinuz BOOT_IMAGE=failsafe root=UUID=199581b7-ac7e-4c5f-9888-24c4f213cad8 nokmsboot failsafe initrd (hd0,6)/boot/initrd.img title windows root (hd0,1) makeactive chainloader +1 I can boot into Linux, but not Ubuntu; it boots into Mandriva. I don't have a boot disk. Help me find a way to make it work again.

    Read the article

  • Normal Redundancy (Double Mirroring) Option Available

    - by TammyBednar
    The Oracle Database Appliance 2.4 Patch was released last week and provides the option of ASM normal redundancy (double mirroring) during the initial deployment of the Database Appliance. The default deployment of the Oracle Database Appliance is high redundancy for the +DATA and +RECO disk groups. While there is 12TB of raw shared storage available, the Database Backup Location and Disk Group Redundancy govern how much usable storage is presented after the initial deployment is completed. The Database Backup Location options are Local or External. When the Local Backup Option is selected, this means that 60% of the available shared storage will be allocated for the Fast Recovery Area that contains database backups and archive logs. The External Backup Option will allocate 20% of the available shared storage to the Fast Recovery Area. So, let’s look at an example of High Redundancy and External Backups. Disk Group Redundancy – High --> Triple Mirroring to provide ~4TB of available storage Database Backup Location – External --> 20% of available shared storage allocated to +RECO +DATA = 3.2TB of usable storage, +RECO = 0.8TB of usable storage What about Normal Redundancy with External Backups? Disk Group Redundancy – Normal --> Double Mirroring to provide ~6TB of available storage Database Backup Location – External --> 20% of available shared storage allocated to +RECO +DATA = 4.8TB of usable storage, +RECO = 1.2TB of usable storage As a best practice, we would recommend using Normal Redundancy for your test and/or development Oracle Database Appliances and High Redundancy for production.
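
    A quick sketch of that arithmetic, illustrative only: the 12TB raw figure and the 20% external-backup split come from the post above, and the helper class below is a hypothetical example, not an Oracle tool.

        // Illustrative only: reproduces the usable-capacity arithmetic described above.
        public final class OdaCapacitySketch {
            public static void main(String[] args) {
                double rawTb = 12.0;      // raw shared storage on the appliance
                double recoShare = 0.20;  // external backups: 20% of usable space goes to +RECO

                report("High redundancy / External backups", rawTb / 3, recoShare);   // triple mirror
                report("Normal redundancy / External backups", rawTb / 2, recoShare); // double mirror
            }

            private static void report(String label, double usableTb, double recoShare) {
                double reco = usableTb * recoShare;
                double data = usableTb - reco;
                // Prints 3.2/0.8 TB for high redundancy and 4.8/1.2 TB for normal redundancy
                System.out.printf("%s: +DATA = %.1f TB, +RECO = %.1f TB%n", label, data, reco);
            }
        }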

    Read the article

  • New laptop, Windows 8.1, attempting dual install. Ubuntu installer doesn't 'see' existing OS

    - by Flaminica
    Though I've used Ubuntu for a few years, I'm new to installation. Previously I had help and now I'm doing it alone (moved across the world). Windows 8.1 came preinstalled on my new laptop (Toshiba Satellite C70-A-17C - Core i5, 8 GB RAM, 750 GB HDD). I have already followed a few steps I found online to prepare for a dual install (with Ubuntu 14.04). I backed up Windows, created a bootable Ubuntu USB and DVD (just in case one didn't work), turned off fast boot and secure boot, and shrunk C:/. The new unallocated drive portion is 292.97 GB. After shrinking C:/, I restarted Windows a couple of times to make sure everything was working fine (it is). I then attempted to install with the Ubuntu live USB. However, the Ubuntu installer doesn't see that Windows 8.1 is already installed. I don't understand, and don't want to mess with Ubuntu partitioning when I don't know where the partitions will be created. My concern is that, if I go further with the installation process, Windows might be overwritten or compromised in some way. I then tried to reboot using the Ubuntu live DVD, thinking I might get a different result. However, I can't figure out how to make the laptop boot from the CD drive. I went into the BIOS and found no option there, either. Any help is very appreciated! EDIT: Looks like I can't link directly to each photo. Here is my album of screenshots: http://imgur.com/a/zChCo Here you can see that there's no option to boot from CD drive, only USB. Everything looks okay so far. I don't understand this. Ubuntu has not yet been installed. Unmounting partitions? (I chose 'no'.) Even though the laptop came pre-installed with Windows 8.1, the Ubuntu USB installer can't see it. I chose 'something else'. I need to pick and format partitions. I scrolled down and took a second shot to include all information. Completely lost and cancelled installation.

    Read the article

  • How are software projects 'typically' managed/deployed

    - by rguilbault
    My company is evaluating adopting off-the-shelf ALM products to aid in our development lifecycle; we currently use our own homegrown solutions to manage requirements gathering, specification documentation, testing, etc. One of the issues I am having is that we have what we call a pipeline, which consists of particular stops: [Source] - [QC] - [Production] At the first stop, the developer works out a solution to some requested change and performs individual testing. When that process is complete (and peer review has been performed), our ALM system physically moves the affected programs from the [Source] runtime environment to the [QC] runtime environment. You can think of this as analogous to moving some web pages from the 'test' server to the 'live' server, where QC personnel can bang on the system and complain that the developer has it all wrong ;-) Once QC signs off that the changes are working, the system again moves the code along to the next stage, where additional testing is performed, etc. I have been searching the internet for a few days trying to find how the process is accomplished anywhere else -- I have read a bit about builds, automated testing, various ALM products, etc., but nowhere does any of this state how builds interact with initial change requests, what the triggers are, how dependencies are managed, how the various forms of testing are accommodated (e.g. unit testing, integration testing, regression testing), etc. Can anyone point me to any resources or attempt to explain (generically) how a change could/should be tracked and moved through the development lifecycle? I'd be very appreciative. To keep things consistent, let's say that we have a project called Calculator, to which we want to add support for the basic trigonometric functions: sine, cosine and tangent. I'm open to reorganizing the company however we need to in order to accomplish due diligence testing and we can suppose that any tools are available for use (if that helps to illustrate the process). To start things off, I think I understand this much: we document the requirements, e.g.: support sine, cosine and tangent functions we create some type of change request/work order to assign to programming coding takes place, commits are made to version control peer review commences programmer marks the work order as completed? ... now what? How does QC do their thing? Would they perform testing before closing the 'work order'?

    Read the article

  • Down to the Wire - Yet More Solaris Things to See at OpenWorld (and JavaOne!)

    - by Larry Wake
    San Francisco is bracing for the annual invasion. The airport's jammed, the tweets are flying, and the numbers are crazy: more than 50,000 attendees and 2,500+ sessions, taking over Moscone Convention Center, two streets, Union Square, and seemingly every hotel in town (98,000 hotel room nights). So yeah, it's busy. And it's not just OpenWorld--we've also got JavaOne, MySQL Connect, and four other sub-events going on as well. Speaking of JavaOne, you can find Solaris-related activity there, too -- I've highlighted one hands-on lab below. Here's a last pre-event roundup of activities for consideration; enjoy the show(s)! (Remember, Schedule Builder is your friend; use it with the session numbers below to register.) Monday, October 1st: 3:15 PM - General Session: Accelerate Your Business with the Oracle Hardware Advantage(GEN9691, Moscone North Hall D) John Fowler, head of Oracle's Systems organization, will talk about Oracle hardware technology and how it's co-engineered with other key technologies, including Oracle Solaris. Tuesday, October 2nd: 10:15 AM - Building an IaaS Platform with SPARC, Oracle Solaris 11, and Oracle VM Server for SPARC(CON4431, Moscone South 270)Get the birds-eye lowdown (whatever that means) on how U.S. Cellular  built its Infrastructure as a Service (IaaS) cloud delivery platform with Oracle’s SPARC T4 servers, Oracle Solaris 11, Oracle Solaris Cluster 4, and Oracle VM Server for SPARC. The session covers the high-level design, business case made, implementation details, and lessons learned. 11:45 AM - Oracle Solaris 11 Panel: Insights and Directions from Oracle Solaris Core Engineering(CON8790, Moscone South 252) This has been one of the livelier Solaris-related sessions in years past (and I'm not saying that just because I get to moderate it this year). A panel of core engineers responsible for a wide range of key Solaris technologies will talk about some of the interesting work they've been doing -- but mostly we keep time open for the panel to take questions from attendees, because that's the fun part. Wednesday, October 3rd: 10:00 AM - Tracing Your Java Application Tuning on Oracle Solaris with DTrace(HOL10214, Hilton San Francisco, Franciscan A/B/C/D) This JavaOne hands-on lab will show how to use the DTrace framework to dynamically trace your Java applications on Oracle Solaris and uncover new tuning opportunities. Thursday, October 4th: 12:45 PM - Oracle Solaris 11: Optimized for Oracle Database, Oracle WebLogic Server, and Java(CON8800, Moscone South 252) Explore how Oracle Solaris 11 has been built to be the best platform for the cloud and enterprise applications, with built-in optimizations to improve performance and deliver unique functionality with Oracle Database, Oracle WebLogic Server, and Java.

    Read the article

  • Oracle Data Protection: How Do You Measure Up? - Part 1

    - by tichien
    This is the first installment in a blog series, which examines the results of a recent database protection survey conducted by Database Trends and Applications (DBTA) Magazine. All Oracle IT professionals know that a sound, well-tested backup and recovery strategy plays a foundational role in protecting their Oracle database investments, which in many cases, represent the lifeblood of business operations. But just how common are the data protection strategies used and the challenges faced across various enterprises? In January 2014, Database Trends and Applications Magazine (DBTA), in partnership with Oracle, released the results of its “Oracle Database Management and Data Protection Survey”. Two hundred Oracle IT professionals were interviewed on various aspects of their database backup and recovery strategies, in order to identify the top organizational and operational challenges for protecting Oracle assets. Here are some of the key findings from the survey: The majority of respondents manage backups for tens to hundreds of databases, representing total data volume of 5 to 50TB (14% manage 50 to 200 TB and some up to 5 PB or more). About half of the respondents (48%) use HA technologies such as RAC, Data Guard, or storage mirroring, however these technologies are deployed on only 25% of their databases (or less). This indicates that backups are still the predominant method for database protection among enterprises. Weekly full and daily incremental backups to disk were the most popular strategy, used by 27% of respondents, followed by daily full backups, which are used by 17%. Interestingly, over half of the respondents reported that 10% or less of their databases undergo regular backup testing.  A few key backup and recovery challenges resonated across many of the respondents: Poor performance and impact on productivity (see Figure 1) 38% of respondents indicated that backups are too slow, resulting in prolonged backup windows. In a similar vein, 23% complained that backups degrade the performance of production systems. Lack of continuous protection (see Figure 2) 35% revealed that less than 5% of Oracle data is protected in real-time.  Management complexity 25% stated that recovery operations are too complex. (see Figure 1)  31% reported that backups need constant management. (see Figure 1) 45% changed their backup tools as a result of growing data volumes, while 29% changed tools due to the complexity of the tools themselves. Figure 1: Current Challenges with Database Backup and Recovery Figure 2: Percentage of Organization’s Data Backed Up in Real-Time or Near Real-Time In future blogs, we will discuss each of these challenges in more detail and bring insight into how the backup technology industry has attempted to resolve them.

    Read the article

  • How to use tscon on Windows7?

    - by Radek
    I need to run overnight automation testing using RFT and IE on a Windows 7 virtual machine. I found that restarting the Windows box before the testing starts helps. I am moving the production environment from Windows XP to Windows 7. RFT used to complain when running RFT scripts that CRFCN0557E: Activation failed when running under a Terminal Services environment. This may be caused by using a minimized terminal window - try playing back without minimizing the terminal window (it does not need to be full-screen). Running tscon.exe 0 /dest:console prior to starting any RFT script fixes the error on Windows XP. But not on Windows 7. I did some research and was trying for hours to fix that but nothing helped. There is no screen saver turned on on Windows 7. I tried to run both but nothing helped. tscon.exe 0 /dest:console tscon.exe 1 /dest:console On Windows 7 tscon returns {ErrorPrintf(): LoadString failed, Error 15105, (0x00003B01)} Error [15105]:The resource loader cache doesn't have loaded MUI entry. Error [0]:The operation completed successfully. On Windows XP tscon returns Could not connect sessionID 0 to sessionname console, Error code 7045 Error [7045]:The requested session access is denied. I just double-checked that running tscon.exe 0 /dest:console on Windows XP solves the issue. I cannot understand the output of the tscon command, then. Any idea how I can run RFT scripts after I restart the Windows box automatically? Preferably without involving any other computer. I was even thinking of using the old Windows XP box to make a remote desktop session to make RFT happy. I hope there is a better solution than that.

    Read the article

  • Using dd-wrt Dynamic DNS client with CloudFlare

    - by Roman
    I'm trying to configure Dynamic DNS client on my router with dd-wrt (v24-sp2) firmware so it would dynamically change IP address in one of the DNS records. Unfortunately I encountered a problem… Here is an example request from their ddclient configuration: https://www.cloudflare.com/api.html?a=DIUP&u=<my_login>&tkn=<my_token>&ip=<my_ip>&hosts=<my_record> It works if I use it in browser, but in dd-wrt I get this output: Tue Jan 24 00:36:47 2012: INADYN: Started 'INADYN Advanced version 1.96-ADV' - dynamic DNS updater. Tue Jan 24 00:36:47 2012: I:INADYN: IP address for alias '<my_record>' needs update to '<my_ip>' Tue Jan 24 00:36:48 2012: W:INADYN: Error validating DYNDNS svr answer. Check usr,pass,hostname! (HTTP/1.1 303 See Other Server: cloudflare-nginx Date: Mon, 23 Jan 2012 14:36:48 GMT Content-Type: text/plain Connection: close Expires: Sun, 25 Jan 1981 05:00:00 GMT Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Pragma: no-cache Location: https://www.cloudflare.com/api.html?a=DIUP&u=<my_login>&tkn=<my_token>&ip=<my_ip>&hosts=<my_record> Vary: Accept-Encoding Set-Cookie: __cfduid=<id>; expires=Mon, 23-Dec-2019 23:50:00 GMT; path=/; domain=.cloudflare.com Set-Cookie: __cfduid=<id>; expires=Mon, 23-Dec-2019 23:50:00 GMT; path=/; domain=.www.cloudflare.com You must include an `a' paramiter, with a value of DIUP|wl|chl|nul|ban|comm_news|devmode|sec_lvl|ipv46|ob|cache_lvl|fpurge_ts|async|pre_purge|minify|stats|direct|zone_check|zone_ips|zone_errors|zone_agg|zone_search|zone_time|zone_grab|app|rec_se URL from "Location" works perfectly and parameter "a" is included. What's the problem?

    Read the article

  • Unable to Install SQL Server on Server 2012

    - by Jeff
    The problem: I have been trying to install SQL Server 2012 on Windows Server 2012. I continually get the same error: Managed SQL Server Installer has stopped working Problem signature: Problem Event Name: CLR20r3 Problem Signature 01: scenarioengine.exe Problem Signature 02: 11.0.3000.0 Problem Signature 03: 5081b97a Problem Signature 04: Microsoft.SqlServer.Chainer.Setup Problem Signature 05: 11.0.3000.0 Problem Signature 06: 5081b97a Problem Signature 07: 18 Problem Signature 08: 0 Problem Signature 09: System.IO.FileLoadException OS Version: 6.2.9200.2.0.0.272.79 Locale ID: 1033 Additional Information 1: c319 Additional Information 2: c3196e5863e32e0baf269d62f56cbc70 Additional Information 3: 422d Additional Information 4: 422d950c58f4efd1ef1d8394fee5d263 What I've tried: After initial googling, I've tried the following things: Go through the list of hardware and software pre-reqs. All the software seems to be there by default on Server 2012 and my hardware meets the reqs. Copy the installation media to the local drive and try to install from that (rather than a DVD). This produced the same error. Based on another error message, I installed .NET 4.0 (which apparently is not on Server 2012 out of the box). Same error. Install from the command line. This didn't work either, but it gave me a different error: Error: Unhandled Exception: System.IO.FileLoadException: Could not load file or assembl y 'Microsoft.SqlServer.Configuration.Sco, Version=11.0.0.0, Culture=neutral, Pub licKeyToken=89845dcd8080cc91' or one of its dependencies. Strong name validation failed. (Exception from HRESULT: 0x8013141A) ---> System.Security.SecurityExcep tion: Strong name validation failed. (Exception from HRESULT: 0x8013141A) --- End of inner exception stack trace --- at Microsoft.SqlServer.Chainer.Infrastructure.InputSettingService.CheckForBoo leanInputSettingExistenceFromCommandLine(ServiceContainer context, String settin gName) at Microsoft.SqlServer.Chainer.Setup.Setup.DebugBreak(ServiceContainer contex t) at Microsoft.SqlServer.Chainer.Setup.Setup.Main() Any ideas what I am missing?

    Read the article

  • Cannot reactivate RAID-5 volume: The size of the plex member is invalid

    - by Ian Boyd
    We had a 3-drive Windows Server 2008 R2 RAID-5 fail (operating in redundancy mode): WDC 1 TB WDC 1 TB WDC 1 TB We removed the failed hard drive, and put a WDC 1 TB drive (that we had standing by) into the machine. When launched, Disk Manager asked permission to "initialize" the disk as either: Master Boot Record (MBR) Guid Partition Table (GPT) We initialized the disk as GPT, converted it to dynamic, and tried to use the Repair Volume command - except it was greyed out. (which is a terrifying thing on a failed production server hosting 3 virtual servers) I tried from the diskpart command-line tool. First we look for our RAID-5 volume that is in Failed Rd mode: DISKPART> list volume Volume ### Ltr Label Fs Type Size Status Info ---------- --- ----------- ----- ---------- ------- --------- -------- Volume 0 E VMs (Raid5) NTFS RAID-5 1863 GB Failed Rd Volume 1 D DVD-ROM 0 B No Media Volume 2 System Rese NTFS Partition 100 MB Healthy System Volume 3 C NTFS Partition 1862 GB Healthy Boot There, Volume 0. Make that our active context: DISKPART> select volume 0 Volume 0 is the selected volume. Now we need to find the disk we will be repairing the volume with: DISKPART> list disk Disk ### Status Size Free Dyn Gpt -------- ------------- ------- ------- --- --- Disk 0 Online 931 GB 0 B * Disk 1 Online 931 GB 931 GB * Disk 2 Online 1863 GB 0 B Disk 3 Online 931 GB 0 B * Disk M0 Missing 0 B 0 B * The disk with 931 GB free, Disk 1. Now we just need to repair the volume: DISKPART> repair disk=1 Virtual Disk Service error: The size of the plex member is invalid.

    Read the article

  • How do I install and use the cli53 tools on Windows?

    - by pavlos
    I'm trying to find the simplest way to import a large number of BIND zone files into Route 53. I've had a quick look at the AWS CLI and AWS Tools for Windows PowerShell but they don't seem to include a zone file import option like the AWS Route 53 GUI does. The cli53 utility, on the other hand, does, but it is written in Python and appears to have a series of pre-requisites to get going which I'm having trouble working out for Windows. I can find plenty of examples of setting it up under Linux but only one reference to a PowerShell example here, and it doesn't explain how to install cli53 in the first place. The other option I'm exploring is to use the BIND to Amazon Route 53 Conversion Tool Perl script to first convert the zone files to the Route 53 CreateHostedZoneRequest XML format and then use the AWS New-R53HostedZone PowerShell cmdlet to import the zones. After the zones have been imported I'll be looking at running a script to validate that what has been created in Route 53 matches the existing nameserver prior to updating each domain's nameserver records - I was planning on whipping something up using the new PS4.0 Resolve-DnsName cmdlet, but let me know if you have any better suggestions. Any assistance would be greatly appreciated - thanks. (By the way, I had more reference links in my post but ServerFault won't allow me to post more than 2 links as a new member; and for this same reason I also can't comment on Vasili's example in the other linked thread.)

    Read the article

  • Configure IIS7.5 to allow calls to asmx web services.

    - by goodeye
    Hi, I migrated a site from IIS6 to Windows Server 2008 R2 IIS7.5. It has an asmx web service, which is working fine locally, but returns this 500 error when called from another machine: Request format is unrecognized for URL unexpectedly ending in /myMethodName The solution in previous versions is to add this to the web.config for the protocols needed (typically omitting HttpGet for production): <system.web> <webServices> <protocols> <add name="HttpGet" /> <add name="HttpPost" /> <add name="HttpSoap" /> </protocols> </webServices> </system.web> This is posted everywhere, including http://stackoverflow.com/questions/657313/request-format-is-unrecognized-for-url-unexpectedly-ending-in For IIS7.5, this throws a configuration error; I understand this section doesn't belong, but tried it anyway. I also boiled down the asmx call to a simple hello world. I tested with POST also, just to eliminate any issues with GET. What is the equivalent for IIS7.5? - either web.config format or the UI button to push would be really helpful. Thanks, Bob

    Read the article

  • Windows 2008, IIS7 and virtual directories

    - by Thomas
    I created a virtual directory called test (C:\test) under the Default Web Site and added two simple test files (one html and one aspx). I thought I had to add the IUSR and NetworkService (for application pools) to C:\test and grant the users appropriate rights in order for IIS7 to serve the content. It appears that is not the case at all, as I can view any files in the virtual directory (even if I convert it to an application) without changing or adding any security settings on the C:\test folder. I just installed IIS7 with ASP.NET on Windows 2008 without changing any settings besides adding the virtual directory. Am I missing something? Even my book on IIS7 states that the user accounts should be added and appropriate rights should be granted. I added the following to answer the comments: I am referencing the file using a public IP http://xxx.xxx.xxx.xxx/test/one.html and neither the IP nor localhost is in my trusted sites. I am not signed in on the server at all as I am accessing the content from my home machine and the content is on my production server. The following users/groups have access to c:\test on the server (Creator Owner, System, Administrators, Users) and the app pool is running under the default NetworkService account. I basically installed win2008 and added the IIS role with ASP.NET. I then opened IIS7, added a virtual directory and copied two files to the directory to test. It works, which is great, but I want to understand why it works. How is it that IIS7 can access files in the C:\test folder without any permissions set?

    Read the article

  • SQL Server: How to shrink FileStream files?

    - by J4N
    For a project, I'm using SQL Server 2008 R2. One table has a filestream column. I've run some load tests, and now the database has ~20GB used. I've emptied the tables, except several (configuration tables). But my database was still using a lot of space. So I used the Task -> Shrink -> Database / Files. But my database is still using something like 16GB. I found that it's the filestream file that is still using a lot of space. The problem is that I need to back up this database to export it to the final production server, and even if I indicate to compress the backup I get a file of more than 3.5 GB. Not convenient to store and upload. And I'm planning much bigger tests, so I want to know how to shrink that empty space. When I try, I get this exception: The properties SIZE, MAXSIZE, or FILEGROWTH cannot be specified for the FILESTREAM data file 'FileStreamFile'. (Microsoft SQL Server, Error: 5509) So what should I do? I found several topics with this error but they were about removing the filestream column.

    Read the article

  • My SMTP's outgoing mail gets bounced

    - by BloodPhilia
    I've got an ISPconfig 3 production server set up, running Ubuntu Server 9.04. My e-mail gets delivered OK to almost every other server I send mail to except for one (smtp.chello.nl, which bounces my email). In my /var/log/mail.err I found the error below. Sep 23 08:59:33 <MYHOSTNAME> postfix/smtp[26944]: 3DB2B1456149: to=<<RECIPIENT>@chello.nl>, relay=smtp.chello.nl[213.46.255.2]:25, delay=2, delays=0.02/0.01/1.9/0.04, dsn=5.1.0, status=bounced (host smtp.chello.nl[213.46.255.2] said: 550 5.1.0 Dynamic/Generic hostnames are blocked. Please contact your Email Provider. Your IP was <MY IP>. Your hostname was ??. (in reply to MAIL FROM command)) What could be the cause of this? I did an SMTP check on mxtools.com and got the following: OK - Not an open relay OK - 0 seconds - Good on Connection time OK - 1.482 seconds - Good on Transaction time OK - 83.161.xx.xx resolves to a83-161-xx-xx.xxx.xxx.nl WARNING - Reverse DNS does not match SMTP Banner Update: My IP is static.

    Read the article

  • IPtables AWS EC2 NAT/Reverse NAT - For Reverse Proxy style setup but with IPtables

    - by Mark
    I was initially thinking I'd need to do a reverse proxy or something so I could make some SSL/TLS traffic look like it is being terminated at a server and IP address in the AWS cloud, and then that traffic is forwarded on to our actual web servers that aren't in the cloud... I've not done much iptables PREROUTING and POSTROUTING before, or DNAT and SNAT, which I know are the things I need (or a combination of them) in order to achieve what I'm trying to do. Things to note: Client/User - must not be able to see the backend IP address, only the IP address of the cloud box. HTTPS (TLS/SSL) - the connection shouldn't be terminated at the cloud box; it should act almost like a router. EC2 instance - has only one network interface available to play with... this is thus an (internet <- internet) type of routing going on. The EC2 instance IP address is already more or less behind a NAT that I have no control over; for example, the public IP address could be 46.1.1.1 but the instance IP will be 10.1.1.1. Connections from the client will go to 46.1.1.1, which will end up at the instance on interface 10.1.1.1. The connection from the client then needs to be forwarded (DNAT) onto the backend web servers, which are back out on the internet (SNAT). Possibly part of the problem could be that the SNAT will need to be set to the external interface of the instance, and I wonder if this makes it harder for iptables to track the connection? So basically I'm looking to have it look as though connections are terminating at this server and its IP address, whereas all that's really happening is the HTTPS request and connection is being forwarded straight onto another internet-facing web server. How possible does that sound?

    Read the article

  • Word 2007 crashes on Server 2008 R2 terminal services

    - by John Rennie
    We are finding that Word 2007 (with SP2) crashes when used on a Windows 2008 R2 terminal server. Typically it crashes when you click File/Open or File/Save, but not every time. Maybe one time in four, and just to be really confusing, on a test server in my office I can't make it crash. I have just today set up a brand new shiny 2k8 R2 terminal server with as simple a setup as possible, e.g. no anti-virus to confuse things, and we're still seeing crashes. My question is: has anyone else seen this, and if so, any clues on what's happening? We have a support case open with Microsoft, and the MS support engineer has conceded it's happening, but has so far been unable to find the reason. One possible factor is that all the 2k8 R2 terminal servers I've seen this on have been Hyper-V VMs (running on a 2k8 R2 host). I'm about to put in a physical 2k8 R2 terminal server at the customer where we're seeing the most crashes, in case this is relevant. More news soon. Sorry if this posting seems a bit vague, but this has just bitten us and is causing a lot of pain and sleepless nights :-( If anyone can help I'll be enormously grateful! Update: we've given up and gone back to 2008 pre-R2. Both Office 2003 and 2007 work fine now. I think there are some problems with TS in R2. Googling doesn't find much, so I thought it was just me. It's reassuring to find that someone else has seen the same problem.

    Read the article

  • Virtual SMTP not sending mails

    - by DoStuffZ
    Hi, I have been googling for the better part of the last two hours without finding any conclusion. My mails are not being sent from the production web server. If I stop/start the Virtual SMTP server I get this in the event log: No usable TLS server certificate for SMTP virtual server instance '1' could be found. TLS will be disabled for this virtual-server. We recently updated the web application running on it and I assume something went amiss during that. Googling the message straight up gave me a list that could just as well have been in Greek. I found a security certificate on the server, but installing it changed nothing. I basically played Russian roulette with the certificate file (.cer), though I was somewhat certain it would not have a negative effect. (Russian roulette with a 6-chamber gun and 2 bullets.) I found a .pfx in our local documentation folder, though I'm far from certain that it will have a positive effect. (6 chambers and 5 bullets.) I found a site describing how Virtual SMTP - Properties - Access should have a button saying Certificates. I have text saying "Did not find any TLS certificates" and a grayed-out tick box saying "Require TLS certificate". I found that TLS is SSL ver 3.1+ (3.1-3.3). So the question is: how do I enable the SMTP server to once again send emails, like before?

    Read the article
