Search Results

Search found 8998 results on 360 pages for 'upgrade failure'.

Page 104/360

  • Get Proactive: automatic support offers advantages

    - by A&C Redaktion
    "Proactive" means acting instead of waiting, taking the initiative instead of merely reacting. In that spirit, the "Get Proactive" campaign aims to encourage Oracle Premier Support customers to take a forward-looking, assertive approach to support cases. The automated system support, which can give Oracle partners and customers a clear lead over the competition, covers three areas: Prevent, Resolve and Upgrade. "Prevent" covers all preventive measures, whose goal is to uncover and solve a potential problem before it has a negative impact. For example, product-specific security alerts can be sent to you, as can patch recommendations and risk warnings tailored to your particular system. "Resolve" stands for the goal of solving problems that do occur faster and in a more targeted way; this requires the right diagnostic tools and measures. Specific information for individual systems is available in the Product Information Center, and auto-detect tools help find solutions to known problems. Partners and users in the Online Support Community offer valuable pointers, as does the extensive knowledge base in MOS (My Oracle Support). "Upgrade", as the name suggests, bundles steps that minimize risk by supporting you through an upgrade. Anyone can check their own environment against the list of certified products, the Upgrade Advisor shares tips and tricks with best practices for a wide range of products, processes and versions, and the patch and upgrade plan makes planning a system upgrade easier. You will find detailed information on the Oracle Support web pages; simply enter "Get Proactive" in the search box.

    Read the article

  • New Wine in New Bottles

    - by Tony Davis
    How many people, when their car shows signs of wear and tear, would consider upgrading the engine and keeping the shell? Even if you're cash-strapped, you'll soon work out the subtlety of the economics, the cost of sudden breakdowns, the precious time lost coping with the hassle, and the low 'book value'. You'll generally buy a new car. The same philosophy should apply to database systems. Mainstream support for SQL Server 2005 ends on April 12; many DBAs, if they haven't done so already, will be considering the migration to SQL Server 2008 R2. Hopefully, that upgrade plan will include a fresh install of the operating system on brand new hardware. SQL Server 2008 R2 and Windows Server 2008 R2 are designed to work together. The improved architecture, processing power, and hyper-threading capabilities of modern processors will dramatically improve the performance of many SQL Server workloads, and allow consolidation opportunities. Of course, there will be many DBAs smiling ruefully at the suggestion of such indulgence. This is nothing like the real world, this halcyon place where hardware and software budgets are limitless, development and testing resources are plentiful, and third-party vendors immediately certify their applications for the latest-and-greatest platform! As with cars, or any other technology, the justification for a complete upgrade is complex. With servers, the extra cost at the time of upgrade will generally pay you back in terms of the increased performance of your business applications, and reduced maintenance costs, training costs and downtime. Also, if you plan and design carefully, it's possible to offset hardware costs with reduced SQL Server license costs. In his forthcoming SQL Server Hardware book, Glenn Berry describes a recent case where he was able to replace 4 single-socket database servers with one two-socket server, saving about $90K in hardware costs and $350K in SQL Server license costs. Of course, there are exceptions. If you do have a stable, reliable SQL Server 6.5 system that still admirably meets the needs of a specific business requirement, and has no security vulnerabilities, then by all means leave it alone. Why upgrade just for the sake of it? However, as soon as a system shows signs of being unfit for purpose, or is moving out of mainstream support, the ruthless DBA will make the strongest possible case for a belt-and-braces upgrade. We'd love to hear what you think. What does your typical upgrade path look like? What are the major obstacles? Cheers, Tony.

    Read the article

  • Oracle Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2) Now Available!

    - by Javier Puerta
    Oracle Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2) is now available on OTN on ALL platforms. This is the first major release since the launch of Enterprise Manager 12c in October 2011, and the first Enterprise Manager release ever to be available on all platforms simultaneously. It is primarily a stability release, incorporating fixes for many of the issues reported by early adopters along with their feedback; in addition, it contains many new features and enhancements across the board.

    New Capabilities and Features

    Enhanced management capabilities for enterprise private clouds: introduces new capabilities that allow customers to build and manage a Java Platform-as-a-Service (PaaS) cloud based on Oracle WebLogic Server. The new capabilities include guided setup of the PaaS cloud, self-service provisioning, automatic scale-out, and metering and chargeback.

    Enhanced lifecycle management capabilities for Oracle WebLogic Server environments: combines in-context management of multiple domains, patching, and configuration file synchronization.

    Integrated hardware-software management for Oracle Exalogic Elastic Cloud, through features such as rack schematics visualization and integrated monitoring of all hardware and software components.

    The latest management capabilities for business-critical applications include:

    Business Application Management: a new Business Application (BA) target type and dashboard with flexible definitions provides a logical view of an application's business transactions, end-user experiences and the cloud infrastructure the monitored application is running on.

    Enhanced User Experience Reporting: Oracle Real User Experience Insight has been enhanced to provide reporting on client-side issues for applications running in the cloud, and has been more tightly coupled with Oracle Business Transaction Management to help ensure that real-time user experience and transaction tracing data is provided to users in context.

    Several key improvements address ease of administration, reporting and extensibility for massively scalable cloud environments, including dynamic groups, self-updating monitoring templates, bulk operations against many events, and more.

    New and Revised Plug-ins

    Several plug-ins have been updated as part of this release, resulting in either new versions or revisions. Revised plug-ins contain only bug fixes, while new plug-ins incorporate both bug fixes and new functionality.

    Enterprise Manager for Oracle Database - 12.1.0.2 (revision)
    Enterprise Manager for Oracle Fusion Middleware - 12.1.0.3 (new)
    Enterprise Manager for Chargeback and Capacity Planning - 12.1.0.3 (new)
    Enterprise Manager for Oracle Fusion Applications - 12.1.0.3 (new)
    Enterprise Manager for Oracle Virtualization - 12.1.0.3 (new)
    Enterprise Manager for Oracle Exadata - 12.1.0.3 (new)
    Enterprise Manager for Oracle Cloud - 12.1.0.4 (new)

    Installation and Upgrade

    All major platforms have been released simultaneously: Linux 32/64-bit, Solaris (SPARC), Solaris x86-64, IBM AIX 64-bit, and Windows x86-64. Enterprise Manager 12.1.0.2 is a complete release that includes both the EM OMS and the Agent at version 12.1.0.2. You can perform a fresh install, or upgrade from EM 10.2.0.5, 11.1, or 12.1.0.1 (Bundle Patch 1 is not mandatory). Upgrading to EM 12.1.0.2 from EM 12.1.0.1 is not a patch application (as Bundle Patch 1 was) but is achieved through a 1-system upgrade.

    Documentation

    The Oracle Enterprise Manager Cloud Control Introduction document provides a broad overview of capabilities and highlights what's new in EM 12.1.0.2. All updated Oracle Enterprise Manager documentation can be found on OTN.

    Customer Webcast - EM 12c Installation and Upgrade: for customers interested in learning how to successfully deploy or upgrade to EM 12.1.0.2. The webcast takes place September 21; registration and info will be available on OTN starting September 12.

    Enterprise Manager 12c R2 Resources: OTN Download Page, Upgrade Guide

    Read the article

  • Oracle E-Business Suite 12 Certified on Additional Linux Platforms

    - by John Abraham
    As a follow-up to our original certification announcement regarding Oracle Linux 6, Oracle E-Business Suite Release 12 (12.1.1 and higher) is now certified on the following additional Linux x86/x86-64 operating systems:

    Oracle Linux 6 (32-bit)
    Red Hat Enterprise Linux 6 (32-bit)
    Red Hat Enterprise Linux 6 (64-bit)
    Novell SUSE Linux Enterprise Server (SLES) version 11 (64-bit)

    New installations of the E-Business Suite on these operating systems require version 12.1.1 of the Release 12 media. Cloning of existing 12.1 Linux environments to these new operating systems is also certified using the standard Rapid Clone process. There are specific requirements to upgrade technology components such as the Oracle Database (to 11gR2) and Fusion Middleware, as necessary. These and other requirements are noted in the Installation and Upgrade Notes (IUN) below.

    References:
    Oracle E-Business Suite Installation and Upgrade Notes Release 12 (12.1.1) for Linux x86-64 (My Oracle Support Document 761566.1)
    Oracle E-Business Suite Installation and Upgrade Notes Release 12 (12.1.1) for Linux x86 (My Oracle Support Document 761564.1)
    Cloning Oracle Applications Release 12 with Rapid Clone (My Oracle Support Document 406982.1)
    Interoperability Notes: Oracle E-Business Suite Release 12 with Oracle Database 11g Release 2 (11.2.0) (My Oracle Support Document 1058763.1)
    Oracle Linux website

    Read the article

  • Implementing a Mutex Lock in C

    - by Adam
    I'm trying to make a real mutex in C, and for some reason I'm getting cases where two threads get the lock at the same time, which shouldn't be possible. Any ideas why it's not working?

        void mutexLock(mutex_t *mutexlock, pid_t owner)
        {
            int failure;
            while (mutexlock->mx_state == 0 || failure || mutexlock->mx_owner != owner)
            {
                failure = 1;
                if (mutexlock->mx_state == 0)
                {
                    asm(
                        "test:"
                        "movl $0x01,%%eax\n\t"   // move 1 to eax
                        "xchg %%eax,%0\n\t"      // try to set the lock bit
                        "mov  %%eax,%1\n\t"      // export our result to a test var
                        "test %%eax,%%eax\n\t"
                        "jnz  test\n\t"
                        : "=r"(mutexlock->mx_state), "=r"(failure)
                        : "r"(mutexlock->mx_state)
                        : "%eax"
                    );
                }
                if (failure == 0)
                {
                    mutexlock->mx_owner = owner;   // test to see if we got the lock bit
                }
            }
        }
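
    For comparison, here is a minimal sketch of the same test-and-set idea built on GCC's atomic builtins rather than hand-rolled inline assembly. The mutex_t layout and the 0-free/1-locked convention here are assumptions for illustration, not the poster's actual definitions:

        #include <sys/types.h>   /* pid_t */
        #include <sched.h>       /* sched_yield */

        typedef struct {
            volatile int mx_state;   /* 0 = free, 1 = locked (assumed convention) */
            pid_t mx_owner;
        } mutex_t;

        void mutex_lock(mutex_t *m, pid_t owner)
        {
            /* Atomically write 1 and get the previous value back; we own
               the lock only if the previous value was 0. */
            while (__sync_lock_test_and_set(&m->mx_state, 1) != 0)
                sched_yield();   /* let other threads run while we wait */
            m->mx_owner = owner;
        }

        void mutex_unlock(mutex_t *m)
        {
            m->mx_owner = 0;
            __sync_lock_release(&m->mx_state);   /* atomic store of 0 with release semantics */
        }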

    Read the article

  • Mixed Mode C++ DLL function call failure when app launched from network share. Called from unmanaged code

    - by Steve
    A mixed-mode DLL called from a native C application fails to load: "An unhandled exception of type 'System.IO.FileLoadException' occurred in Unknown Module. Additional information: Could not load file or assembly 'XXSharePoint, Version=0.0.0.0, Culture=neutral, PublicKeyToken=e0fbc95fd73fff47' or one of its dependencies. Failed to grant minimum permission requests. (Exception from HRESULT: 0x80131417)". My environment is: a native C application calling a mixed-mode C++ DLL, which then loads a C# DLL. This works correctly when loaded from a local drive, but when launched from a network drive, it fails with the above message. The call to LoadLibrary succeeds, as does the GetProcAddress; the load error happens when I call the function. I have digitally signed the C application, and I've performed strong-name signing on the 2 DLLs. The PublicKeyToken in the message above does match the named DLL. I have also issued the CASPOL commands on my client to grant FullTrust to that strong-name key token. When that failed to work, I tried the CASPOL command to grant FullTrust to the URL of the network drive (including the path to my application's directory); no change in results. I tried removing all dependencies, so that there was just the initial mixed-mode DLL, and I replaced the bodies of all the functions with just a return of a "success" integer value. Results unchanged. Only when I changed it from Mixed Mode to Win32, and changed Configuration Properties > General > Common Language Runtime support from "Common Language Runtime Support" to "No Common Language Runtime Support", did calling the DLL produce the expected result (it just returned the "success" integer value).

    Read the article

  • Boggling Direct3D9 dynamic vertex buffer Lock crash/post-lock failure on Intel GMA X3100.

    - by nj
    Hi, for starters I'm a fairly seasoned graphics programmer but, as we all know, everyone makes mistakes. Unfortunately the codebase is a bit too large to start throwing sensible snippets here, and re-creating the whole situation in an isolated CPP/codebase is too tall an order, for which I am sorry; I do not have the time. I'll do my best to explain. By the way, I will of course supply specific pieces of code if someone wonders how I'm handling this or that! As with all resources in the D3DPOOL_DEFAULT pool, when the device context is taken away from you, you'll sooner or later have to reset your resources. I've built a mechanism to handle this for all relevant resources that's been working for years; but that fact notwithstanding, I've of course checked, asserted and doubted every assumption since this bug came to light. What happens is as follows: I have a rather large dynamic vertex buffer, exact size 18874368 bytes. This buffer is locked (and discarded fully, using the D3DLOCK_DISCARD flag) each frame, prior to generating dynamic geometry (isosurface-related, f.y.i.) into it. This works fine until, of course, I start to reset. It might take 1 reset, it might take 2, or it might take 5 to set off a bug that causes either an access violation on the pointer returned by the Lock() operation on the renewed resource, or a plain crash inside the D3D9 DLL's Lock() call itself -- at a somewhat similar address, but without the offset tacked on in the first case, because in that case we're somewhere halfway through writing. I've tested this on other hardware and upgraded my GMA X3100 drivers (using a MacBook with Boot Camp) to the latest ones, but I can't reproduce it on any other machine and I'm at a loss about what's wrong here. I have tried to reproduce a similar situation with a similar buffer (I've got a large scratch pad of the same type that I filled with quads) and beyond a certain number of bytes it started to behave likewise. I'm not asking for a solution here, but I'm very interested whether there are other developers who have battled with the same foe, or who can point me in some insightful direction, or ask questions that might shed light on what I may or may not be overlooking. Another interesting artifact is that the vertex buffer starts to bug out if I supply both D3DLOCK_DISCARD and D3DLOCK_NOOVERWRITE together, which, even though not very logical (you're not going to overwrite what you've just discarded), gives graphics glitches. Thanks, and any corrections are more than welcome. Niels. P.S. A friend of mine raised the valid point that it is a huge buffer for onboard video RAM, and it's being at least double- or triple-buffered internally due to its dynamic nature. On the other hand, the debug output (D3D9 debug DLL + maximum warning output) remains silent. P.S. 2: I had it tested on more machines and it still works there; it's probably a matter of circumstance: the huge, internally double/triple-buffered dynamic buffer, not a lot of memory, and drivers that don't complain when they should. Unless someone has a better suggestion, I'd still love to hear it :)
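
    For context, the per-frame discard-lock pattern under discussion looks roughly like this (a hedged sketch in C; pVB, gVertices and gVertexBytes are illustrative names, and error handling is reduced to the HRESULT check):

        /* Refill a dynamic vertex buffer each frame: D3DLOCK_DISCARD tells
           the driver the old contents are disposable, so it can hand back
           fresh memory without stalling on the GPU. */
        void *p = NULL;
        HRESULT hr = IDirect3DVertexBuffer9_Lock(pVB, 0, 0, &p, D3DLOCK_DISCARD);
        if (SUCCEEDED(hr)) {
            memcpy(p, gVertices, gVertexBytes);
            IDirect3DVertexBuffer9_Unlock(pVB);
        }

    Note that D3DLOCK_DISCARD and D3DLOCK_NOOVERWRITE make contradictory promises to the driver, which is consistent with the glitches the poster sees when passing both together.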

    Read the article

  • Gnome Classic U 11.04 trash not working, can't find my trash dir

    - by Bruce Salem
    I did an upgrade from Ubuntu 10.10 to Ubuntu 11.04. I can't upgrade further because Update Manager says that there are unsupported packages. How do I find them? A more immediate problem is that trash stopped working, even though "~/.local/share/Trash" is receiving the files removed via the file manager. The trash icon on the lower panel fails after the upgrade from Gnome 2 to Unity (I am using Gnome Classic) and says "No Such File or Directory"; File Operations says it is looking at "/". How do I reconfigure the trash icon to use my local trash dir? The trash dir is there and has the right permissions; I can "rm" the dir tree there and recreate it by moving files to the trash from the file manager.

    Read the article

  • Upgrading/Installing Demantra 7.3.1.1? Check this out!

    - by user702295
    Here is a summary of release 7.3.1.1 install/upgrade information and features:

    Data preservation setting for General Levels
    Deploying Demantra Application Server 10g
    Important upgrade information
    Known upgrade issues
    Mozilla Firefox browser
    Installer issues
    Reviewing / simulating General Level data, such as CTO Base Model Demand
    Failure rate calculation
    Demantra SSL client authentication and Java 6
    CTO functionality does not work in release 7.3.1.1 after upgrading from 7.3.0 using the 'Platform Upgrade Only' option
    User privileges and Export Worksheet to Excel
    Cookie attribute causes logging issue in worksheet
    List of bugs fixed in 7.3.1.1

    See the following for details:
    Demantra 7.3.1.1 Install / Upgrade Known Issues, Notes, Guidance, Defects, Workarounds (Doc ID 1370518.1)
    Related Documents For Demantra Version 7.3.1.1 And If Demantra Supports The Required Stacks (Doc ID 1367141.1)

    Read the article

  • Migrating from GlassFish 2.x to 3.1.x

    - by alexismp
    With clustering now available in GlassFish since version 3.1 (our Spring 2011 release), a good number of folks have been looking at migrating their existing GlassFish 2.x-based clustered environments to a more recent version to take advantage of Java EE 6, our modular design, improved SSH-based provisioning and enhanced HA performance. The GlassFish documentation set is quite extensive and has a dedicated Upgrade Guide. It obviously lists a number of small changes such as file layout on disk (mostly due to modularity), some option changes (grizzly, shoal), the removal of node agents (using SSH instead), new JPA default provider name, etc... There is even a migration tool (glassfish/bin/asupgrade) to upgrade existing domains. But really the only thing you need to know is that each module in GlassFish 3 and beyond is responsible for doing its part of the upgrade job which means that the migration is as simple as copying a 2.x domain directory to the domains/ directory and starting the server with asadmin start-domain --upgrade. Binary-compatible products eligible for such upgrades include Sun Java System Application Server 9.1 Update 2 as well as version 2.1 and 2.1.1 of Sun GlassFish Enterprise Server.
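
    To make the "copy the domain and start with --upgrade" path concrete, a minimal sketch follows; the install paths and domain name are examples only:

        # copy the old 2.x domain into the new 3.1.x install's domains/ directory
        cp -r /opt/glassfish2/domains/domain1 /opt/glassfish3/glassfish/domains/

        # start it once with --upgrade so each module migrates its part of the configuration
        /opt/glassfish3/glassfish/bin/asadmin start-domain --upgrade domain1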

    Read the article

  • What do I name this class whose sole purpose is to report failure?

    - by Blair Holloway
    In our system, we have a number of classes whose construction must happen asynchronously. We wrap the construction process in another class that derives from an IConstructor class:

        class IConstructor {
        public:
            virtual void Update() = 0;
            virtual Status GetStatus() = 0;
            virtual int GetLastError() = 0;
        };

    There's an issue with the design of the current system: the functions that create the IConstructor-derived classes are often doing additional work which can also fail. At that point, instead of getting a constructor which can be queried for an error, a NULL pointer is returned. Restructuring the code to avoid this is possible, but time-consuming. In the meantime, I decided to create a constructor class which we create and return in case of error, instead of a NULL pointer:

        class FailedConstructor : public IConstructor {
        public:
            virtual void Update() {}
            virtual Status GetStatus() { return STATUS_ERROR; }
            virtual int GetLastError() { return m_errorCode; }
        private:
            int m_errorCode;
        };

    All of the above is the setup for a mundane question: what do I name the FailedConstructor class? In our current system, FailedConstructor would indicate "a class which constructs an instance of Failed", not "a class which represents a failed attempt to construct another class". I feel like it should be named for one of the design patterns, like Proxy or Adapter, but I'm not sure which.

    Read the article

  • How to recover after a merge failure in TFS 2008?

    - by steve_d
    We recently attempted a large, "cherry-picked" merge. First we did a full merge from one child development branch into the parent Main branch, then did a full merge of the Main branch into another child development branch, then we attempted to do a cherry-pick merge from the second development branch back into Main. There were many checkins, including renames and deletes, and when it wasn't working, the person doing it ran a bunch of TFPT rollbacks. What options do we have to recover here? Things like a baseless or forced merge (roughly as sketched below)? Rolling back to a point in time and somehow trying again?
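
    For reference, this is roughly what the force and baseless variants look like from the command line (a sketch; the server paths and changeset range are placeholders):

        rem re-merge changes TFS already considers merged
        tf merge /recursive /force /version:C1001~C1050 $/Project/Dev $/Project/Main

        rem merge between branches with no recorded branch relationship
        tf merge /recursive /baseless $/Project/Dev2 $/Project/Main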

    Read the article

  • Oracle Database 11g in action: real user experiences

    - by Lajos Sárecz
    Last year Mike Dietrich held an Upgrade Workshop in Hungary as well. In the video below he shares a few interesting customer stories about why it is worth moving to Oracle Database 11g, and what benefits customers who have already been through the upgrade process have gained. If the examples from abroad are not convincing enough, in three weeks Hungarian customers will share their own 11g upgrade stories in the "Modern data centers" session of the HOUG Conference.

    Read the article

  • With NHibernate and Transaction, do I roll back on commit failure, or does it auto-rollback on a single commit?

    - by mattcodes
    I've built the following Dispose method for my Unit of Work, which essentially wraps the active NH session & transaction (the transaction is set as a variable after opening the session, so that it is not replaced if the NH session gets a new transaction after an error):

        public void Dispose()
        {
            Func<ITransaction, bool> transactionStateOkayFunc =
                trans => trans != null && trans.IsActive && !trans.WasRolledBack;

            try
            {
                if (transactionStateOkayFunc(this.transaction))
                {
                    if (HasErrored)
                    {
                        transaction.Rollback();
                    }
                    else
                    {
                        try
                        {
                            transaction.Commit();
                        }
                        catch (Exception)
                        {
                            if (transactionStateOkayFunc(transaction))
                                transaction.Rollback();
                            throw;
                        }
                    }
                }
            }
            finally
            {
                if (transaction != null)
                    transaction.Dispose();
                if (session.IsOpen)
                    session.Close();
            }
        }

    I can't help feeling that code is a little bloated. Will a transaction automatically roll back if a discrete Commit fails, in the case of non-nested transactions? Will Commit or Rollback automatically Dispose the transaction? If not, will Session.Close() automatically dispose the associated transaction?

    Read the article

  • MS Access 2003 - Failure to create MDE file: error VBA is corrupt?

    - by Justin
    OK, so this is a brand new snag I have run into. I am trying to launch a new MDE from my source MDB file, and it is locking up Access. In my MDB, I first compact and repair, and then select "create a new MDE" (just as I have done many times before). It looks like it starts the process, but it never gets to the point where it compacts at the end, and Access stops responding. After I force-close the app, I look in the folder where I am trying to create the MDE, and I see there is a new Access db1 file there. If I try to open that, it gives me an error that says the file is not found, and then it says the Visual Basic for Applications project is corrupt. The thing is, I just made a very simple adjustment to the code since last launching an MDE, and I double- and triple-checked it afterwards; it's not that, because it's just a simple "open this form and close this one" addition. I did, however, have my source MDB file on a disc that I copied to my laptop, and then tried to re-link the tables to the network drive (they were linked to other tables on my local drive so that I could develop offline). PLEASE HELP!

    Read the article

  • Why do I need two Instances in Windows Azure?

    - by BuckWoody
    Windows Azure as a Platform as a Service (PaaS) means that there are various components you can use in it to solve a problem:

    Compute "Roles" - computers running an OS and optionally IIS; you can have more than one "Instance" of a given Role
    Storage - Blobs, Tables and Queues for storage
    Other Services - things like the Service Bus, Azure Connection Services, SQL Azure and Caching

    It's important to understand that some of these services are stateless and others maintain state. Stateless means (at least in this case) that a system might disappear from one physical location and appear elsewhere. You can think of this as a cashier at the front of a store. If you're in line, a cashier might take his break, and another person might replace him. As long as the order proceeds, you as the customer aren't really affected, except for the few seconds it takes to change them out. The cashier function in this example is stateless. The Compute Role Instances in Windows Azure are stateless. To upgrade hardware, because of a fault, or for many other reasons, a Compute Role's Instance might stop on one physical server and another will pick it up. This is done through the controlling fabric that Windows Azure uses to manage the systems. It's important to note that storage in Azure does maintain state. Your data will not simply disappear: it is maintained three times in a single datacenter, and all those copies are replicated to another datacenter for safety. Going back to our example, storage is similar to the cash register itself. Even though a cashier leaves, the record of your payment is maintained. So if a Compute Role Instance can disappear and re-appear, the things running on that first Instance would stop working. If you wrote your code in a stateless way, then another Role Instance simply re-starts that transaction and keeps working, just like the other cashier in the example. But if you only have one Instance of a Role, then when that Role Instance is re-started, or when you need to upgrade your own code, you can face downtime, since there's only one. That means you should deploy at least two of each Role Instance, not only for scale to handle load, but so that the first "cashier" has someone to replace them when they disappear. It's not just a good idea: to gain the Service Level Agreement (SLA) for uptime in Azure, it's a requirement. We point this out right in the Management Portal when you deploy the application. When you deploy a Role Instance you can also set the "Upgrade Domain". Placing Roles on separate Upgrade Domains means that you have a continuous service whenever you upgrade (more on upgrades in another post). In the upgrade scenario with two Roles you have four Roles total: one Web and one Worker running the "older" code, and one of each running the new code. With at least two Instances in all those Roles, you're covered for both high availability and upgrade paths. The take-away is this: always plan for forward-facing Roles to have at least two copies. For Worker Roles that do background processing, there are ways to architect around this number, but it does affect the SLA if you have only one.
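
    The Instance count lives in the service configuration file; a minimal sketch looks like this (the service and role names are examples only):

        <?xml version="1.0" encoding="utf-8"?>
        <ServiceConfiguration serviceName="MyService"
            xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
          <Role name="WebRole1">
            <!-- at least two instances, so one can be recycled without downtime -->
            <Instances count="2" />
          </Role>
        </ServiceConfiguration>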

    Read the article

  • How to SET TIMING ON for parallel upgrades to 12c?

    - by Mike Dietrich
    Have you asked yourself how to get timings for all statements in an Oracle Database 12c upgrade? When you run the parallel upgrade via catctl.pl, the Perl script that drives the parallel upgrade in Oracle Database 12c, you may also want timings written to your logfile during execution. As catctl.pl does not offer an option for this yet, the best way to achieve it is to edit the catupses.sql script in $ORACLE/rdbms/admin, as this script gets called over and over again throughout all steps of the upgrade run. Just add the SET TIMING ON block below (marked in red in the original post) to catupses.sql and start your upgrade:

        Rem =============================================
        Rem Call Common session settings
        Rem =============================================
        @@catpses.sql

        Rem =============================================
        Rem Set Timing On during the Upgrade
        Rem =============================================
        SET TIMING ON;

        Rem =============================================
        Rem Turn off PL/SQL event used by APPS
        Rem =============================================
        ALTER SESSION SET EVENTS='10933 trace name context off';

    -Mike
    PS: This may become the default in a future patch set.

    Read the article

  • Installing Ubuntu before or after upgrading from Vista to Win 7?

    - by andresmh
    I just got a new SSD for my ThinkPad laptop, and I've just installed Vista with the factory CDs. On my old drive my main OS was Ubuntu, but I do want to keep Windows on a separate partition as a dual-boot system. I definitely want to upgrade to Windows 7, though, and I will get it in a few days. My question is: should I install Ubuntu now and then upgrade to Windows 7 in a few days? Or is that going to mess up GRUB (or something else)? If so, I'd rather wait to install Ubuntu until after I've upgraded to Windows 7. P.S. I know that probably any kind of mess made by the Windows upgrade could be fixed, but I just want to avoid wasting time.

    Read the article

  • Scala type inference failure on "? extends" in Java code

    - by oxbow_lakes
    I have the following simple Java code:

        package testj;

        import java.util.*;

        public class Query<T> {
            private static List<Object> l = Arrays.<Object>asList(1, "Hello", 3.0);
            private final Class<? extends T> clazz;

            public static Query<Object> newQuery() {
                return new Query<Object>(Object.class);
            }

            public Query(Class<? extends T> clazz) {
                this.clazz = clazz;
            }

            public <S extends T> Query<S> refine(Class<? extends S> clazz) {
                return new Query<S>(clazz);
            }

            public List<T> run() {
                List<T> r = new LinkedList<T>();
                for (Object o : l) {
                    if (clazz.isInstance(o))
                        r.add(clazz.cast(o));
                }
                return r;
            }
        }

    I can call this from Java as follows:

        Query<String> sq = Query.newQuery().refine(String.class); // NOTE: no <String>

    But if I try to do the same from Scala:

        val sq = Query.newQuery().refine(classOf[String])

    I get the following error:

        error: type mismatch
        found    : lang.this.class[scala.this.Predef.String]
        required : lang.this.class[?0] forSome { type ?0 <: ? }
        val sq = Query.newQuery().refine(classOf[String])

    This is only fixed by the insertion of the correct type parameter:

        val sq = Query.newQuery().refine[String](classOf[String])

    Why can't Scala infer this from my argument? Note: I am using Scala 2.7.

    Read the article

  • YouTube - video upload failure - unable to convert file - encoding the video wrong?

    - by Anthony
    I am using .NET to create a video uploading application. Although it's communicating with YouTube and uploading the file, the processing of that file fails. YouTube gives me the error message, "Upload failed (unable to convert video file)", which supposedly means that "your video is in a format that our converters don't recognize...". I have made attempts with two different videos, both of which upload and process fine when I do it manually. So I suspect that my code is a) not encoding the video properly and/or b) not sending my API request properly. Below is how I am constructing my API PUT request and encoding the video; any suggestions on what the error could be would be appreciated. Thanks. P.S. I'm not using the client library because my application will use the resumable upload feature; thus, I am constructing my API requests manually. Documentation: http://code.google.com/intl/ja/apis/youtube/2.0/developers_guide_protocol_resumable_uploads.html#Uploading_the_Video_File

    Code:

        // new PUT request for sending video
        WebRequest putRequest = WebRequest.Create(uploadURL);

        // set properties
        putRequest.Method = "PUT";
        putRequest.ContentType = getMIME(file); // the MIME type of the uploaded video file

        // encode video
        byte[] videoInBytes = encodeVideo(file);

        public static byte[] encodeVideo(string video)
        {
            try
            {
                byte[] fileInBytes = File.ReadAllBytes(video);
                Console.WriteLine("\nSize of byte array containing " + video + ": " + fileInBytes.Length);
                return fileInBytes;
            }
            catch (Exception e)
            {
                Console.WriteLine("\nException: " + e.Message + "\nReturning an empty byte array");
                byte[] empty = new byte[0];
                return empty;
            }
        } // encodeVideo

        // encode custom headers in a byte array
        byte[] PUTbytes = encode(putRequest.Headers.ToString());

        public static byte[] encode(string headers)
        {
            ASCIIEncoding encoding = new ASCIIEncoding();
            byte[] bytes = encoding.GetBytes(headers);
            return bytes;
        } // encode

        // entire request contains headers + binary video data
        putRequest.ContentLength = PUTbytes.Length + videoInBytes.Length;

        // send request - correct?
        sendRequest(putRequest, PUTbytes);
        sendRequest(putRequest, videoInBytes);

        public static void sendRequest(WebRequest request, byte[] encoding)
        {
            // GetRequestStream returns the stream used to send data for the HttpWebRequest
            Stream stream = request.GetRequestStream();
            try
            {
                stream.Write(encoding, 0, encoding.Length);
            }
            catch (Exception e)
            {
                Console.WriteLine("\nException writing stream: " + e.Message);
            }
        } // sendRequest

    Read the article

  • Migrating a database from 32-bit to 64-bit

    - by Mike Dietrich
    Database migrations from a 32-bit environment to a 64-bit environment on the same platform architecture (e.g. moving an Oracle 10.2.0.5 database from MS Windows XP 32-bit to MS Windows Server 2003 64-bit) do not happen that often anymore, but we still see them getting done, and there are a few things to note when making such a move. First of all, the important question is: will you upgrade your database as part of this move, yes or no? If you say "yes", then you are almost done with this topic, as we will take care of the bitness change during the upgrade. The only thing you have to take care of is OLAP, in case you are using the OLAP Option with Analytic Workspaces (AWs). Those store data in binary LOBs, and in order to move AWs from 32-bit to 64-bit you have to export your AWs prior to the move and import them later on. People who don't use OLAP don't have to take care of this. But if you say "no" (meaning no upgrade actions are involved and you keep your database version), then you have to make sure to invalidate all packages and stored code in the database before you shut down your database in the 32-bit environment, prior to moving it over. And the same rule as above applies for OLAP, once you use the OLAP Option.

    In the source environment:

        startup upgrade;    -- [or: startup migrate; for Oracle 9i]
        @?/rdbms/admin/utlirp.sql
        shutdown immediate;

    In the destination environment:

        startup upgrade;
        @?/olap/admin/xumuts.plb    -- only if the OLAP Option is installed
        @?/rdbms/admin/utlrp.sql

    The script utlirp.sql will invalidate all packages and stored code, utlrp.sql will recompile, and xumuts.plb will rebuild the OLAP Analytic Workspaces in case you have the OLAP Option installed.

    Read the article

  • High Availability

    - by mattjgilbert
    Udi Dahan presented at the UK Connected Systems User Group last night. He discussed high availability and pointed out that people often think this is purely an infrastructure challenge. However, the implications of system crashes, errors and the resulting data loss need to be considered and managed by software developers too. In addition, a system should remain both highly reliable (backwardly compatible) and available during deployments and upgrades. The argument is that you cannot be considered highly available if your system is down every time you upgrade. For our recent BizTalk 2009 upgrade we made use of our Business Continuity servers (note the name, rather than calling them Disaster Recovery servers) to ensure our clients could continue to operate while we upgraded the production BizTalk servers. Then we failed back to the newly built 2009 environment and rebuilt the BC servers. Of course, in the event of an actual disaster there was a window where one set or the other was not available to take over; however, our staging machines were already primed to switch to production settings, having been used for testing the upgrade in the first place. While not perfect (the failover between environments was not automatic and not without some minimal outage), planning the upgrade in this way meant BizTalk was online during the rebuild and upgrade project, we didn't have to rush things to get back online, and planning meant we were ready to be as available as we could be in the event of an actual disaster.

    Read the article

  • Lost support for Web Access on Verizon BlackBerry World Edition

    - by Jimsmithkka
    Hello all, I believe that some silliness has occurred with my BlackBerry after an OS upgrade. I have a 2010 BlackBerry World Edition phone, purchased off a friend who went iPhone, that at first worked with the web on the Verizon network. When I connected it to my PC to transfer contacts, it prompted for an OS upgrade, which I performed. Post-upgrade, I have found that I can no longer access any of the web services, e.g. App World, email, Twitter, or the browser; they all state that I need to upgrade my account to gain access. I had a Storm previous to this that worked fine, and at the Verizon store they told me this device is no longer supported (even though it was new in 2010), and they got me a free "upgrade" to the BlackBerry Flip. What I could use help with is finding a source stating it is discontinued, or a guide that will help me re-enable the web features. I can provide further info later if needed (I'm currently at work with the Flip; the World Edition is at home).

    Read the article
