Search Results

Search found 30896 results on 1236 pages for 'best buy'.


  • Studying computer science - what am I getting myself into?

    - by clankercrusher
    I'm a student considering the possibility of studying computer science. I've picked up programming indie games and websites as a hobby and I really enjoy it. Despite my fairly positive experience, I somehow get the feeling that computer science in the business world will be completely different from do-it-for-fun game making. Since I'm interested in the field and I'd like to study well, I want to prepare myself for the onslaught. (If that's even possible.) What are some of the most important principles I need to know if I decide to study computer science? What will I need to know about computer science that a university probably won't teach me? Is there any way I can get hands-on experience before or while I'm at a university? What am I getting myself into? P.S. Is this the right Stack Exchange site for this type of question?

    Read the article

  • Web Usage Policies at "Best Companies to Work For"

    - by Greg
    For any company that has made it onto a "Best Companies to Work For" list* in 2010 that hires programmers, what restrictions on web usage do they have in place? Does that company view unrestricted internet access as a benefit to their employees even if they might use it for non-work related reasons? If you can provide a link to back up this information, even better. (* I don't want to promote any specific publication, but they are easy enough to find.) My hypothesis is that a company that aspires to be one of the best needs to provide fairly unrestricted internet access and I'm trying to test it.

    Read the article

  • Adding complexity by generalising: how far should you go?

    - by marcog
    Reference question: http://stackoverflow.com/questions/4303813/help-with-interview-question The above question asked to solve a problem for an NxN matrix. While there was an easy solution, I gave a more general solution to solve the more general problem for an NxM matrix. A handful of people commented that this generalisation was bad because it made the solution more complex. One such comment is voted +8. Putting aside the hard-to-explain voting effects on SO, there are two types of complexity to be considered here: runtime complexity, i.e., how fast the code runs, and code complexity, i.e., how difficult the code is to read and understand. The question of runtime complexity is something that requires a better understanding of the input data today and what it might look like in the future, taking the various growth factors into account where necessary. The question of code complexity is the one I'm interested in here. By generalising the solution, we avoid having to rewrite it in the event that the constraints change. However, at the same time it can often complicate the code. In the reference question, the code for NxN is easy to understand for any competent programmer, but the NxM case (unless documented well) could easily confuse someone coming across the code for the first time. So, my question is this: where should you draw the line between generalising and keeping the code easy to understand?
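
    As a purely hypothetical illustration of that tradeoff (this is not the problem from the linked question), consider rotating a matrix 90 degrees clockwise: the square-only version can work in place with a tight four-way swap, while the general NxM version needs a freshly allocated MxN array and extra index bookkeeping, which takes a moment longer to read.

        // Hypothetical illustration only -- not the problem from the linked question.
        public class RotateSketch {

            // Square-only version: rotates 90 degrees clockwise in place (no extra array).
            static void rotateSquare(int[][] m) {
                int n = m.length;
                for (int layer = 0; layer < n / 2; layer++) {
                    for (int i = layer; i < n - 1 - layer; i++) {
                        int top = m[layer][i];
                        m[layer][i] = m[n - 1 - i][layer];                 // left   -> top
                        m[n - 1 - i][layer] = m[n - 1 - layer][n - 1 - i]; // bottom -> left
                        m[n - 1 - layer][n - 1 - i] = m[i][n - 1 - layer]; // right  -> bottom
                        m[i][n - 1 - layer] = top;                         // top    -> right
                    }
                }
            }

            // General N x M version: must allocate a new M x N array and track both dimensions.
            static int[][] rotateGeneral(int[][] m) {
                int rows = m.length, cols = m[0].length;
                int[][] out = new int[cols][rows];
                for (int r = 0; r < rows; r++)
                    for (int c = 0; c < cols; c++)
                        out[c][rows - 1 - r] = m[r][c];
                return out;
            }

            public static void main(String[] args) {
                int[][] square = {{1, 2}, {3, 4}};
                rotateSquare(square);                                             // becomes {{3, 1}, {4, 2}}
                int[][] rect = rotateGeneral(new int[][]{{1, 2, 3}, {4, 5, 6}});  // 2x3 -> 3x2
                System.out.println(square[0][0] + " " + rect[0][0]);              // prints "3 4"
            }
        }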

    Read the article

  • Guaranteed Restore Points as Fallback Method

    - by Mike Dietrich
    Thanks to the great audience yesterday in the Upgrade & Migration Workshop in Utrecht. That was really fun, and I was amazed by our new facilities (and the "wellness" lights surrounding the plenum room's walls). And another reason why I like to do these workshops is that I often learn new things from you. So credits here to Rick van Ek, who highlighted the following topic to me. Yesterday (and in some previous workshops) I did mention during the discussion about Fallback Strategies that you'll have to switch on Flashback Database beforehand to create a guaranteed restore point in case you encounter an issue during the database upgrade. I knew that since Oracle Database 11.2 we've made it possible to switch Flashback Database on without taking the database into MOUNT status (switching it off while the database is open has always been possible in all releases). But before Oracle Database 11.2 that did require MOUNT status. SQL> create restore point rp1 guarantee flashback database ; create restore point rp1 guarantee flashback database * ERROR at line 1: ORA-38784: Cannot create restore point 'RP1'. ORA-38787: Creating the first guaranteed restore point requires mount mode when flashback database is off. But Rick did mention that I won't need to switch Flashback Database on to create a guaranteed restore point. And he's right - in older releases I would have had to go into MOUNT state to define the restore point, which meant restarting the database. But in 11.2 that's not necessary anymore. And the same will apply when you upgrade your pre-11.2 database (e.g. an Oracle Database 10.2.0.4) to Oracle Database 11.2. As soon as you start your "old" not-yet-upgraded database in your 11.2 environment with STARTUP UPGRADE you can define a guaranteed restore point. If you tail the alert.log you'll see that the database starts the RVWR (Recovery Writer) background process - you'll just have to make sure that you've defined values for db_recovery_file_dest_size and db_recovery_file_dest. SQL> startup upgrade ORACLE instance started. Total System Global Area 417546240 bytes Fixed Size 2228944 bytes Variable Size 134221104 bytes Database Buffers 272629760 bytes Redo Buffers 8466432 bytes Database mounted. Database opened. SQL> create restore point grpt guarantee flashback database; Restore point created. SQL> drop restore point grpt; And don't forget to drop that restore point sooner or later, as it is guaranteed - and will fill up your Fast Recovery Area pretty quickly. Just on the side: in any case archivelog mode is required if you'd like to work with restore points. - Mike

    Read the article

  • Early Adopters of Oracle Enterprise Manager 12c Report Agility and Productivity Benefits

    - by Anand Akela
    Earlier this month at Oracle OpenWorld 2012, we celebrated the first anniversary of Oracle Enterprise Manager 12c. Early adopters of Oracle Enterprise Manager 12c have benefited from its federated self-service access to complete application stacks, automated provisioning, elastic scalability, metering, and charge-back capabilities. Crimson Consulting Group recently interviewed multiple early adopters of Oracle Enterprise Manager 12c and captured their findings in a white paper, "Real-World Benefits of Private Cloud: Early Adopters of Oracle Enterprise Manager 12c Report Agility and Productivity Gains". On October 25th at 10 AM Pacific Time, Kirk Bangstad from the Crimson Consulting Group will join us in a live webcast and share what he learned from the early adopters of Oracle Enterprise Manager 12c. Don't miss this chance to hear how private clouds could impact your business and to ask questions of our experts. Webcast: Real-World Benefits of Private Cloud: Early Adopters of Oracle Enterprise Manager 12c Report Agility and Productivity Benefits. Date: Thursday, October 25, 2012. Time: 10:00 AM PDT | 1:00 PM EDT. Register Today. All attendees will receive the white paper "Real-World Benefits of Private Cloud: Early Adopters of Oracle Enterprise Manager 12c Report Agility and Productivity Gains". Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • Is it good practice to optimize FPS even when it's above the lower limit that gives the illusion of movement?

    - by rraallvv
    I started at over 50 FPS on the iPhone, but now I'm below 30 FPS. I've seen most iPhone games clamped to either 60 or 30 FPS, even when 24 or less would give the illusion of movement. I've considered my limit to be a little bit over 15 FPS; in fact my physics simulation is updated at that rate (15.84 steps/s), as that is the lowest that still gives fluid movement - a bit lower gives jerky motion. Is there a practical reason to clamp FPS way above the lower limit? Update: The following image could help to clarify. I can independently set the physics simulation step, the frame rate, and the simulation update interval. My concern is why I should clamp any of those to values greater than the minimum. For instance, to conserve battery life I could just choose the lower limits, but it seems that 60 or 30 FPS are the most used values.
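
    As a rough sketch of what the question is weighing (the names and numbers below are made up, not taken from the question), a fixed physics step can be decoupled from a capped render rate, so a 15.84 steps/s simulation keeps running smoothly while rendering is clamped, for example to 30 FPS:

        // Hypothetical sketch of a game loop with a fixed physics step decoupled from the render cap.
        public class LoopSketch {
            static final double PHYSICS_STEP = 1.0 / 15.84;         // seconds per physics update, as in the question
            static final long   FRAME_NANOS  = 1_000_000_000L / 30; // cap rendering at 30 FPS

            static volatile boolean running = true;

            public static void main(String[] args) throws InterruptedException {
                double accumulator = 0;
                long previous = System.nanoTime();

                while (running) {
                    long frameStart = System.nanoTime();
                    accumulator += (frameStart - previous) / 1_000_000_000.0;
                    previous = frameStart;

                    // Run physics at its own fixed rate, independent of the frame rate.
                    while (accumulator >= PHYSICS_STEP) {
                        updatePhysics(PHYSICS_STEP);
                        accumulator -= PHYSICS_STEP;
                    }

                    // Render once per frame, interpolating between physics states.
                    render(accumulator / PHYSICS_STEP);

                    // Sleep off the rest of the frame budget instead of spinning.
                    long left = FRAME_NANOS - (System.nanoTime() - frameStart);
                    if (left > 0) Thread.sleep(left / 1_000_000, (int) (left % 1_000_000));
                }
            }

            static void updatePhysics(double dt) { /* stub: advance the simulation by dt seconds */ }
            static void render(double alpha)     { /* stub: draw, blending states by alpha */ }
        }

    Sleeping off the unused frame budget is usually the practical argument for a cap: without one the loop spins, burning CPU and battery for frames nobody can perceive.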

    Read the article

  • More than one way to skin an Audit

    - by BuckWoody
    I get asked quite a bit about auditing in SQL Server. By "audit", people mean everything from tracking logins to finding out exactly who ran a particular SELECT statement. In the really early versions of SQL Server, we didn't have a great story for very granular audits, so lots of workarounds were suggested. As time progressed, more and more audit capabilities were added to the product, and in typical database platform fashion, as we added a feature we didn't often take the others away. So now, instead of not having an option to audit actions by users, you might face the opposite problem - too many ways to audit! You can read more about the options you have for tracking users here: http://msdn.microsoft.com/en-us/library/cc280526(v=SQL.100).aspx  In SQL Server 2008, we introduced SQL Server Audit, which uses Extended Events to provide a simple way to implement high-level or granular auditing. You can read more about that here: http://msdn.microsoft.com/en-us/library/dd392015.aspx  As with any feature, you should understand what your needs are first. Auditing isn't "free" in the performance sense, so you need to make sure you're only auditing what you need to.

    Read the article

  • How should I study programming languages?

    - by gcc
    I am a student of computer engineering. I have never done any programming before, and as you can understand, I don't know how to study it or how to make my own programs. My English is weak [edited for clarity - ed], and so if you don't like the choices I list, please feel free to provide others. How should I study? How should I learn programming languages? Study completely from a book. Don't study from a book, just try writing code. A mix of the two; study from a book, then try writing code. Study half the book, then write the code by hand on paper. Listen to the teacher, then try to solve general problems (those not from any specific chapter).

    Read the article

  • Implementing Service Level Agreements in Enterprise Manager 12c for Oracle Packaged Applications

    - by Anand Akela
    Contributed by Eunjoo Lee, Product Manager, Oracle Enterprise Manager. Service Level Management, or SLM, is a key tool in the proactive management of any Oracle Packaged Application (e.g., E-Business Suite, Siebel, PeopleSoft, JD Edwards E1, Fusion Apps, etc.). The benefits of SLM are that administrators can utilize representative Application transactions, which are constantly and automatically running behind the scenes, to verify that all of the key application and technology components of an Application are available and performing to expectations. A single transaction can verify the availability and performance of the underlying Application Tech Stack in a much more efficient manner than by monitoring the same underlying targets individually. In this article, we’ll be demonstrating SLM using Siebel Applications, but the same tools and processes apply to any of the Packaged Applications mentioned above. In this demonstration, we will log into the Siebel Application, navigate to the Contacts View, update a contact phone record, and then log out. This transaction exposes availability and performance metrics of multiple Siebel Servers, multiple Components and Component Groups, and the Siebel Database - in a single unified manner. We can then monitor and manage these transactions like any other target in EM 12c, including placing pro-active alerts on them if the transaction is either unavailable or is not performing to required levels. The first step in the SLM process is recording the Siebel transaction. The following screenwatch demonstrates how to record a Siebel transaction using an EM tool called “OpenScript”. A completed recording is called a “Synthetic Transaction”. The second step in the SLM process is uploading the Synthetic Transaction into EM 12c and creating Generic Service Tests. We can create a Generic Service Test to execute our synthetic transactions at regular intervals to evaluate the performance of various business flows. As these transactions are running periodically, it is possible to monitor the performance of the Siebel Application by evaluating the performance of the synthetic transactions. The process of creating a Generic Service Test is detailed in the next screenwatch. EM 12c provides a guided workflow for all of the key creation steps, including configuring the Service Test, uploading the Synthetic Test, determining the frequency of the Service Test, establishing beacons, and selecting performance and usage metrics, just to name a few. The third and final step in the SLM process is the creation of Service Level Agreements (SLAs). Service Level Agreements allow Administrators to utilize the previously created Service Tests to specify expected service levels for Application availability, performance, and usage. SLAs can be created for different time periods and for different Service Tests. This last screenwatch demonstrates the process of creating an SLA, as well as highlighting the Dashboards and Reports that Administrators can use to monitor Service Test results. Hopefully, this article provides you with a good starting point for creating Service Level Agreements for your E-Business Suite, Siebel, PeopleSoft, JD Edwards E1, or Fusion Applications. Enterprise Manager Cloud Control 12c, with the Application Management Suites, represents a quick and easy way to implement Service Level Management capabilities at customer sites. Stay Connected: Twitter |  Facebook |  YouTube |  LinkedIn |  Google+ |  Newsletter

    Read the article

  • Financial institutions build predictive models using Oracle R Enterprise to speed model deployment

    - by Mark Hornick
    See the Oracle press release, Financial Institutions Leverage Metadata Driven Modeling Capability Built on the Oracle R Enterprise Platform to Accelerate Model Deployment and Streamline Governance, for a description of how a "unified environment for analytics data management and model lifecycle management brings the power and flexibility of the open source R statistical platform, delivered via the in-database Oracle R Enterprise engine to support open standards compliance." Through its integration with Oracle R Enterprise, Oracle Financial Services Analytical Applications provides "productivity, management, and governance benefits to financial institutions, including the ability to: (1) centrally manage and control models in a single, enterprise model repository, allowing for consistent management and application of security and IT governance policies across enterprise assets; (2) reuse models and rapidly integrate with applications by exposing models as services; (3) accelerate development with seeded models and common modeling and statistical techniques available out-of-the-box; (4) cut risk and speed model deployment by testing and tuning models with production data while working within a safe sandbox; (5) support compliance with regulatory requirements by carrying out comprehensive stress testing, which captures the effects of adverse risk events that are not estimated by standard statistical and business models - this approach supplements the modeling process and supports compliance with the Pillar I and the Internal Capital Adequacy Assessment Process stress testing requirements of the Basel II Accord; (6) improve performance by deploying and running models co-resident with data. Oracle R Enterprise engines run in database, virtually eliminating the need to move data to and from client machines, thereby reducing latency and improving security"

    Read the article

  • Best practice Java - String array constant and indexing it

    - by Pramod
    For string constants it's usual to use a class with final String values. But what's the best practice for storing a string array? I want to store different categories in a constant array, and every time a category has been selected, I want to know which category it belongs to and process based on that. Addition: To make it clearer, I have categories A, B, C, D, E, which form a constant array. Whenever a user clicks one of the items (the buttons will have those texts) I should know which item was clicked and do processing on that. I can define an enum (say cat) and every time do if clickedItem == cat.A .... else if clickedItem == cat.B .... else if .... or even register listeners for each item separately. But I wanted to know the best practice for handling these kinds of problems.
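
    A minimal sketch of the enum approach (the class and method names are made up for illustration): the button's text is mapped back to an enum constant, and the per-category behavior can live on the constant itself instead of an if/else chain.

        // Hypothetical sketch: categories as an enum, looked up from the clicked button's text.
        import java.util.Locale;

        enum Category {
            A, B, C, D, E;

            // Each constant could also carry its own data or behavior via fields/abstract methods.
            void handle() {
                System.out.println("Handling category " + name());
            }

            static Category fromLabel(String label) {
                return valueOf(label.trim().toUpperCase(Locale.ROOT));
            }
        }

        public class CategoryDemo {
            public static void main(String[] args) {
                String clickedItem = "C";                      // stand-in for the clicked button's text
                Category cat = Category.fromLabel(clickedItem);
                cat.handle();                                  // or switch (cat) { ... } for per-case logic
            }
        }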

    Read the article

  • Summer Upgrade Workshops are Open!

    - by roy.swonger
    The listing of upcoming events is located in the right sidebar of the main blog page, down below the flag counter. If you haven't checked out our schedule lately, you might be surprised at how active we will be with travel this summer. Coming up next week will be upgrade workshops in the USA (St. Louis and Minneapolis) followed by a pair in Canada (Toronto and Montreal) and then two in Europe (Brussels and Utrecht). Make your plans now to attend an upgrade workshop in your area. As you can see from the long list of planned events, it is very likely that Mike or I will be coming to your area sometime soon!

    Read the article

  • Should I use an interface when methods are only similar?

    - by Joshua Harris
    I was posed with the idea of creating an object that checks if a point will collide with a line: public class PointAndLineSegmentCollisionDetector { public void Collides(Point p, LineSegment s) { // ... } } This made me think that if I decided to create a Box object, then I would need a PointAndBoxCollisionDetector and a LineSegmentAndBoxCollisionDetector. I might even realize that I should have a BoxAndBoxCollisionDetector and a LineSegmentAndLineSegmentCollisionDetector. And, when I add new objects that can collide, I would need to add even more of these. But they all have a Collides method, so everything I learned about abstraction is telling me, "Make an interface." public interface CollisionDetector { public void Collides(Spatial s1, Spatial s2); } But now I have a function that only detects some abstract class or interface that is used by Point, LineSegment, Box, etc. So if I did this, then each implementation would have to do a type check to make sure that the types are the appropriate ones, because the collision algorithm is different for each type pairing. Another solution could be this: public class CollisionDetector { public void Collides(Point p, LineSegment s) { ... } public void Collides(LineSegment s, Box b) { ... } public void Collides(Point p, Box b) { ... } // ... } But this could end up being a huge class that seems unwieldy, although it would have simplicity in that it is only a bunch of Collides methods. This is similar to C#'s Convert class, which is nice because, although it is large, it is simple to understand how it works. This seems to be the better solution, but I thought I should open it for discussion as a wiki to get other opinions.
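
    For reference, a small sketch of the overload-based detector described above (the geometry tests are stubbed out): each pairing gets exactly one real implementation, and the symmetric overloads simply forward to it.

        // Hypothetical sketch of the overload-based detector; the actual geometry tests are stubs.
        final class Point       { double x, y; }
        final class LineSegment { Point a, b; }
        final class Box         { Point min, max; }

        final class CollisionDetector {
            boolean collides(Point p, LineSegment s) { /* stub: point-vs-segment test */ return false; }
            boolean collides(LineSegment s, Box b)   { /* stub: segment-vs-box test */   return false; }
            boolean collides(Point p, Box b) {
                return p.x >= b.min.x && p.x <= b.max.x && p.y >= b.min.y && p.y <= b.max.y;
            }

            // Symmetric pairs just forward, so each algorithm lives in exactly one place.
            boolean collides(Box b, Point p)         { return collides(p, b); }
            boolean collides(LineSegment s, Point p) { return collides(p, s); }
            boolean collides(Box b, LineSegment s)   { return collides(s, b); }
        }

    Note that Java resolves overloads from the compile-time types, so this only works when the caller knows the concrete types; if everything is held as a Spatial reference, you are back to instanceof checks or double dispatch (the visitor pattern).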

    Read the article

  • Oracle Tutor - Is Anyone Reading Your Documentation?

    - by mary.keane
    If you are responsible for documenting your business practices, wouldn't it be nice to know if anyone is using the documentation? If the employees find it useful? You might want to consider surveying the users of the documentation on a regular basis. There are a number of free survey tools online (search for "free survey tools"), and you can have a survey ready in a matter of minutes. It's as simple as gathering a list of questions and a list of email addresses. For the questions, here are some suggestions. How often do you access the policy and procedure site? How useful is the site? How easy is it to navigate the site? How often are your questions answered on the site? What suggestions do you have to make the site more useful? You may want to consider just asking a few questions each month so that employees can complete the survey in less than 5 minutes (you'll get more responses). Make sure you have several comment boxes in the survey so that the employees can give suggestions. As the users of your documentation, the employees may have some terrific ideas that will enhance the usability of your policy and procedure site. It would be great to hear your suggestions for how to survey the users of your documentation. Mary R. Keane Senior Development Manager, Oracle BPM and Tutor

    Read the article

  • Should one comment differently in functional languages

    - by Tom Squires
    I'm just getting started with functional programming and I'm wondering about the correct way to comment my code. It seems a little redundant to comment a short function, as the name and signature should already tell you everything you need to know. Commenting larger functions also seems a little redundant, since they are generally composed of smaller self-descriptive functions. What is the correct way to comment a functional program? Should I use the same approach as in imperative programming?

    Read the article

  • Android: Layouts and views or a single full screen custom view?

    - by futlib
    I'm developing an Android game, and I'm making it so that it can run on low-end devices without a GPU, so I'm using the 2D API. So far I have tried to use Android's mechanisms such as layouts and activities where possible, but I'm beginning to wonder whether it wouldn't be easier to just create a single custom view (or one per activity) and do all the work there. Here's an example of how I currently do things: I'm using a layout to display the game's background as an image view and the square game area, which is a custom view, centered in the middle. What would you say? Should I continue to use layouts where possible, or is it more common/reasonable to just use a large custom view? I'm thinking that this would probably also make it easier to port my code to other platforms.
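
    A minimal sketch of the single-custom-view approach (assuming the standard android.view.View 2D drawing API; the drawing itself is a placeholder): the background and the square game area are both drawn directly in onDraw instead of being separate layout elements.

        // Hypothetical sketch: one full-screen custom View doing all the 2D drawing.
        import android.content.Context;
        import android.graphics.Canvas;
        import android.graphics.Color;
        import android.graphics.Paint;
        import android.view.View;

        public class GameView extends View {
            private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);

            public GameView(Context context) {
                super(context);
            }

            @Override
            protected void onDraw(Canvas canvas) {
                canvas.drawColor(Color.BLACK);                // background, instead of an ImageView
                paint.setColor(Color.WHITE);
                canvas.drawRect(100, 100, 300, 300, paint);   // the square game area, drawn directly
            }
        }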

    Read the article

  • Too many heap subpools might break the upgrade

    - by Mike Dietrich
    Recently one of our new upcoming Oracle Database 11.2 reference customers upgraded their production database - a huge EBS system - from Oracle 9.2.0.8 to Oracle Database 11.2.0.2. They had tested very well, we had optimized the upgrade process, the recompilation timings etc. But when the live upgrade was run it failed in the JAVA component piece with this error: begin if initjvmaux.startstep('CREATE_JAVA_SYSTEM') then * ORA-29553: class in use: SYS.javax/mail/folder ORA-06512: at "SYS.INITJVMAUX", line 23 ORA-06512: at line 5 Support's diagnosis was pretty quick - and referred to: Bug 10165223 - ORA-29553: class in use: sys.javax/mail/folder during database upgrade. But how could this happen? Actually I don't know, as we had used the same init.ora setup on test and production. The only difference: the prod system has more CPUs and RAM. Anyway, the bug names as workarounds either decreasing the SGA to less than 1GB or decreasing the number of heap subpools to 1. Finally this query did help to diagnose the number of heap subpools: select count(distinct kghluidx) num_subpools from x$kghlu where kghlushrpool = 1; The result was 2 - so we ran the upgrade with this parameter set: _kghdsidx_count=1 And finally it did work well. One sad thing: after the upgrade failed, Support recommended restoring the whole database - which took an additional 3-4 hours. As the ORACLE SERVER component had already been upgraded successfully at the stage where the error happened, it would have been fine to go on with the manual upgrade and start the catupgrd.sql script. It would have detected that ORACLE SERVER is upgraded already and just picked up the non-upgraded components. The good news: finally I had one extra slide to add to our workshop presentation

    Read the article

  • Why do we keep using CSV?

    - by Stephen
    Why do we keep using CSV? I recently made a shift to working in the health domain, and despite the wonderful work in data transfer standards, all data transfer is in CSV, both for reporting to external organisations and for data migrations when implementing new systems. Unfortunately the use of CSV is the cause of the endless repetition of the same stupid errors, with the same waste of developer time (bad escaping, failing to handle null fields, etc.). I know we can do better, and anything between JSON and XML (depending on the instance) would be fine. (Most of the time this is data going from one MS SQL Server 2005 to another!) I feel as if each time I see this happening I am literally watching one developer waste another's time. So why do we keep shafting each other? When will we stop?
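
    A tiny illustration (made up, not from the post) of the escaping errors being complained about: a hand-rolled split on commas silently tears apart a quoted field, which is exactly the class of bug a JSON or XML parser would have rejected up front.

        // Hypothetical illustration of the classic CSV escaping trap.
        public class CsvPitfall {
            public static void main(String[] args) {
                // A record whose second field contains an embedded comma and quotes.
                String record = "123,\"Smith, John \"\"Jack\"\"\",NULL";

                // The naive approach many hand-rolled importers use:
                String[] naive = record.split(",");
                System.out.println(naive.length);   // prints 4 -- the quoted field was torn apart

                // Correct handling needs a real parser that understands RFC 4180 quoting,
                // or a format (JSON/XML) whose escaping rules are unambiguous.
            }
        }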

    Read the article

  • Using the @ in SQL Azure Connections

    - by BuckWoody
    The other day I was working with a client on an application they were changing to a hybrid architecture – some data on-premise and other data in SQL Azure and Windows Azure Blob storage. I had them make a couple of corrections - the first was that all communications to SQL Azure need to be encrypted. It’s a simple addition to the connection string, depending on the library you use. Which brought up another interesting point. They had been using something that looked like this, using the .NET provider: Server=tcp:[serverName].database.windows.net;Database=myDataBase; User ID=LoginName;Password=myPassword; Trusted_Connection=False;Encrypt=True; This includes most of the formatting needed for SQL Azure. It specifies TCP as the transport mechanism, the database name is included, Trusted_Connection is off, and encryption is on. But it needed one more change: Server=tcp:[serverName].database.windows.net;Database=myDataBase; User ID=[LoginName]@[serverName];Password=myPassword; Trusted_Connection=False;Encrypt=True; Notice the difference? It’s the User ID parameter. It includes the @ symbol and the name of the server – not the whole DNS name, just the server name itself. The developers were a bit surprised, since it had been working with the first format that just used the user name. Why did both work, and why is one better than the other? It has to do with the connection library you use. For most libraries, the user name is enough. But for some libraries (subject to change so I don’t list them here) the server name parameter isn’t sent in the way the load balancer understands, so you need to include the server name right in the login, so the system can parse it correctly. Keep in mind, the string limit for that is 128 characters – so take the @ symbol and the server name into consideration for user names. The user connection info is detailed here: http://msdn.microsoft.com/en-us/library/ee336268.aspx Upshot? Include the @servername on your connection string just to be safe. And plan for that extra space…  
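
    For what it's worth, a hedged sketch of the same convention from Java with the Microsoft JDBC driver (server name and credentials are placeholders, and the property names should be checked against your driver version): the user property likewise carries login@servername.

        // Hypothetical sketch only -- not from the article; verify property names for your driver.
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.SQLException;

        public class AzureConnect {
            public static void main(String[] args) throws SQLException {
                String url = "jdbc:sqlserver://myserver.database.windows.net:1433;"
                           + "database=myDataBase;"
                           + "user=LoginName@myserver;"   // @ plus the short server name
                           + "password=myPassword;"
                           + "encrypt=true;"              // encryption is mandatory for SQL Azure
                           + "loginTimeout=30;";
                try (Connection conn = DriverManager.getConnection(url)) {
                    System.out.println("Connected: " + !conn.isClosed());
                }
            }
        }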

    Read the article

  • Ghost in the machine

    - by GrumpyOldDBA
    Well it does relate to ghosts, in this case dbGhost, http://dbghost.com/, which is what this post is about. Ghost creates databases on the fly, something I personally don't like too much, which it then compares to a "base" database to produce release scripts. (The brief description.) As with all things, sometimes all is not well and the server is left with a number of ghost-created databases, so I have to have a job to delete these every night before backups; it's not difficult to code...(read more)

    Read the article

  • Should all, none, or some overridden methods call super?

    - by JoJo
    When designing a class, how do you decide when all overridden methods should call super or when none of the overridden methods should call super? Also, is it considered bad practice if your code logic requires a mixture of supered and non-supered methods, like the JavaScript example below? ChildClass = new Class.create(ParentClass, { /** * @Override */ initialize: function($super) { $super(); this.foo = 99; }, /** * @Override */ methodOne: function($super) { $super(); this.foo++; }, /** * @Override */ methodTwo: function($super) { this.foo--; } }); After delving into the iPhone and Android SDKs, I noticed that super must be called on every overridden method, or else the program will crash because something wouldn't get initialized. When deriving from a template/delegate, none of the methods are supered (obviously). So what exactly are these "je ne sais quoi" qualities that determine whether all, none, or some overridden methods should call super?
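
    One way to frame it, sketched below with made-up class names: a base method that sets up state the base class relies on must be super-called by every override (the Android lifecycle methods behave this way), whereas a pure template-method hook is intentionally empty and overrides owe it nothing.

        // Hypothetical sketch of the two common overriding contracts.
        abstract class Widget {
            private boolean initialized;

            // Contract 1: overrides MUST call super so base-class state gets set up.
            protected void init() {
                initialized = true;
            }

            // Contract 2: a template-method hook -- intentionally empty, no super call expected.
            protected void onDraw() { }

            final void show() {
                init();
                if (!initialized) {
                    throw new IllegalStateException("init() override did not call super.init()");
                }
                onDraw();
            }
        }

        class Button extends Widget {
            @Override protected void init() {
                super.init();          // required by contract 1
                // set up button-specific state here
            }

            @Override protected void onDraw() {
                // pure hook: no super call needed (contract 2)
            }
        }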

    Read the article

  • Should we consider code language upon design?

    - by Codex73
    Summary: This question aims to determine whether an application's usage should be a consideration when deciding upon a development language, and what factors, if any, should be taken into account when choosing that language. Application type: Web. Question: Of the following popular languages, when should we use one or the other? Languages: PHP, Ruby, Python. My initial thought is that the language shouldn't be considered as much as the framework. Things to consider for a framework are scalability, usage, load, portability, modularity and many more. Things to consider for code writing may be cost, framework stability, community, etc.

    Read the article

  • Hosting several HTTP servers on single domain name

    - by Nakilon
    Several people have been given a single server with the domain name server.company.com, where they are now supposed to host their infrastructure or temporary projects, written in different ways and even in different programming languages. How do they divide the domain? Split into subdomains: john.server.company.com, kate.server.company.com, etc. This would need a lot of admins' assistance, time, etc. -- there would be no way for John and Kate to do it themselves. Split into URL namespaces: server.company.com/john/, server.company.com/kate/, etc. Pro: they can now make a single welcome page at the root with any additional info (if they need it?). Con: each server would need to know its namespace string constant, and hrefs like / would need patching. Split into ports: server.company.com:8080, server.company.com:8081, etc., and make a single :80 welcome page. Pro: they can still make a single welcome page at :80. Con: ??? I would like to know more pros and cons for solutions 2 and 3.
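
    As a rough sketch of how options 2 and 3 are often combined in practice (the ports, paths, and prefix-stripping rule here are all assumptions for illustration): each project keeps its own port, and a small front proxy on :80 maps the path namespaces onto those ports, so John and Kate only need admin help for the initial mapping.

        // Hypothetical sketch: a tiny path-based front proxy; real setups usually use nginx or
        // Apache mod_proxy for this, but the routing idea is the same.
        import com.sun.net.httpserver.HttpExchange;
        import com.sun.net.httpserver.HttpServer;
        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.InetSocketAddress;
        import java.net.URL;

        public class FrontProxy {
            public static void main(String[] args) throws IOException {
                HttpServer front = HttpServer.create(new InetSocketAddress(80), 0); // :80 needs privileges
                front.createContext("/john/", ex -> forward(ex, 8080, "/john"));    // John's backend port
                front.createContext("/kate/", ex -> forward(ex, 8081, "/kate"));    // Kate's backend port
                front.start();
            }

            static void forward(HttpExchange ex, int port, String prefix) throws IOException {
                String path = ex.getRequestURI().getPath().substring(prefix.length()); // strip the namespace
                HttpURLConnection conn =
                        (HttpURLConnection) new URL("http://localhost:" + port + path).openConnection();
                byte[] body;
                try (InputStream in = conn.getInputStream()) { body = in.readAllBytes(); }
                ex.sendResponseHeaders(conn.getResponseCode(), body.length);
                try (OutputStream out = ex.getResponseBody()) { out.write(body); }
            }
        }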

    Read the article

  • How to learn PHP effectively?

    - by Goma
    There are dozens of bad tutorials out there that teach you bad habits, especially when we speak about PHP. I want to learn how to avoid the things that can lead me to develop inefficient web applications. I like to learn from videos, but most videos I've found on the internet are provided by people who do not follow good practices. My second option is to learn from books, but I did not find a good book for starters in PHP! It would be very helpful for me if you can tell me about your story in learning PHP. What are the things that I should avoid? How do I learn about PHP security from the beginning, to avoid having to unlearn something later on? Please provide links to books and websites that provide high quality video tutorials for PHP, and your tips for a good start!

    Read the article

  • High Availability for IaaS, PaaS and SaaS in the Cloud

    - by BuckWoody
    Outages, natural disasters and unforeseen events have proved that even in a distributed architecture, you need to plan for High Availability (HA). In this entry I'll explain a few considerations for HA within Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). In a separate post I'll talk more about Disaster Recovery (DR), since each paradigm has a different way to handle that.

    Planning for HA in IaaS: IaaS involves Virtual Machines - so in effect, an HA strategy here takes on many of the same characteristics as it would on-premises. The primary difference is that the vendor controls the hardware, so you need to verify what they do for things like local redundancy and so on from the hardware perspective. As far as what you can control and plan for, the primary factors fall into three areas: multiple instances, geographical dispersion and task-switching. In almost every cloud vendor I've studied, to ensure your application will be protected by any level of HA, you need to have at least two of the Instances (VM's) running. This makes sense, but you might assume that the vendor just takes care of that for you - they don't. If a single VM goes down (for whatever reason) then the access to it is lost. Depending on multiple factors, you might be able to recover the data, but you should assume that you can't. You should keep a sync to another location (perhaps the vendor's storage system in another geographic datacenter or to a local location) to ensure you can continue to serve your clients. You'll also need to host the same VM's in another geographical location. Everything from a vendor outage to a network path problem could prevent your users from reaching the system, so you need to have multiple locations to handle this. This means that you'll have to figure out how to manage state between the geo's. If the system goes down in the middle of a transaction, you need to figure out what part of the process the system was in, and then re-create or transfer that state to the second set of systems. If you didn't write the software yourself, this is non-trivial. You'll also need a manual or automatic process to detect the failure and re-route the traffic to your secondary location. You could flip a DNS entry (if your application can tolerate that) or invoke another process to alias the first system to the second, such as load-balancing and so on. There are many options, but all of them involve coding the state into the application layer. If you've simply moved a state-ful application to VM's, you may not be able to easily implement an HA solution.

    Planning for HA in PaaS: Implementing HA in PaaS is a bit simpler, since it's built on the concept of stateless application deployment. Once again, you need at least two copies of each element in the solution (web roles, worker roles, etc.) to remain available in a single datacenter. Also, you need to deploy the application again in a separate geo, but the advantage here is that you could work out a "shared storage" model such that state is auto-balanced across the world. In fact, you don't have to maintain a "DR" site; the alternate location can be live and serving clients, and only take on extra load if the other site is not available. In Windows Azure, you can use the Traffic Manager service to route the requests as a type of auto balancer. Even with these benefits, I recommend a second backup of storage in another geographic location. Storage is inexpensive, and that second copy can be used not only for HA but for DR.

    Planning for HA in SaaS: In Software-as-a-Service (such as Office 365, or Hadoop in Windows Azure) you have far less control over the HA solution, although you still maintain the responsibility to ensure you have it. Since each SaaS is different, check with the vendor on the solution for HA - and make sure you understand what they do and what you are responsible for. They may have no HA for that solution, or pin it to a particular geo, or perhaps they have a massive HA built in with automatic load balancing (which is often the case).

    All of these options (with the exception of SaaS) involve higher costs for the design. Do not sacrifice reliability for cost - that will always cost you more in the end. Build in the redundancy and HA at the very outset of the project - if you try to tack it on later in the process the business will push back and potentially not implement HA. References: http://www.bing.com/search?q=windows+azure+High+Availability (each type of implementation is different, so I'm routing you to a search on the topic - look for the "Patterns and Practices" results for the area in Azure you're interested in)

    Read the article
