Search Results

Search found 4808 results on 193 pages for 'reserved instances'.


  • How do I generate a level randomly?

    - by Charlton Santana
    I am currently hard coding 10 different instances like the code below, but I'd like to create many more. Instead of having the same layout for the new level, I was wondering if there is any way to generate a random X value for each block (this will be how far into the level it is). A level 100,000 pixels wide would be good enough, but if anyone knows a system to make the level go on and on, I'd like to know that too. This is basically how I define a block now (with irrelevant code removed): block = new Block(R.drawable.block, 400, platformheight); block2 = new Block(R.drawable.block, 600, platformheight); block3 = new Block(R.drawable.block, 750, platformheight); The 400 is the X position, which I'd like to place randomly through the level; the platformheight variable defines the Y position, which I don't want to change.
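
    A minimal sketch of one way to do this, reusing the Block constructor and platformheight variable from the question (LEVEL_WIDTH, MIN_GAP and MAX_GAP are illustrative values, not names from the original code):

        // Inside the game class; needs java.util.Random, java.util.List, java.util.ArrayList.
        private final Random random = new Random();

        private List<Block> generateLevel() {
            final int LEVEL_WIDTH = 100000; // total level width in pixels
            final int MIN_GAP = 150;        // smallest horizontal distance between blocks
            final int MAX_GAP = 400;        // largest horizontal distance between blocks
            List<Block> blocks = new ArrayList<Block>();
            int x = 400; // start a little way into the level
            while (x < LEVEL_WIDTH) {
                blocks.add(new Block(R.drawable.block, x, platformheight));
                // advance by a random gap so the spacing differs on every run
                x += MIN_GAP + random.nextInt(MAX_GAP - MIN_GAP + 1);
            }
            return blocks;
        }

    For an endless level, the same loop can be run lazily: keep generating blocks a screen or two ahead of the player's current X position instead of building the whole list up front.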

    Read the article

  • Azure Search Preview

    - by Greg Low
    One of the things I’ve been keeping an eye on for quite a while now is the development of the Azure Search system. While it’s not a full replacement for the full-text indexing service in SQL Server on-premises as yet, it’s a really, really good start. Liam Cavanagh, Pablo Castro and the team have done a great job bringing this to the preview stage and I suspect it could be quite popular. I was very impressed by how they incorporated quite a bit of feedback I gave them early on, and I’m sure that others involved would have felt the same. There are two tiers at present. One is a free tier and has shared resources; the other is currently $125/month and has reserved resources. I would like to see another tier between these two, much the same way that Azure websites work. If you have any feedback on this, now would be a good time to make it known. In the meantime, given there is a free tier, there’s no excuse to not get out and try it. You’ll find details of it here: http://azure.microsoft.com/en-us/documentation/services/search/ I’ll be posting more info about this service, and showing examples of it during the upcoming months.

    Read the article

  • Dualboot harddisk encryption

    - by amfcosta
    I have a system with both Ubuntu 11.10 and Windows 7 and I want to encrypt the whole hard disk or at least some of my partitions. My partition table is something like this (the ones marked with * are the ones that need to be encrypted): Windows boot reserved partition *Windows system partition (ntfs) *Windows data partition (ntfs) Ubuntu root partition (ext4) *Ubuntu home partition (ext4) Ubuntu swap As I said, I don't need to encrypt the whole disk. What is the best way to accomplish this? Maybe something (TrueCrypt?) where I enter the password before the system boots so that it decrypts the whole HDD? Or maybe individual encryption using Windows-only encryption (for the Windows partitions) and Ubuntu home encryption (well, for the Ubuntu home partition)? By the way, I almost always use Ubuntu, so it would be nice if I could continue to boot Ubuntu by default but have an option to boot Windows too (like in GRUB). EDIT: I was thinking of doing this: encrypting Ubuntu home with eCryptfs (I think this is what is used to encrypt home when selected during installation), and encrypting the Windows partitions with TrueCrypt. GRUB would still be the bootloader: when I choose Ubuntu everything goes as normal (home is decrypted when logging in), and when I choose Windows the TrueCrypt password prompt shows and Windows boots.

    Read the article

  • 3+ monitors, nvidia + intel graphics

    - by gozzilli
    I have seen lots of different posts on this issue, but I cannot figure out what to do. I hope we can collect the available information in one place, so that it can be useful to others. Is it possible to have multiple (3+) monitors running on two different graphics cards? I have 1x nVidia GeForce GTX 550 with 2 DVI ports and 1x Intel integrated graphics, with 2 DVI ports. I understand that they would be running on different X server instances. Is that correct? Could someone point me in the right direction to start? On Windows it's so simple: there is nothing extra to do other than going into display preferences and activating all 3+ monitors. They can even be laid out alternating one monitor from one graphics card with another monitor from the other card.

    Read the article

  • Ubuntu Desktop does not load

    - by Niklas
    When I log in on my Ubuntu 14.04, the desktop does not load properly. This weird behavior appeared after I executed sudo apt-get update && sudo apt-get upgrade and restarted my computer. I don't know why, though. On my Ubuntu I have tried the following (nothing seems to work so far). Fix any broken packages: sudo apt-get update sudo apt-get autoclean sudo apt-get clean sudo apt-get autoremove Locate any broken packages and reinstall them: sudo apt-get install debsums sudo apt-get clean sudo debsums_init sudo debsums -cs sudo apt-get install --reinstall $(sudo dpkg -S $(sudo debsums -c) | cut -d : -f 1 | sort -u) Removing some compiz files: rm -r ~/.cache/compizconfig-1 rm -r ~/.compiz Purging NVIDIA and installing nvidia-prime: sudo apt-get install --reinstall ubuntu-desktop sudo apt-get install unity sudo apt-get purge nvidia* bumblebee* sudo apt-get install nvidia-prime sudo shutdown -r now CompizConfig Settings Manager: sudo apt-get install compizconfig-settings-manager export DISPLAY=:0 ccsm // back to the UI and enabling the Unity plugin Unity replace, which stalled after a while and did nothing afterwards: unity --replace Some dconf reset: dconf reset -f /org/compiz/ unity --reset-icons & disown Actually dconf did not work and I got this error: error: Cannot autolaunch D-Bus without X11 $DISPLAY Can anybody help me with that? This is my hardware (hope it helps in any way): Intel® Core™ i7-3770 ASUS GTX660TI-DC2-OG-2GD5 (NVIDIA driver is/was installed) ASUS P8Z77-V LX Corsair DIMM 8 GB DDR3-1600 Kit Samsung 830series 2,5" 256 GB (Windows is installed here) Seagate ST31000524AS 1 TB (3/4 are reserved for files; 1/4 is for Ubuntu (16GB swap included))

    Read the article

  • Oracle has some very helpful and free code...I think

    - by Casey
    I found that some of the code that Oracle uses is very useful so I don't have to re-invent the wheel. Given this is at the top of the file where the code in question is: /* * Copyright (c) 1997, 2006, Oracle and/or its affiliates. All rights reserved. * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER. * * This code is free software; you can redistribute it and/or modify it * under the terms of the GNU General Public License version 2 only, as * published by the Free Software Foundation. Oracle designates this * particular file as subject to the "Classpath" exception as provided * by Oracle in the LICENSE file that accompanied this code. * * This code is distributed in the hope that it will be useful, but WITHOUT * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License * version 2 for more details (a copy is included in the LICENSE file that * accompanied this code). * * You should have received a copy of the GNU General Public License version * 2 along with this work; if not, write to the Free Software Foundation, * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. * * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA * or visit www.oracle.com if you need additional information or have any * questions. */ If I leave the text intact, put it in my C++ header, and credit oracle for each method, and package the source into a static library...is it still a no-no?

    Read the article

  • Dependency injection and IOC containers in a closed project

    - by Puckl
    Does it make sense to assemble my project with dependency injection containers if I am the only one who will use the code of that project? The question came up when I read this IoC article: http://martinfowler.com/articles/injection.html The justification for using dependency injection in this article is that friends can reuse a class, and replace the classes it depends on with their own classes, because they get injected and not instantiated in the class. I would only use it to inject objects where they are needed instead of passing them through layers to their target. (Which is not so bad, I learned here: Is it bad practice to pass instances through several layers?) (Maybe I will reuse parts of the project, who knows, but I don't know if that is a good justification.)
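
    Fowler's examples are in Java, so here is a minimal constructor-injection sketch without any container; all the names (ReportStore, ReportService, etc.) are illustrative and not taken from the question:

        // The service depends only on an interface; the concrete implementation
        // is injected at the one place where the object graph is wired together.
        interface ReportStore {
            String load(String id);
        }

        class FileReportStore implements ReportStore {
            public String load(String id) {
                return "report " + id; // stand-in for real file access
            }
        }

        class ReportService {
            private final ReportStore store;

            ReportService(ReportStore store) { // injected, not instantiated here
                this.store = store;
            }

            String render(String id) {
                return "<html>" + store.load(id) + "</html>";
            }
        }

        public class Wiring {
            public static void main(String[] args) {
                // "Poor man's DI": wire everything by hand in a single composition root.
                ReportService service = new ReportService(new FileReportStore());
                System.out.println(service.render("42"));
            }
        }

    A container only automates this wiring; for a single-developer project, doing it by hand in one composition root already avoids passing objects down through every layer.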

    Read the article

  • Understanding the maximum hit-rate supported by a web-server

    - by SNag
    I would like to crawl a publicly available site (and one that's legal to crawl) for a personal project. From a brief trial of the crawler, I gathered that my program hits the server with a new HTTPRequest 8 times in a second. At this rate, as per my estimate, to obtain the full set of data I need about 60 full days of crawling. While the site is legal to crawl, I understand it can still be unethical to crawl at a rate that causes inconvenience to the regular traffic on the site. What I'd like to understand here is -- how high is 8 hits per second to the server I'm crawling? Could I possibly do 4 times that (by running 4 instances of my crawler in parallel) to bring the total effort down to just 15 days instead of 60? How do you find the maximum hit-rate a web-server supports? What would be the theoretical (and ethical) upper-limit for the crawl-rate so as to not adversely affect the server's routine traffic?
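
    There is no single published "maximum hit-rate" for an arbitrary server, so the practical approach is to pick a conservative target rate, honour robots.txt, and back off when the server signals overload. A minimal sketch of that idea follows; the URLs and the 2-requests-per-second target are illustrative assumptions, not measured limits:

        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;

        public class PoliteCrawler {
            private static final double TARGET_REQUESTS_PER_SECOND = 2.0;
            private static final long DELAY_MILLIS = (long) (1000 / TARGET_REQUESTS_PER_SECOND);

            public static void main(String[] args) throws Exception {
                HttpClient client = HttpClient.newHttpClient();
                String[] pages = {"https://example.com/page/1", "https://example.com/page/2"};
                for (String page : pages) {
                    HttpRequest request = HttpRequest.newBuilder(URI.create(page)).GET().build();
                    HttpResponse<String> response =
                            client.send(request, HttpResponse.BodyHandlers.ofString());
                    System.out.println(page + " -> " + response.statusCode());
                    if (response.statusCode() == 429 || response.statusCode() == 503) {
                        Thread.sleep(10 * DELAY_MILLIS); // server is struggling: back off hard
                    } else {
                        Thread.sleep(DELAY_MILLIS);      // fixed politeness delay between requests
                    }
                }
            }
        }

    Note that running 4 instances in parallel multiplies the load by 4, so each instance's delay would need to be 4 times longer to keep the overall rate unchanged.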

    Read the article

  • How to set up the C++ rule of three in a virtual base class

    - by Minion91
    I am trying to create a pure virtual base class (or simulated pure virtual). My goals: users can't create instances of BaseClass, and derived classes have to implement the default constructor, copy constructor, copy assignment operator and destructor. My attempt: class Base { public: virtual ~Base() {}; /* some pure virtual functions */ private: Base() = default; Base(const Base& base) = default; Base& operator=(const Base& base) = default; }; This gives some errors complaining that (for one) the copy constructor is private. But I don't want this mimicked constructor to be called. Can anyone give me the correct construction to do this, if this is at all possible?

    Read the article

  • How to connect to a WCF service using IP of the host machine where the service is hosted?

    - by Kumar
    I have a secured WCF service (https://<MachineName>:sslport/services) self-hosted on a machine. Different instances of the same service are deployed on different machines. From a client app, I am able to connect to these services through code, i.e. using ChannelFactory() with the same endpoint address. But if I try to access the service using the endpoint address https://<ipAddress>:sslport/services, replacing the machine name with the machine's IP address, I get an error stating "could not establish trust relationship". I know this error is caused by the SSL certificate: it could not establish a trust relationship. Are there any settings or any possibilities to make this work?

    Read the article

  • Existing Instance, Shiny New Disks

    - by merrillaldrich
    Migrating an Instance of SQL Server to New Disks I get to do something pretty entertaining this week – migrate SQL instances on a 2008 cluster from one disk array to another! Zut alors! I am so excited I can hardly contain myself, so let’s get started. (Only a DBA could love this stuff, am I right? I know.) Anyway, here’s one method of many to migrate your data. Assumption : this is a host-based migration, which just means I’m using the Windows file system to push the data from one set of SAN disks...(read more)

    Read the article

  • Few questions about Thunderbird in Ubuntu 11.10

    - by Darwell
    I am happy with Ubuntu's choice to replace Evolution with Thunderbird. However, I am having some issues with it. 1. What do you use to make Thunderbird minimize (and disappear from the launcher) when closing it? It should still be running (and give me notifications of new emails) but not bother me in the Launcher or in the list of windows, because I need free space. I tried using the FireTray extension; it works quite okay, but it's buggy - sometimes there are two instances of Thunderbird open when using it, and also Thunderbird's global menu disappears after some time of using the extension. 2. Is there a plan to integrate Thunderbird's calendars into the Date indicator in Ubuntu? I'd like to see my events when I click on the time. Thanks.

    Read the article

  • Automated backups for Windows Azure SQL Database

    - by Greg Low
    One of the questions that I've often been asked is about how you can back up databases in Windows Azure SQL Database. What we have had access to was the ability to export a database to a BACPAC. A BACPAC is basically just a zip file that contains a bunch of metadata along with a set of bcp files for each of the tables in the database. Each table in the database is exported one after the other, so this does not produce a transactionally-consistent backup at a specific point in time. To get a transactionally-consistent copy, you need a database that isn't in use. The easiest way to get a database that isn't in use is to use CREATE DATABASE AS COPY OF. This creates a new database as a transactionally-consistent copy of the database that you are copying. You can then use the export options to get a consistent BACPAC created. Previously, I've had to automate this process by myself. Given there was also no SQL Agent in Azure, I used a job in my on-premises SQL Server to do this, using a linked server configuration. Now there's a much simpler way. Windows Azure SQL Database now supports an automated export function. On the Configuration tab for the database, you need to enable the Automated Export function. You can configure how often the operation is performed for you, and which storage account will be used for the backups. It's important to consider the cost impacts of this as well. You are charged for however many databases are on your server on a given day. So if you enable a daily backup, you will double your database costs. Do not schedule the backups just before midnight UTC, as that could cause you to have three databases each day instead of one. This is a much-needed addition to the capabilities. Scott Guthrie also posted about some other notable changes today, including a preview of a new premium offering for SQL Database. In addition to the Web and Business editions, there will now be a Premium edition that has reserved (rather than shared) resources. You can read about it all in Scott's post here: http://weblogs.asp.net/scottgu/archive/2013/07/23/windows-azure-july-updates-sql-database-traffic-manager-autoscale-virtual-machines.aspx

    Read the article

  • Framework licensing question [closed]

    - by nosarious
    I have a framework I have been developing but find myself unable to work on it over the next year. I would like to make it open source in the interim to get others to use it and improve how it works. I would like to consider a licensing system that allows for multiple instances of the software for individual users (i.e., a newspaper/magazine or zine hosting the code on their own). I would like to limit it from becoming the basis of a larger hosting service right now, because it is intended to be part of a much larger hosting ecosystem which allows people to create and share their work. Right now there is no license associated with it, which is why I am not posting a link here. Any help or suggestions on how to handle licensing this code for contributions and use would be appreciated, and if anyone would like to see examples or the GitHub repository I would be happy to send it.

    Read the article

  • Rebuilding CoasterBuzz, Part IV: Dependency injection, it's what's for breakfast

    - by Jeff
    (Repost from my personal blog.) This is another post in a series about rebuilding one of my Web sites, which has been around for 12 years. I hope to relaunch soon. More: Part I: Evolution, and death to WCF Part II: Hot data objects Part III: The architecture using the "Web stack of love" If anything generally good for the craft has come out of the rise of ASP.NET MVC, it's that people are more likely to use dependency injection, and loosely couple the pieces of their applications. A lot of the emphasis on coding this way has been to facilitate unit testing, and that's awesome. Unit testing makes me feel a lot less like a hack, and a lot more confident in what I'm doing. Dependency injection is pretty straightforward. It says, "Given an instance of this class, I need instances of other classes, defined not by their concrete implementations, but their interfaces." Probably the first place a developer exercises this is when having a class talk to some kind of data repository. For a very simple example, pretend the FooService has to get some Foo. It looks like this: public class FooService {    public FooService(IFooRepository fooRepo)    {       _fooRepo = fooRepo;    }    private readonly IFooRepository _fooRepo;    public Foo GetMeFoo()    {       return _fooRepo.FooFromDatabase();    } } When we need the FooService, we ask the dependency container to get it for us. It says, "You'll need an IFooRepository in that, so let me see what that's mapped to, and put it in there for you." Why is this good for you? It's good because your FooService doesn't know or care about how you get some foo. You can stub out what the methods and properties on a fake IFooRepository might return, and test just the FooService. I don't want to get too far into unit testing, but it's the most commonly cited reason to use DI containers in MVC. What I wanted to mention is how there's another benefit in a project like mine, where I have to glue together a bunch of stuff. For example, when I have someone sign up for a new account on CoasterBuzz, I'm actually using POP Forums' new account mailer, which composes a bunch of text that includes a link to verify your account. The thing is, I want to use custom text and some other logic that's specific to CoasterBuzz. To accomplish this, I make a new class that inherits from the forum's NewAccountMailer, and override some stuff. Easy enough. Then I use Ninject, the DI container I'm using, to unbind the forum's implementation, and substitute my own. Ninject uses something called a NinjectModule to bind interfaces to concrete implementations. The forum has its own module, and then the CoasterBuzz module is loaded second. The CB module has two lines of code to swap out the mailer implementation: Unbind<PopForums.Email.INewAccountMailer>(); Bind<PopForums.Email.INewAccountMailer>().To<CbNewAccountMailer>(); Piece of cake! Now, when code asks the DI container for an INewAccountMailer, it gets my custom implementation instead. This is a lot easier to deal with than some of the alternatives. I could do some copy-paste, but then I'm not using well-tested code from the forum. I could write stuff from scratch, but then I'm throwing away a bunch of logic I've already written (in this case, stuff around e-mail, e-mail settings, mail delivery failures). There are other places where the DI container comes in handy. For example, CoasterBuzz does a number of custom things with user profiles, and special content for paid members.
It uses the forum as the core piece for managing users, so I can ask the container to get me instances of classes that do user lookups, for example, and have zero care about how the forum handles database calls, configuration, etc. What a great world to live in, compared to ten years ago. Sure, the primary interest in DI is around the "separation of concerns" and facilitating unit testing, but as your library grows and you use more open source, it starts to be the glue that pulls everything together.

    Read the article

  • MatheMagics - Guess My Age - Method 2

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved. MatheMagic – Guess My Age – Method 2. The Mathemagician stands on the stage and asks an adult to do the following: · Do the next few steps on your calculator, or the calculator in your phone, or even on a piece of paper. · Do it silently! Don’t tell me the results until I ask for them directly. · Multiply your age by 2. · Add 7 to the result. · Multiply the result by 5. · Tell me the result. I will nonetheless immediately tell you what your age is. How do I do this? Let’s do the algebra. Let A denote your age; then (2A + 7) × 5 = 10A + 35, so the result is of the three-digit form XY5. Now make two numbers out of the result: the last digit and the number before it. The last digit is obviously 5; the other 2 digits (or 3 for a centenarian) form a number, and this number is the age + 3. Example: I am 76 years old and here is what happens when I do the steps: 76 x 2 = 152, 152 + 7 = 159, 159 x 5 = 795. This is made of 79 and 5. And … 79 – 3 = 76. A note to the socially aware mathemagician – it is safer to do it with a man. The chances of a veracious answer are much, much higher! The trick may be accomplished on any 2 or 3 digit number, not just one’s age, but if you want to know your date’s age, it’s a good way to elicit it. That’s All Folks PS for more Ageless “Age” mathemagics go to www.mgsltns.com/games.htm and also here: http://geekswithblogs.net/PointsToShare/archive/2011/11/15/mathemagics---guess-my-age-method-1.aspx

    Read the article

  • Take Two: Comparing JVMs on ARM/Linux

    - by user12608080
    Although the intent of the previous article, entitled Comparing JVMs on ARM/Linux, was to introduce and highlight the availability of the HotSpot server compiler (referred to as c2) for Java SE-Embedded ARM v7,  it seems, based on feedback, that everyone was more interested in the OpenJDK comparisons to Java SE-E.  In fact there were two main concerns: The fact that the previous article compared Java SE-E 7 against OpenJDK 6 might be construed as an unlevel playing field because version 7 is newer and therefore potentially more optimized. That the generic compiler settings chosen to build the OpenJDK implementations did not put those versions in a particularly favorable light. With those considerations in mind, we'll institute the following changes to this version of the benchmarking: In order to help alleviate an additional concern that there is some sort of benchmark bias, we'll use a different suite, called DaCapo.  Funded and supported by many prestigious organizations, DaCapo's aim is to benchmark real world applications.  Further information about DaCapo can be found at http://dacapobench.org. At the suggestion of Xerxes Ranby, who has been a great help through this entire exercise, a newer Linux distribution will be used to assure that the OpenJDK implementations were built with more optimal compiler settings.  The Linux distribution in this instance is Ubuntu 11.10 Oneiric Ocelot. Having experienced difficulties getting Ubuntu 11.10 to run on the original D2Plug ARMv7 platform, for these benchmarks, we'll switch to an embedded system that has a supported Ubuntu 11.10 release.  That platform is the Freescale i.MX53 Quick Start Board.  It has an ARMv7 Coretex-A8 processor running at 1GHz with 1GB RAM. We'll limit comparisons to 4 JVM implementations: Java SE-E 7 Update 2 c1 compiler (default) Java SE-E 6 Update 30 (c1 compiler is the only option) OpenJDK 6 IcedTea6 1.11pre 6b23~pre11-0ubuntu1.11.10.2 CACAO build 1.1.0pre2 OpenJDK 6 IcedTea6 1.11pre 6b23~pre11-0ubuntu1.11.10.2 JamVM build-1.6.0-devel Certain OpenJDK implementations were eliminated from this round of testing for the simple reason that their performance was not competitive.  The Java SE 7u2 c2 compiler was also removed because although quite respectable, it did not perform as well as the c1 compilers.  Recall that c2 works optimally in long-lived situations.  Many of these benchmarks completed in a relatively short period of time.  To get a feel for where c2 shines, take a look at the first chart in this blog. The first chart that follows includes performance of all benchmark runs on all platforms.  Later on we'll look more at individual tests.  In all runs, smaller means faster.  The DaCapo aficionado may notice that only 10 of the 14 DaCapo tests for this version were executed.  The reason for this is that these 10 tests represent the only ones successfully completed by all 4 JVMs.  Only the Java SE-E 6u30 could successfully run all of the tests.  Both OpenJDK instances not only failed to complete certain tests, but also experienced VM aborts too. One of the first observations that can be made between Java SE-E 6 and 7 is that, for all intents and purposes, they are on par with regards to performance.  While it is a fact that successive Java SE releases add additional optimizations, it is also true that Java SE 7 introduces additional complexity to the Java platform thus balancing out any potential performance gains at this point.  We are still early into Java SE 7.  
We would expect further performance enhancements for Java SE-E 7 in future updates. In comparing Java SE-E to OpenJDK performance, among both OpenJDK VMs, Cacao results are respectable in 4 of the 10 tests.  The charts that follow show the individual results of those four tests.  Both Java SE-E versions do win every test and outperform Cacao in the range of 9% to 55%. For the remaining 6 tests, Java SE-E significantly outperforms Cacao in the range of 114% to 311% So it looks like OpenJDK results are mixed for this round of benchmarks.  In some cases, performance looks to have improved.  But in a majority of instances, OpenJDK still lags behind Java SE-Embedded considerably. Time to put on my asbestos suit.  Let the flames begin...

    Read the article

  • BizTalk 2009 - Messages: Last 100 Sent

    - by StuartBrierley
    Having previously talked about the lack of the traditional HAT in BizTalk 2009, the question then becomes how you replicate some of the functionality that was previously relied on. I have already covered the Last 100 Messages Received query, so what about sent messages? In BizTalk 2004 we had a query in HAT to return the messages sent in the last day. While not a direct replacement, the following query replicates some of the usefulness of that query in a BizTalk 2009 HAT-less environment. Basically we are creating a query to search for the last one hundred tracked messages that were sent by BizTalk. Coming up: Messages - last 50 suspended; Service instances - last 100.

    Read the article

  • removing an ssrs instance from a scale-out deployment

    - by Alex Bransky
    If you're like me you had at one time connected one of your Reporting Services instances to a report server database that was already in use by another instance.  This allows the instance to show up in the Scale-out Deployment section of the Reporting Services Configuration Manager.  My problem was that the server that got joined to the original server was no longer available as it had been repurposed, and when I clicked Remove Server to remove it from my scale-out it would fail because it couldn't contact the server.  After searching for a solution for quite some time I decided to look around in the report server database tables, and voila!  All I had to do was remove the old server from the Keys table.  I can't guarantee there won't be any side effects to this method, but it worked like a charm for me.

    Read the article

  • Trying to recover deleted Ubuntu partition

    - by user110984
    I made a mistake in logging into my 200 GB Ubuntu partition. I could not access Grub after that. Using a live CD I then ran Boot_Repair and apparently deleted the partition, I guess because I ran it from my 70 GB Windows partition. I can send the results of boot_info before that and of Boot_Repair. Then I ran TestDisk, which apparently found only /dev/sda - 320 GB / 298 GiB - WDC WD3200BEVT-22A23T0. (Was there any more I could have done with TestDisk? I looked at the TestDisk_Step_By_Step example and found no way forward given that no other partitions turned up.) I have run gpart and found this: /sda1 - 15 GB, /sda2 - system reserved, /sda3 - 70.15 GB, /sda4 - extended 212.84, unallocated - 209.10, /sda5 - unknown 3.74. I have been told I can recover the partition using gparted's Rescue start end command, but I don't know what to enter for start and end. [--EDIT: TestDisk Deeper Search stated that "the following partitions can't be recovered" and listed a 220-GB Linux partition 6 times. Then it stated that "The current number of heads per cylinder is 255 but the correct value may be 128" and I could try to change it in the Geometry menu (because apparently these are overlapping partitions). So should I do that?--]

    Read the article

  • High Availability for IaaS, PaaS and SaaS in the Cloud

    - by BuckWoody
    Outages, natural disasters and unforeseen events have proved that even in a distributed architecture, you need to plan for High Availability (HA). In this entry I'll explain a few considerations for HA within Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). In a separate post I'll talk more about Disaster Recovery (DR), since each paradigm has a different way to handle that. Planning for HA in IaaS IaaS involves Virtual Machines - so in effect, an HA strategy here takes on many of the same characteristics as it would on-premises. The primary difference is that the vendor controls the hardware, so you need to verify what they do for things like local redundancy and so on from the hardware perspective. As far as what you can control and plan for, the primary factors fall into three areas: multiple instances, geographical dispersion and task-switching. In almost every cloud vendor I've studied, to ensure your application will be protected by any level of HA, you need to have at least two of the Instances (VM's) running. This makes sense, but you might assume that the vendor just takes care of that for you - they don't. If a single VM goes down (for whatever reason) then the access to it is lost. Depending on multiple factors, you might be able to recover the data, but you should assume that you can't. You should keep a sync to another location (perhaps the vendor's storage system in another geographic datacenter or to a local location) to ensure you can continue to serve your clients. You'll also need to host the same VM's in another geographical location. Everything from a vendor outage to a network path problem could prevent your users from reaching the system, so you need to have multiple locations to handle this. This means that you'll have to figure out how to manage state between the geo's. If the system goes down in the middle of a transaction, you need to figure out what part of the process the system was in, and then re-create or transfer that state to the second set of systems. If you didn't write the software yourself, this is non-trivial. You'll also need a manual or automatic process to detect the failure and re-route the traffic to your secondary location. You could flip a DNS entry (if your application can tolerate that) or invoke another process to alias the first system to the second, such as load-balancing and so on. There are many options, but all of them involve coding the state into the application layer. If you've simply moved a state-ful application to VM's, you may not be able to easily implement an HA solution. Planning for HA in PaaS Implementing HA in PaaS is a bit simpler, since it's built on the concept of stateless applications deployment. Once again, you need at least two copies of each element in the solution (web roles, worker roles, etc.) to remain available in a single datacenter. Also, you need to deploy the application again in a separate geo, but the advantage here is that you could work out a "shared storage" model such that state is auto-balanced across the world. In fact, you don't have to maintain a "DR" site, the alternate location can be live and serving clients, and only take on extra load if the other site is not available. In Windows Azure, you can use the Traffic Manager service top route the requests as a type of auto balancer. Even with these benefits, I recommend a second backup of storage in another geographic location. 
Storage is inexpensive; and that second copy can be used for not only HA but DR. Planning for HA in SaaS In Software-as-a-Service (such as Office 365, or Hadoop in Windows Azure) You have far less control over the HA solution, although you still maintain the responsibility to ensure you have it. Since each SaaS is different, check with the vendor on the solution for HA - and make sure you understand what they do and what you are responsible for. They may have no HA for that solution, or pin it to a particular geo, or perhaps they have a massive HA built in with automatic load balancing (which is often the case).   All of these options (with the exception of SaaS) involve higher costs for the design. Do not sacrifice reliability for cost - that will always cost you more in the end. Build in the redundancy and HA at the very outset of the project - if you try to tack it on later in the process the business will push back and potentially not implement HA. References: http://www.bing.com/search?q=windows+azure+High+Availability  (each type of implementation is different, so I'm routing you to a search on the topic - look for the "Patterns and Practices" results for the area in Azure you're interested in)
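
    As a rough illustration of the "detect the failure and re-route the traffic" step described above, here is a minimal health-probe sketch; the endpoint URLs and timeouts are made-up values, and in Windows Azure this job is normally handled for you by the Traffic Manager service mentioned in the post:

        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;
        import java.time.Duration;

        public class FailoverProbe {
            private static final String PRIMARY = "https://primary.example.com/health";
            private static final String SECONDARY = "https://secondary.example.com/health";

            public static void main(String[] args) {
                HttpClient client = HttpClient.newBuilder()
                        .connectTimeout(Duration.ofSeconds(5))
                        .build();
                String active = isHealthy(client, PRIMARY) ? PRIMARY : SECONDARY;
                // In a real deployment this is where the DNS alias or load-balancer
                // target would be flipped; here we only report the chosen endpoint.
                System.out.println("Routing traffic to: " + active);
            }

            private static boolean isHealthy(HttpClient client, String url) {
                try {
                    HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                            .timeout(Duration.ofSeconds(5))
                            .GET()
                            .build();
                    HttpResponse<Void> response =
                            client.send(request, HttpResponse.BodyHandlers.discarding());
                    return response.statusCode() == 200;
                } catch (Exception e) {
                    return false; // unreachable counts as unhealthy
                }
            }
        }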

    Read the article

  • Internet Explorer 9 is coming Monday to a web near you

    - by brian_ritchie
    Internet Explorer 9 is finally here...well, almost. Microsoft is releasing their new browser on March 14, 2011. IE9 has a number of improvements, including: Faster, Faster, Faster. Did I mention it is faster? With the new browsers coming out from Mozilla, Google, and Microsoft, there has been a flood of speed-test coverage. Chrome has long held the JavaScript speed crown. But according to Steven J. Vaughan-Nichols over at ZDNet..."for the moment at least IE9 is actually the fastest browser I’ve tested to date." He came to this revelation after figuring out that the 32-bit version of IE9 has the new Chakra JIT (the 64-bit version doesn't). It also has a DirectX-based rendering engine so it can do cool tricks once reserved for desktop applications. Windows 7 Desktop Integration. Read my post for more details. Unfortunately, they didn't integrate my ideas...at least not yet :) Hot new UI. Ok, they "borrowed" some ideas from Chrome...but that is the best form of flattery. Standards Compliance. A real focus on HTML5 and CSS3. Definite goodness for developers. So, go get yourself some IE9 on Monday and enjoy!

    Read the article

  • My new anti-patent BSD-based license: necessary and effective? [closed]

    - by paperjam
    I am writing multimedia software in a domain that is rife with software patents. I want to open source my software but only for the benefit of those who don't play the patent game, that is enthusiasts, small companies, research projects, etc. The idea is, if my code would infringe a software patent somewhere and a company pays to license that patent, they then lose the right to use and distribute my software. Now I detest license proliferation as much as anyone but I can't find an existing OSI approved license that does this. The GPL comes close, but it only restricts distribution, not use. I want to stop someone using my software should they obtain a patent license to do so. Does another license do this job? Is the wording below unambiguous? - I don't want a legal opinion, just whether it would be interpreted as I intend. Copyright (c) <year>, <copyright holder> All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: [ three standard new-BSD conditions not shown here] * No patents are licensed from any third party in respect of redistribution or use of this software or its derivatives unless the patent license is arranged to permit free use and distribution by all. THIS SOFTWARE IS... [standard BSD disclaimer not shown here]

    Read the article

  • Confused about nova-network

    - by neo0
    I'm so sorry because this question doesn't relate to Ubuntu. I asked in the OpenStack forum but that forum is not very active, so I am hoping someone with OpenStack Nova experience can help me with my problem. I've read some explanations about nova-network and how to configure it, like this one from the wiki: http://wiki.openstack.org/UnderstandingFlatNetworking I'm confused about a detail. If all traffic from the instances must go through the nova controller node, then why do we still need the public interface on the nova-compute node? Is it necessary? What happens when a request comes from outside to an instance? For example, I have a controller node and a nova-compute node. On the nova-compute node I run an instance with a WordPress website. Then someone connects to the public IP of this instance. Does the request go directly from the router to the nova-compute node, or from the router to the controller node and then to the nova-compute node? Thank you!

    Read the article

  • XNA 4.0 Refresh AudioEngine, WaveBank and Others Not Found

    - by Peteyslatts
    I'm going through the Learning XNA 4.0 book, and unfortunately I installed XNA 4.0 Refresh. All the code up until now has worked, with the exception of me needing to remove Framework.Net and Framework.Storage. (As a side question, will this be problematic later?) The problem I'm having now is that in my Game1.cs file, I have imported all of the XNA.Framework libraries, and when I try to create instances of any of the following classes, an error pops up saying Visual Studio can't find them: AudioEngine, WaveBank, SoundBank, and Cue. I have googled around for a while, and the only solution I saw was to import Microsoft.Xna.Framework.Xact, but this doesn't seem to exist for me. Any help is much appreciated. Thanks, Peter.

    Read the article
