Search Results

Search found 23347 results on 934 pages for 'salesforce service cloud'.


  • What is the cheapest non-colocation way to serve about 10 static files at a rate of 100 megabits per second?

    - by Mark Maunder
    I've looked at Amazon S3, and it costs roughly $4,746 per month for 100 megabits/s (which translates into 31,640 gigabytes of data transferred, at a rate of $0.15 per GB). I haven't found a cheaper "cloud" option, and I'm curious if there's any other cloud hosting option out there cheaper than S3. Uptime is not an issue, because I can build failover for most things into the browser; e.g., I can use JavaScript to say "if the image didn't load, then go to this other URL instead." FYI, I'm currently using a colocation facility which is about 30% cheaper than S3, and I'm familiar with colo prices - so this question is really about "cloud" services, by which I mean services where I don't have to worry about the infrastructure.
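
    For anyone checking the arithmetic, here is a quick sketch of the figures quoted above (assuming a 30-day month and the flat $0.15/GB rate; real S3 pricing is tiered, so treat the result as an estimate):

        # Monthly transfer and cost of a sustained 100 Mbit/s from S3.
        # Assumes a 30-day month and a flat $0.15/GB rate (S3's real
        # pricing is tiered), so the result is only an estimate.
        MBIT_PER_SECOND = 100
        SECONDS_PER_MONTH = 30 * 24 * 60 * 60          # 2,592,000
        PRICE_PER_GB = 0.15

        total_megabits = MBIT_PER_SECOND * SECONDS_PER_MONTH
        gigabytes = total_megabits / 8.0 / 1024.0      # Mbit -> MB -> GB
        print("GB per month: %.0f" % gigabytes)        # ~31,641 GB
        print("Monthly cost: $%.0f" % (gigabytes * PRICE_PER_GB))  # ~$4,746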

    Read the article

  • 'ACT On' OVCA for Cloud Providers Program Launch Webcast: June 12, 2014 - 9am UKT / 10am CET / 11am EET

    - by Cinzia Mascanzoni
    We invite you to join the OVCA for Cloud Providers 'ACT On' program launch at 11am BST / 12noon CET on June 12.
    · More and more customers realize the value of shifting to a converged IT infrastructure, which is why IDC expects this market to grow 40% annually for the next two years.
    · The Oracle Virtual Compute Appliance (OVCA) with attached ZFS storage is the perfect answer to this market trend. By providing rapid application and cloud deployment, OVCA allows customers to cut capital expenditures by up to 50% and deploy key applications up to 7x faster.
    · For partners, OVCA supports their journey to consolidation, virtualization and cloud, and allows them to sell higher-value services to their customers.
    The objective of this webcast is to share with you the OVCA value proposition, help you identify the best target partners, and provide you with the enablement and demand generation content and resources. To register and for further details click here.

    Read the article

  • “I could use a little help here” or “I can do it myself, thank you” for Cloud Projects

    - by BuckWoody
    Windows Azure allows you to write code in languages within the .NET stack, as well as Java, C++, PHP, Node.js and others. Code is code - other than keeping things stateless, using a Web or Worker Role in Azure is not all that different from working with an on-premises system. However, working in a scalable, component-based, stateless architecture that can use federated security is not all that common for many developers. Some are used to owning the server, scaling up, and stateful paradigms that have a single security domain. Making the transition while trying to create a new software application, or even port a previous one, can be daunting. Sure, we have tons of free training, kits, videos, online books and more to learn on your own, but some things like architecture can be pivotal as you move along. So the question is: should you just strike out on your own for a cloud project, or get Microsoft Consulting Services or another partner to work with you on your first one? I use a few decision points to help guide the projects I assist in. Note: I'm a huge fan of having help that ends up giving you training and leaves you in charge. If you do engage with someone to help you, make sure you keep this clear and take more and more ownership yourself as the project progresses.

    How much time do you have? Usually the first thing I ask about is the timeline for the project. It doesn't matter how skilled you are: if you have a short window to get things done, it's better to get help - especially if this is your first cloud project. Having someone who knows the platform well can save you amazing amounts of time. If you have longer, then start with the training in the link above and, once you feel confident, jump in.

    How complex is the project? If there are a lot of moving parts, it's best to engage a partner. The reason is that certain interactions - particularly things like Service Bus or data integration - can be quite different from what you may have encountered before.

    How many people do you have? I have a "pizza rule" about projects I've used in my career: if it takes more than two pizzas to feed everyone on the project, it's too big and will fail. That being said, one developer and a one-week deadline does not a good project make, usually. It's best to have at least one architect (or someone in that role) guiding the project along, and at least two developers working on a cloud project. That's a generalization, of course, since I've seen great software on Azure with one developer writing code all by herself, but for more complex projects, more (to a point) is better. The nice thing about bringing on a partner is that you don't have to hire them full time; they help you and then they go away.

    How critical is the project? There's no shame in using some help. If the platform is new, if the project is large and complex, and if it is critical to the business, you should engage a partner. That's regardless of cloud or anything else - get some help. You don't want to hit your company's bottom line in a negative way, but you have to innovate and gain them a competitive advantage. Do your research, make sure the partner is qualified to help you, and get it done.

    Don't let these questions scare you off. There are lots of projects you can implement on Windows and SQL Azure with nothing other than the Software Development Kit (SDK) that you get for free with Windows Azure. And assistance comes in many forms: sometimes just phone support, a friend you can ask, Microsoft Consulting Services, or any of our great partners. You can get help on just the architecture piece, or have them show you how to write the code. They'll get involved as little or as much as you like.

    Read the article

  • Debugging Visual Studio 2010 Unit Test and WCF Service in one IDE instance

    - by Dr.HappyPants
    I have created a WCF service in Visual Studio 2010, along with some supporting assemblies. I have also created a test project which contains multiple unit tests for the service and the supporting assemblies. Right now I have them all in one solution, with the test project having a service reference (http) to the WCF service. If I debug the WCF service and select "Run Checked Tests" in a test list I created, I can debug the WCF service without a problem. Note: I cannot select "Debug Checked Tests" while debugging the WCF service (because the IDE is already debugging?). If I open the test project in another instance of VS 2010, debug the WCF service, and then select "Debug Checked Tests", I can debug both my tests and the WCF service. However - and this is my question - I would like to be able to debug my tests and my service in a single IDE instance. Is this possible?

    Read the article

  • Oracle Systems and Solutions at CloudExpo NY 2012

    - by ferhat
    Oracle's Larry Ellison and Mark Hurd just unveiled the industry's broadest cloud strategy on June 6, with services based on industry standards and 100+ enterprise applications live in the cloud today - the broadest strategy to support your journey to the cloud, on any path you choose, at any pace your business requires. This is great assurance for your journey into the clouds and, at the same time, quite a temptation, don't you think? We will be at the Cloud Expo conference taking place June 11-14 in New York. Oracle is a Platinum Plus sponsor of the 10th International Cloud Computing Conference & Expo 2012 East, and is also glad to offer complimentary VIP Gold Passes to the conference. We wish everyone a great and productive time with all the fellow cloudsters. We, the systems solutions group at Oracle, have prepared the Oracle Optimized Solution for Enterprise Cloud Infrastructure to help you start your Infrastructure-as-a-Service effort with ease, confidence, speed, and savings. In this solution we are now bringing together the power of Oracle Solaris and SPARC T4 servers. We will be at the Cloud Boot Camp on Wednesday, June 13 discussing how this combination can maximize return on investment and help organizations manage costs for their existing infrastructures or for new enterprise cloud infrastructure designs. We will also be at Expo booth #511 throughout the conference. Join us for the keynote, general session, and technical sessions with Oracle:
    · Keynote Session: A Pragmatic Journey to the Cloud - Tuesday, June 12, 2012
    · General Session: Oracle Cloud - An Enterprise Cloud for Business-Critical Applications - Monday, June 11, 2012
    · Conference Session: Accelerate Enterprise Cloud Deployment and Gain Total Cloud Control - Monday, June 11, 2012
    · Conference Session: The Java EE 7 Platform: Developing for the Cloud - Monday, June 11, 2012
    · Conference Session: Integrating Big Data into Your Data Center: A Big Data Reference Architecture - Monday, June 11, 2012
    · Conference Session: Borderless Applications in the Cloud with Oracle VM and Oracle Virtual Assembly Builder - Tuesday, June 12, 2012
    · Conference Session: Building a Private, Public, or Hybrid Cloud? Simplify Your Cloud with Oracle's Complete Cloud Solution - Tuesday, June 12, 2012
    · Cloud Boot Camp: Building Private IaaS with Oracle Solaris and SPARC - Wednesday, June 13, 2012

    Read the article

  • VPN issue: SSTP Service service started and then stopped

    - by Ampersand
    When I was trying to set up a VPN connection on my laptop running Windows 7 Ultimate, I got this error: "Network Connections: Cannot load the Remote Access Connection Manager service. Error 711: The operation could not finish because it could not start the Remote Access Connection Manager service in time. Please try the operation again." I traced through some service dependencies and discovered that the Secure Socket Tunneling Protocol Service was set to Manual. However, when I try to manually start the service, I get: "Services: The Secure Socket Tunneling Protocol Service on Local Computer started and then stopped. Some services stop automatically if they are not in use by other services or programs." Setting all the services involved to Automatic did not help; SSTP just showed Automatic and Stopped in the Services panel. I found a solution that involved booting into Safe Mode and deleting the contents of C:\Windows\System32\LogFiles\WMI\RtBackup. That worked, and I could set up a VPN connection, but only until I rebooted again. TL;DR: I'm looking for a way to permanently enable the Secure Socket Tunneling Protocol Service and other VPN-related services so I don't have to boot into Safe Mode and delete files every time I need to connect to a VPN.
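
    The workaround can at least be scripted rather than done by hand each time. A minimal sketch, assuming an elevated prompt and the standard service names (SstpSvc, RasMan); the locked log files may still force a Safe Mode boot before the deletion succeeds:

        # Sketch: automate the RtBackup workaround described above.
        # Assumes an elevated prompt; if the ETW logger still holds the
        # files open, the deletion only works from Safe Mode.
        import glob
        import os
        import subprocess

        RTBACKUP = r"C:\Windows\System32\LogFiles\WMI\RtBackup"
        for path in glob.glob(os.path.join(RTBACKUP, "*")):
            try:
                os.remove(path)
            except OSError as err:
                print("could not delete %s: %s" % (path, err))

        # Re-enable and start the VPN-related services.
        for svc in ("SstpSvc", "RasMan"):
            subprocess.call(["sc", "config", svc, "start=", "auto"])
            subprocess.call(["sc", "start", svc])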

    Read the article

  • Unable to send mail to hotmail from rackspace cloud

    - by Jo Erlang
    I'm having an issue sending mail from Postfix on a Rackspace Cloud instance for my domain. Hotmail says: "550 SC-001 (SNT0-MC4-F35) Unfortunately, messages from 198.101.x.x weren't sent. Please contact your Internet service provider since part of their network is on our block list." Here is the mail log:
    Sep 20 08:02:59 mydomain postfix/smtpd[1810]: disconnect from localhost[127.0.0.1]
    Sep 20 08:02:59 mydomain postfix/smtp[1814]: 59CFF4B191: to=<[email protected]>, relay=mx3.hotmail.com[65.55.92.184]:25, delay=0.19, delays=0.1/0.01/0.06/0.01, dsn=5.0.0, status=bounced (host mx3.hotmail.com[65.55.92.184] said: 550 SC-001 (SNT0-MC4-F35) Unfortunately, messages from 198.101.x.x weren't sent. Please contact your Internet service provider since part of their network is on our block list. You can also refer your provider to http://mail.live.com/mail/troubleshooting.aspx#errors. (in reply to MAIL FROM command))
    Sep 20 08:02:59 mydomain postfix/smtp[1814]: 59CFF4B191: lost connection with mx3.hotmail.com[65.55.92.184] while sending RCPT TO
    I have implemented rDNS, SPF and DKIM, and they all look fine. I have checked my IP and domain against most of the spam blacklists, and they are listed as OK (not listed as a spamming IP). What should I try next?
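
    As a side note, the blacklist check can be automated: a DNSBL lookup reverses the IP's octets and queries <reversed-ip>.<zone>, where any answer means "listed". A stdlib sketch (the zones are examples; Hotmail's own block list is separate from the public DNSBLs, so a clean result here does not clear you with Microsoft):

        # Query a few public DNSBLs: reverse the IP's octets and look up
        # <reversed-ip>.<zone>; a successful resolution means "listed".
        import socket

        ip = "198.101.0.1"  # placeholder - substitute the server's real IP
        reversed_ip = ".".join(reversed(ip.split(".")))

        for zone in ("zen.spamhaus.org", "bl.spamcop.net"):  # example zones
            query = "%s.%s" % (reversed_ip, zone)
            try:
                socket.gethostbyname(query)
                print("%s: LISTED" % zone)
            except socket.gaierror:
                print("%s: not listed" % zone)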

    Read the article

  • High availability for Windows Service under Windows Server 2003

    - by empi
    Hi. I have the following situation: I need to deploy a Windows service that listens for incoming requests on a TCP port (basically a WCF service). I have a high-availability requirement: the service must be deployed on two servers, and if the service stops (only the service, not the whole server) on one server, all requests must be redirected to the second one. To me this looks like a basic failover scenario. How can I achieve this on Windows Server 2003? Should I use Microsoft Cluster Service or Network Load Balancing? The important part is that swapping the servers should be transparent to the clients (the client must see only a single address / single host or domain name). Thanks in advance for the help.

    Read the article

  • Windows service running under network credentials doesn't autostart

    - by David Alpert
    I have a Subversion server running as a resident service on a Windows XP Pro machine. That service needs to access a secure network file share, so I used the Services - Properties - Log On tab to tell the service to run as a user who has access to the target share. That works out fine until the machine restarts, at which point the service fails to autostart. I am able to start it manually by logging in, going back to that Log On tab, and reconfirming the explicit credentials. Do I have to manually start this service under alternate credentials every time the machine reboots? Is there something else I can do to make sure that my Subversion server service autostarts with the access needed to authenticate against this network share?

    Read the article

  • What is the best cloud technology to use for MongoDB/GridFS database servers

    - by Nerian
    We are going to launch a service that will require between 1 and 2 GB of file storage per paid user. I am going to use GridFS, a module for MongoDB that allows storing large files in the database. I am pondering the different options for hosting the database, but since I am inexperienced at deployment and this is my first time with MongoDB, I need your experience. Criteria: I want to spend my time developing my core business, that is, my own application. I am a Ruby on Rails developer and do not like to mess with server configuration, hence I would like a fully managed hosting solution (but I would like to know about any other option, if you think it is worth it). It should be able to scale, cloud style, pay as you go. The lower the price, the better. So far I know of these services:
    https://mongohq.com/pricing
    https://mongomachine.com/pricing
    https://mongolab.com/about/pricing/
    http://cloudcontrol.com/add-ons/mongodb/
    They seem OK for common needs, that is, no file storage; but I am going to use GridFS, so size matters, and in price these services seem to scale quite poorly.
    MongoHQ: the largest plan's max storage is 20 GB - seems like very little for GridFS.
    MongoMachine: flat price, $2.5 per GB; I didn't find a limit - seems like a good price compared to the others.
    MongoLab: 3.984 GB max, which I don't think I will hit, so perfect; $8 per GB, quite costly.
    CloudControl: the largest plan is 20 GB; the custom service starts at 250€ plus some unspecified charge per GB.
    What is your experience with these services? Any downtimes? Other possibilities? Edit: added the meaning of GridFS.
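
    The author is on Rails, but for illustration the GridFS round trip is only a few lines in Python with pymongo; the connection URI is a placeholder for whatever string the hosted plan (MongoHQ, MongoLab, etc.) provides:

        # Store a file in GridFS and read it back, using pymongo's gridfs
        # module. The URI is a placeholder for your hosted plan's string.
        import gridfs
        from pymongo import MongoClient

        client = MongoClient("mongodb://user:pass@host:27017/mydb")  # placeholder
        fs = gridfs.GridFS(client.mydb)

        file_id = fs.put(open("report.pdf", "rb"), filename="report.pdf")
        stored = fs.get(file_id)
        print("%s, %d bytes" % (stored.filename, stored.length))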

    Read the article

  • Suggestions for open source testing tool for cloud computing

    - by vikraman
    Hi, I want to know if there is any open source testing tool for cloud computing. We have built a cloud framework with Xen, Eucalyptus, Hadoop and HBase as different layers. I am not looking at testing each of these tools separately; I want to test them from the perspective of how they fit into a cloud environment (for example, the scalability of the Xen hypervisor in handling multiple VMs). It would be great if you could suggest an open source tool for the above.

    Read the article

  • Deploying and hosting scala in the cloud?

    - by TiansHUo
    I am starting a web app with scalability as one of the top priorities. What would be the benefits of Cassandra + Scala + Lift versus the traditional LAMP stack on the cloud? From what I've read (please correct me), the cloud itself is scalable. I have never seen anyone deploy Scala on the cloud before. Is it worth the effort to learn the platform? Is it ready for production use?

    Read the article

  • Cloud computing?

    - by Suraj
    I'm writing a report advising on future technologies that a manufacturing company could use. I've highlighted a number of advanced manufacturing technologies, such as CAD. However, I want to bring cloud computing into the report just to score some extra points, and I am not sure how one would bring together cloud computing and these advanced technologies. Basically, what would be the process of integrating them into a cloud computing "environment"? Say the organisation buys a CAD package - how could they make use of cloud computing here?

    Read the article

  • Cloud – the forecast is improving

    - by Rob Farley
    There is a lot of discussion about "the cloud", and how that affects people's data stories. Today the discussion enters the realm of T-SQL Tuesday, hosted this month by Jorge Segarra.

    Over the years, companies have invested a lot in making sure that their data is good, and I mean every aspect of it – the quality of it, the security of it, the performance of it, and more. Experts such as those of us at LobsterPot Solutions have helped these companies with this, and continue to work with clients to make sure that data is a strong part of their business, not an oversight. Whether business intelligence systems are being utilised or not, every business needs to be able to rely on its data, and have confidence in it. Data should be a foundation upon which a business is built.

    In the past, data had been stored in paper-based systems. Filing cabinets stored vital information. Today, people have server rooms with storage of various kinds, recognising that filing cabinets don't necessarily scale particularly well. It's easy to 'lose' data in a filing cabinet, when you have people who need to make sure that the sheets of paper are in the right spot, and that you know how things are stored. Databases help solve that problem, but still the idea of a large filing cabinet continues, it just doesn't involve paper.

    If something happens to the physical 'filing cabinet', then the problems are larger still. Then the data itself is under threat. Many clients have generators in case the power goes out, redundant cables in case the connectivity dies, and spare servers in other buildings just in case they're required. But still they're maintaining filing cabinets.

    You see, people like filing cabinets. There's something to be said for having your data 'close'. Even if the data is not in readable form, living as bits on a disk somewhere, the idea that its home is 'in the building' is comforting to many people. They simply don't want to move their data anywhere else.

    The cloud offers an alternative to this, and the human element is an obstacle. By leveraging the cloud, companies can have someone else look after their filing cabinet. A lot of people really don't like the idea of this, partly because the administrators of the data – those people who could potentially log in with escalated rights and see more than they should be allowed to, who need to be trusted to respond if there's a problem – are now a faceless entity in the cloud. But this doesn't mean that the cloud is bad; this is simply a concern that some people may have.

    In new functionality that's on its way, we see other hybrid mechanisms that mean that people can leverage parts of the cloud with less fear. Companies can use cloud storage to hold their backup data, for example – backups that have been encrypted and are therefore not able to be read by anyone (including administrators) who doesn't have the right password. Companies can have a database instance that runs locally, but which has its data files in the cloud, complete with Transparent Data Encryption if needed. There can be a higher level of control, making the change easier to accept. Hybrid options allow people who have had fears (potentially very justifiable) to take a new look at the cloud, and to start embracing some of the benefits of the cloud (such as letting someone else take care of storage, high availability, and more) without losing the feeling of the data being close. @rob_farley

    Read the article

  • Server service fails to start, event 7023, error 1079

    - by toffitomek
    Hello. Environment: Windows Server 2008 R2, fully patched, working as a domain controller in a Windows 2003 native domain. Users started to report problems with shares, and it turned out that the Server service won't start. I've scoured Google but can't find a thing; any ideas will be appreciated. Thanks in advance. :) The service fails to start, and when I start it manually I get: "Windows could not start the Server service on SERVERNAME. Error 1079: The account specified for this service is different from the account specified for other services running in the same process." In the System event log, event 7023: "The Server service terminated with the following error: The account used is a server trust account. Use your global user account or local user account to access this server."

    Read the article

  • Are there any portable Cloud APIs that allow you to easily change cloud hosts?

    - by MindJuice
    I am creating a web-based RESTful service and want to cloud-enable it for scalability, but I don't want to get locked into one cloud provider. I'd like to be able to switch between GoGrid, Amazon EC2, etc. as pricing and needs evolve. Is there a common API to control the launch, monitoring and shutdown of cloud resources? I've seen RightScale, but their pricing is just from another planet. Similarly, is there a common API for cloud storage?
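
    One concrete answer is Apache Libcloud, a Python library that puts EC2, Rackspace, GoGrid and others behind a single compute API, so switching providers is mostly a matter of swapping the driver. A minimal sketch with placeholder credentials:

        # Apache Libcloud: one API across providers. Swapping
        # Provider.EC2 for Provider.RACKSPACE (etc.) is the only change.
        from libcloud.compute.providers import get_driver
        from libcloud.compute.types import Provider

        Driver = get_driver(Provider.EC2)           # or Provider.RACKSPACE, ...
        conn = Driver("ACCESS_KEY", "SECRET_KEY")   # placeholder credentials

        print(conn.list_nodes())                    # monitor what's running
        image = conn.list_images()[0]               # pick a real image in practice
        size = conn.list_sizes()[0]
        node = conn.create_node(name="web1", image=image, size=size)  # launch
        conn.destroy_node(node)                     # shutdown

    Libcloud also has a storage API covering S3, CloudFiles and others, which addresses the second half of the question.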

    Read the article

  • Windows service (hosting WCF service) stops immediately on start up

    - by Thr33Dii
    My question: I cannot navigate to the base address once the service is installed, because the service won't remain running (it stops immediately). Is there anything I need to do on the server or on my machine to make the baseAddress valid? Background: I'm trying to learn how to use WCF services hosted in Windows services. I have read several tutorials on how to accomplish this, and it seems very straightforward. I've looked at this MSDN article and built it step by step. I can install the service on my machine and on a server, but when I start the service, it stops immediately. I then found this tutorial, which is essentially the same thing but contains some clients that consume the WCF service. I downloaded the source code, compiled and installed it, but when I started the service, it stopped immediately. Searching SO, I found a possible solution that said to define the baseAddress when instantiating the ServiceHost, but that didn't help either. My service host is defined as:
    serviceHost = new ServiceHost(
        typeof( CalculatorService ),
        new Uri( "http://localhost:8000/ServiceModelSamples/service" ) );
    My service name, base address, and endpoints are:
    <service name="Microsoft.ServiceModel.Samples.CalculatorService"
             behaviorConfiguration="CalculatorServiceBehavior">
      <host>
        <baseAddresses>
          <add baseAddress="http://localhost:8000/ServiceModelSamples/service"/>
        </baseAddresses>
      </host>
      <endpoint address="" binding="wsHttpBinding" contract="Microsoft.ServiceModel.Samples.ICalculator"/>
      <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/>
    </service>
    I've verified the namespaces are identical. It's just getting frustrating that the tutorials seem to assume the Windows service will start as long as all the stated steps are followed. I'm missing something, and it's probably right in front of me. Please help!

    Read the article

  • Autoscale Rackspace Cloud, Scalr or DIY?

    - by Andre Jay Marcelo-Tanner
    I'm looking into creating a setup on Rackspace Cloud that will allow me to autoscale my web servers (no DB) on demand, preferably using something like response time. I've read about configuration tools like Puppet/Chef, but I'm thinking I can just launch from prepared server images that are ready to go. Is there any tool out there already that can monitor my existing nodes' response times and then launch or scale up new ones based on variables like average X load over Y time? I see there are commercial offerings like Scalr and RightScale, but how would I do this myself?
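
    For the DIY route, the control loop itself is small once launching from a prepared image is scripted. A rough sketch, where the health URL, the thresholds and launch_node() are all placeholders to adapt (the launch step would call the Rackspace API, or a library such as Apache Libcloud):

        # DIY autoscale sketch: sample response time and launch a node
        # from a prepared image when the rolling average crosses a
        # threshold. URL, thresholds and launch_node() are placeholders.
        import time
        import urllib2

        def sample_response_time(url="http://example.com/health"):
            start = time.time()
            urllib2.urlopen(url).read()
            return time.time() - start

        def launch_node():
            print("scaling up...")  # placeholder: boot a prepared image via API

        window = []
        while True:
            window.append(sample_response_time())
            window = window[-10:]                  # average X load over Y time
            if len(window) == 10 and sum(window) / len(window) > 0.5:
                launch_node()
                window = []                        # cool-down before re-checking
            time.sleep(30)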

    Read the article

  • MySql Data Loss - post mortem analysis - RackSpace Cloud Server

    - by marfarma
    After a recent 'emergency migration' of an RS cloud server, the MySQL databases on our server snapshot image proved to be days out of date relative to the backup date, and yet files that were uploaded through the affected webapp had been written to the file system. Related metadata that was written to the database was lost, but the files themselves were backed up. Once I was able to manually access the MySQL data files before the MySQL server started (the server was configured to start MySQL on boot), I could see that the update times for ib_logfile1, ib_logfile0 and ibdata1 were days old. As with this poster (mysql data loss after server crash), it's as if some caching controller had told the OS / MySQL server that it had committed data that was still in cache, and that data was lost instead of flushed. I can't quite wrap my head around how the uploaded files got written but the database data did not. I would have thought that any cache would have been flushed system-wide, rather than process by process. Any suggestions as to how this might have happened?

    Read the article

  • Private cloud solution [Eucalyptus,OpenStack, Nimbus] for Java deployments [Glassfish, Tomcat]

    - by Tadas D.
    I am interested in a way to have a private cloud that would host Glassfish (or Tomcat) servers. Which of Eucalyptus, OpenStack or Nimbus would be best for deploying Java applications? Or maybe there is something else and I am looking at the problem wrong? The way I imagine this: I should have some shared storage that I can expand by introducing new nodes to the cluster, and easy management of Glassfish instances - something like virtual machine images that I can start and stop on demand, with the images shared among nodes. I don't need a concrete step-by-step solution here, but guidelines on how this should be done are very welcome.

    Read the article

  • Cloud Computing - Multiple Physical Computers, One Logical Computer

    - by Koobz
    I know that you can set up multiple virtual machines per physical computer. I'm wondering if it's possible to make multiple physical computers behave as one logical unit. Fundamentally, the way I imagine it working is that you can throw 10 computers into a facility one day. You've got one client that requires the equivalent of two computers' worth, and 100 others that eat up the remaining 8. As demands change, you're just reallocating logical resources; maybe the two-computer client now requires a third physical system. You just add it to the cloud, and don't worry about sharding the database or migrating data over to a new server. Can it work this way? If yes, why would anyone ever do things like partition their database servers anymore? Just add more computing resources. You scale horizontally with the hardware, but your server appears to scale vertically. There's no need to modify your application's infrastructure to support multiple databases, etc.

    Read the article

  • Using AutoMySQLBackup on Rackspace Cloud

    - by xref
    Since Rackspace Cloud only allows FTP access, using AutoMySQLBackup is a little trickier, and while it is at least creating DB dumps, I get errors in the backup log:
    ###### WARNING ###### Errors reported during AutoMySQLBackup execution.. Backup failed
    Error log below..
    .../backups/automysqlbackup: line 1791: /usr/bin/find: Permission denied
    .../backups/automysqlbackup: line 1855: /usr/bin/find: Permission denied
    .../backups/automysqlbackup: line 803: /usr/bin/find: Permission denied
    .../backups/automysqlbackup: line 1972: /usr/bin/du: Permission denied
    Since files are being created, I'm assuming the failing find commands have to do with rotating out and deleting the old backups? Line 803 is: find "${CONFIG_backup_dir}/${subfolder}${subsubfolder}" -mtime +"${rotation}" -type f -exec rm {} \; Any ideas for alternatives?

    Read the article

  • How to put 1000 lightweight server applications in the cloud

    - by Dan Bird
    The company I work for sells a commercial desktop/server app that runs on any non-dedicated Windows PC or server and uses Tomcat for all interactions with the application. Customers are asking that we host their instance of the application so they don't have to run it locally on their own servers. The app is lightweight, and an average server, in theory, could handle 25-50 instances before users would notice a slowdown. However, only one instance can run per Windows instance (because the application writes to a common registry branch), so we'd need something like VMware to create 25-50 Windows instances. We know we eventually need to reprogram it to make it truly cloud-worthy, but in the meantime, what would you recommend for a server farm or similar for this? We don't have the setup to purchase our own servers, so we must use a third party. We have budgeted $500-$1,000 per year per customer for this service. Thanks in advance for your suggestions, experiences and guidance.

    Read the article

  • Setting up scripts in Amazon EC2 Cloud

    - by racket99
    Hello, I am currently running a few Perl and Python scripts on a Windows PC and would like to port them over to Amazon EC2 servers running 64-bit Linux. The scripts are basic web scrapers that go to a variety of websites, get data, and save it daily as CSV files. I would like to install these in the cloud and get them running in an automated way so that they run without my intervention. Also, given that I don't want to lose all the data if the instance crashes, I should also upload the CSV files to Amazon S3. Any idea how I can do this? I am not terribly versed in Linux, nor do I know Perl/Python well. What is the best way for me to tackle this?
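
    One common pattern: let cron fire each scraper daily, and have the script push its CSV to S3 with boto at the end of the run, so a crashed instance doesn't take the data with it. A sketch with placeholder bucket and paths:

        # End-of-run upload: copy today's CSV to S3 with boto. Bucket
        # name and paths are placeholders. Schedule the scraper itself
        # from cron, e.g.:  0 6 * * * /usr/bin/python /home/user/scrape.py
        import datetime

        import boto
        from boto.s3.key import Key

        csv_path = "/home/user/data/%s.csv" % datetime.date.today()

        conn = boto.connect_s3()   # reads AWS keys from env/boto config
        bucket = conn.get_bucket("my-scraper-archive")   # placeholder bucket
        key = Key(bucket)
        key.key = "daily/" + csv_path.split("/")[-1]
        key.set_contents_from_filename(csv_path)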

    Read the article
