Search Results

Search found 2412 results on 97 pages for 'atom computing'.

  • Eucalyptus / no bootable EBS yet - any alternatives to Eucalyptus?

    - by itgorilla
    I've read a couple of articles saying that Eucalyptus doesn't support bootable EBS volumes yet. This is a problem, since you can't make a backup from an instance the way you can on Amazon's cloud or the Rackspace Cloud. If you ever reboot the physical Eucalyptus server that's running the node controller, the instance is gone along with all your settings. Are there any alternatives to Eucalyptus, or is it the only game in town when it comes to open-source cloud?

  • VPN/OpenVPN as a cloud service

    - by 8pipe
    I am working on creating a small cloud (any number of EC2 instances that can be deployed based on load) implementing VPN as a service for the company I'm working for. This is basically a project gathering various VPN resources together under one aegis as a cloud-based service. As a user of OpenVPN I'm somewhat familiar with connecting, but I'm looking for resources to start this project. Essentially I need to be able to:
      - run a certificate authority and manage keys to distribute to coworkers
      - build an AMI that runs OpenVPN as a service
      - balance the load among machine instances as needed
    Any suggestions for tutorials, things to avoid, roadblocks I might not be seeing from a novice perspective, or just help visualizing this, are appreciated.
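
    For reference, the certificate-authority piece is small with easy-rsa 3 (a sketch; assumes easy-rsa is installed, and "server1" and "alice" are placeholder names):

        ./easyrsa init-pki
        ./easyrsa build-ca nopass                   # creates the CA key/cert under pki/
        ./easyrsa gen-dh                            # Diffie-Hellman parameters for the server
        ./easyrsa build-server-full server1 nopass
        ./easyrsa build-client-full alice nopass

    The files under pki/issued/ and pki/private/ are what get baked into the server AMI and distributed to coworkers, respectively.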

  • Custom Windows AMI for Amazon EC2

    - by cutyMiffy
    Hello: I am new to Amazon EC2 and I am planning to host a Windows-based VM on Amazon. I have a lot of fuzziness about AMIs. So my first question would be: am I able to install software and frameworks (e.g. Silverlight, .NET, etc.) on an existing Windows instance that I select? And if not, how can I create a custom AMI so that I can install the software I need before I submit it to Amazon and launch? Thanks a lot for your help :) Deeply appreciated.
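
    For what it's worth, the two paths converge: you can launch a stock Windows AMI, RDP in, install what you need, and then capture the result as your own AMI to launch from later. A minimal sketch with the AWS CLI (instance and image IDs are placeholders):

        # Capture a customized, running Windows instance as a reusable AMI.
        aws ec2 create-image \
            --instance-id i-0123456789abcdef0 \
            --name "win-dotnet-silverlight-base" \
            --description "Windows base with .NET and Silverlight preinstalled"

        # Later, launch as many copies of it as needed:
        aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type m1.small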

  • Getting rid of your server in a small business environment

    - by andygeers
    In a small business environment, is it still necessary to have a central server? Speaking for my own company (a small charity with about 12 employees), we use our server (Windows Server 2003) for the following:
      - email, via Microsoft Exchange
      - central storage
      - acting as a print server
      - user authentication / Active Directory
    There are significant costs associated with running a server like this:
      - electricity, first for the server itself and then for the air conditioning it requires (this thing pumps out a lot of heat)
      - noise (of which there is a lot)
      - IT support bills (both Windows Server and Exchange are pretty complicated, and there are many ways they can go wrong)
    I've found ways to replace many of these functions with cheaper (better?) alternatives:
      - Google Apps / Gmail is a clear win for us: we have so many spam-related problems it's not even funny, and Outlook is dog slow on our aging computers.
      - You can buy networked storage devices with built-in print servers, such as the Netgear ReadyNAS™ RND4210, that would let us store and share all of our documents and reach printers over the network.
    The only thing I can't figure out how to do away with is the authentication side of things - it seems to me that if we got rid of our server, we'd essentially have a bunch of independent PCs with no shared pool of user accounts and no central administrator. Is that right? Does that matter? Am I missing any other good reasons to keep a central server? Does anybody know of any good, cost-effective ways of achieving the same end without the expensive central server?

  • DNS settings for SaaS in the cloud?

    - by Jeremy
    I am building a SaaS product. When a user signs up for an account they must select an alias for their site: --------.getlaunchpoint.com. Right now I have an A record for *.getlaunchpoint.com that points to the server's IP address. However, with Azure I am not given an IP address; the suggested implementation is to use a CNAME, so I need to create a CNAME mapping *.getlaunchpoint.com to getlaunchpoint.cloudapp.net. GoDaddy does not support wildcard CNAMEs, and searching on Google I'm getting conflicting information... is a wildcard CNAME bad practice? I run into the same problem with Amazon EC2 if I want to use load balancers, because you cannot tie a public IP address to an Amazon load balancer; Amazon also suggests the use of a CNAME. Any help would be appreciated.
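
    For illustration, wildcard CNAMEs are legal DNS - support simply varies by provider. The record and a quick verification look like this (a sketch; "demo" is an arbitrary placeholder alias):

        # The record itself, in BIND zone-file syntax:
        #   *.getlaunchpoint.com.  3600  IN  CNAME  getlaunchpoint.cloudapp.net.

        # Verify that an arbitrary alias resolves through the wildcard:
        dig +short demo.getlaunchpoint.com CNAME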

  • EC2 instances keep becoming inaccessible via SSH - can I use an elastic load balancer to check SSH connectivity?

    - by Rick
    This is mainly an issue for my development EC2 server, as my instance keeps becoming inaccessible via SSH. It happened yesterday, so I killed that one and started a new one, and it happened again later today. The server still works and my web application is accessible in a web browser, but whenever I try to connect via SSH I get a "permission denied (public key)" error in my terminal. I am 100% sure I am doing nothing wrong, as I can create a new instance of the exact same AMI (it's a personal custom AMI), change absolutely nothing, including using the same .pem key, and then SSH into that new instance using the exact same command as before (just changing the IP address). I understand that EC2 can have issues, but having this happen every day seems a bit odd. I am using an m2.xlarge instance, so I don't know if these tend to be unstable; in the past I have used a small instance and had it running for months with no problems, which is why I find this so odd. I am looking into load balancing, but it seems the only health checks they offer are for HTTP or TCP, so I'm not sure if I can make it monitor SSH connectivity. This is important for development, as I may push a new build of an application once or twice a day and use SSH to do it. I have a designer who needs the app always accessible as he works with the front-end files to test output against the live application. Anyway, any advice / info is appreciated.
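
    On the health-check point: sshd is just a TCP listener on port 22, so a plain TCP check covers it. A sketch with the classic-ELB CLI (the load balancer name is a placeholder), plus the same probe done by hand:

        # Mark an instance healthy when port 22 accepts TCP connections:
        aws elb configure-health-check \
            --load-balancer-name dev-lb \
            --health-check Target=TCP:22,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2

        # The equivalent manual test:
        nc -z -w 5 ec2-203-0-113-10.compute-1.amazonaws.com 22 && echo "sshd reachable"

    One caveat: a TCP check only proves the port accepts connections; a "permission denied (public key)" failure like the one described here would still pass it.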

  • How do I make a Windows virtual machine replicate to another datacenter/cloud?

    - by zippy
    We have a Windows 2008 VM running IIS and SQL Server Express (it's an all-in-one web application). We need to have another copy at our secondary datacenter site. What is the best way to do this? It doesn't have to be running all the time, but it has to have an almost up-to-date copy of the current VM. I took a look at VMware Fault Tolerance, and after the heart attack at the price I started looking for another solution. If need be, I wouldn't mind copying it over to a cloud VM provider, if I can find one that lets me upload my own VMs and start them up without any conversion process.

  • Web application design with distributed servers

    - by Bonn
    I want to build a web application/server with this structure:
      - main-server
      - sub-servers:
          - transaction-server (create, update, delete)
          - view-server (view, search)
          - authentication-server
          - documents-server
          - reporting-server
          - library-server
          - e-learning-server
    The main-server acts as a host server for the sub-servers. I can add many sub-servers and connect them to the main-server (via a plug-and-play interface, maybe); a sub-server can then begin querying data from other sub-servers that are connected to the main-server. The sub-servers can be anywhere as long as they are connected to the internet. The main-server can manage all the sub-servers connected to it (query data, set permissions between sub-servers, etc.). The purpose is simple: the web application will be huge as the company grows, so I want to distribute it into small, connected, pluggable servers. My question is: does the structure above already follow a standardized method, or are there different views on it? What technologies would be needed? I need to do a lot of research before the execution plan begins. Thanks a lot.

  • Is there any way to distribute x264 encoding jobs across multiple computers (to increase the encoding speed)?

    - by Breakthrough
    Does anyone know of a current, active solution for encoding x264 video across many computers (via the network) to increase encoding FPS? I heard in the past about the project x264farm; unfortunately, it is not actively developed anymore, and isn't compatible with newer x264 builds. I'm looking for a current solution that is compatible with newer builds of x264. Just to note, I'm a Windows user, so I'm only looking for Windows solutions (or at the very least, Linux). I've also seen the ELDER distributed encoder, but the quality varies depending on the settings you use - I'd prefer a solution similar to x264farm as noted above (the documentation outlines the encoding process), but one that is compatible with new(er) x264 builds, and is preferably actively developed. Final edit: unfortunately, the bounty for this question has expired, and I haven't found a decent solution. So if at any time someone finds a new distributed encoding solution for x264 (or any H.264 codec, for that matter) - please answer this question! I'd love to discover an ideal method to make this work!

  • How to put 1000 lightweight server applications in the cloud

    - by Dan Bird
    The company I work for sells a commercial desktop/server app that runs on any non-dedicated Windows PC or server and uses Tomcat for all interactions with the application. Customers are asking that we host their instance of the application so they don't have to run it locally on their own servers. The app is lightweight, and an average server, in theory, could handle 25-50 instances before users would notice a slowdown. However, only one instance can run per Windows instance (because the application writes to a common registry branch), so we'd need something like VMware to create 25-50 Windows instances. We know we eventually need to reprogram it to make it truly cloud-worthy, but what would you recommend for a server farm or whatever for this? We don't have the setup to purchase our own servers, so we must use a third party. We have budgeted $500 - $1000 per year per customer for this service. Thanks in advance for your suggestions, experiences and guidance.

  • Is there any way to distribute x264 encoding jobs across multiple computers (to increase the encoding speed)?

    - by Breakthrough
    Does anyone know of a current, active solution for encoding x264 video across many computers (via the network) to increase encoding FPS? Brownie points for cross-platform and open source, but just so you all know, I usually use Windows. Programs that I have heard of, and why I do not believe they are suitable:
      - x264farm: not actively developed. Good interface, but it does not support two-pass encoding, and it fails with newer x264 builds.
      - ELDER: again, not actively developed, but my issue was that it didn't work with new x264 builds, and it was very difficult to configure (read: randomly stopped working).
    While I don't absolutely need a program that is being actively developed, I would like one that supports two-pass encoding and works with new(er) x264 builds. Additional information: so far, I've offered (and awarded!) two separate bounties on this question since I first posted it over two years ago, and I still haven't found a solution. What I'm looking for is basically a simple program that lets me encode x264 video using the processing power of multiple computers connected over a LAN. Furthermore, it would be nice if it worked with new(er) x264 builds and supported two-pass encoding. If at any time someone has an updated answer, or a new solution to this problem, please post it and it will be given some consideration.
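
    Absent a maintained tool, the underlying idea can be scripted by hand: split the source into chunks, encode the chunks on different machines, and join the results. A rough sketch (assumes the source is already split into chunk_0.y4m ... chunk_2.y4m, passwordless SSH to each placeholder host, and x264 on each remote PATH); note that naive chunking hurts rate control at chunk boundaries, which is precisely the problem tools like x264farm tried to solve:

        hosts=(encoder1 encoder2 encoder3)    # placeholder machine names
        for i in "${!hosts[@]}"; do
          (
            scp "chunk_${i}.y4m" "${hosts[$i]}:/tmp/" &&
            ssh "${hosts[$i]}" "x264 --preset slow --crf 20 -o /tmp/part_${i}.264 /tmp/chunk_${i}.y4m" &&
            scp "${hosts[$i]}:/tmp/part_${i}.264" .
          ) &
        done
        wait
        # Join part_0.264, part_1.264, ... back into a single stream/container afterwards.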

  • Multiple servers vs. one big server: performance

    - by pistacchio
    Hi to all! My team of developers has suggested a server structure for an upcoming project we are developing. Our structure is "logical", meaning that the various logical components of the application (it is a distributed one) rely on different servers. Some components are more critical than others and will be subjected to more load. Our proposal was to have one server per component, but the hardware guys suggested replacing the various machines with a single, bigger one running virtual servers. They're going to use blade servers. Now, I'm not an expert at all, but my question to the guys was: if we need, for example, three 2 GHz CPU / 2 GB RAM machines, and you give me one machine with three 2 GHz CPUs and 6 GB of RAM, is it the same? They told me it is. Is this accurate? What are the advantages and disadvantages of the two solutions? What are the generally accepted best practices? Could you point me to some references dealing with the problem? Thank you in advance! EDIT: some more info. The (internet / intranet) application is already layered. We have some servers in the DMZ that will expose pages to the internet, and the databases are on their own machines. What we want to split (and they want to join) are some web servers that mainly expose web services. One is a DAL that communicates with the database layer, one is our Single Sign On / User Profile application that gets called once per page, and one is a clone of what is seen on the internet, to be used on our LAN.

  • Interaction between two Clouds

    - by Snehal Masne
    I have set up Cloud-A with one [CLC+CC] machine and two [NC] machines, and another Cloud-B with the same configuration [using the Ubuntu Enterprise Cloud]. Both of them work fine individually, on the same LAN. Now, if I want to add the NC of Cloud-A to the CC of Cloud-B [in case the resources of Cloud-B are exhausted], how can I make that possible? I guess this calls for the interoperability stuff... Could you please explain what happens exactly when we ask for an instance - does the direct interaction happen between the client and the NC, or does it go through the CLC and CC? What I want to say is: say there are multiple cloud providers, and a user is subscribed to one of them, say Cloud-A, for IaaS. As the requirements are dynamic, all the resources of Cloud-A may get exhausted. There may be another Cloud-B which can provide the services, but Cloud-A can't ask the client to go to Cloud-B. So is it possible to have some coordination between these two providers to share resources mutually, keeping the client fully unaware of what's going on in the background? Please reply. I am sorry if I'm making a mistake anywhere... Thanks in advance :) Regards, www.TechProceed.com

  • Recommended drive encryption solution

    - by Chris Driver
    Hello, I will soon be purchasing a number of laptops running Windows 7 for our mobile staff. Due to the nature of our business I will need drive encryption. Windows BitLocker seems the obvious choice, but it looks like I need to purchase either the Enterprise or Ultimate edition of Windows 7 to get it. Can anyone offer suggestions on the best course of action:
      a) Use BitLocker, bite the bullet, and pay to upgrade to Enterprise/Ultimate
      b) Pay for another third-party drive encryption product that is cheaper (suggestions appreciated)
      c) Use a free drive encryption product such as TrueCrypt
    Ideally I am also interested in real-world experience from people who are using drive encryption software, and any pitfalls to look out for. Many thanks in advance...
    UPDATE: Decided to go with TrueCrypt for the following reasons:
      a) The product has a good track record
      b) I am not managing a large quantity of laptops, so integration with Active Directory, management consoles, etc. is not a huge benefit
      c) Although eks did make a good point about Evil Maid (EM) attacks, our data is not desirable enough to consider them a major factor
      d) The cost (free) is a big plus, but not the primary motivator
    The next problem I face is that imaging (Acronis/Ghost/...) of encrypted drives will not work unless I perform sector-by-sector imaging. That means an 80 GB encrypted partition creates an 80 GB image file :(

  • I need a few minutes of dedicated server time a week - not for hosting, just to convert mp3s to ogg etc.

    - by talkingnews
    I'm completely happy with my web hosting; it's just that I need to do one little thing they won't allow, and that's run an instance of SoX to convert about 30 MP3s to Ogg files, in various directories, a couple of times a week, to be done automatically in response to the detected upload of an MP3. I'm probably looking at a minute of server time over the whole week. I've had unhelpful suggestions on other forums like "why not leave your home PC on 24 hours a day and then use all your ISP bandwidth to do this", which doesn't work for me. I know that I can host files on, say, Amazon S3, but is there something similar for my needs? All it would need to do is: wget/FTP the MP3 files, convert them to Ogg, and FTP the files back to my hosting. Of course, none of this would be needed if there were such a thing as a compiled binary of SoX (or any MP3-to-Ogg converter) for CentOS that I could upload without needing root access, but I've given up asking for that one - though I'm always open to suggestions!
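
    For what it's worth, the conversion loop itself is tiny. The sketch below assumes a SoX build with MP3 read support and Ogg Vorbis write support (exactly the build availability in question here) and a placeholder upload directory:

        #!/bin/bash
        shopt -s globstar nullglob
        for f in ~/public_html/audio/**/*.mp3; do
          ogg="${f%.mp3}.ogg"
          [ -e "$ogg" ] || sox "$f" "$ogg"    # convert only files without an existing .ogg
        done

    Run from cron a couple of times a week, that covers the workflow; the open question is only where to run it, since S3 stores files but won't execute anything.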

  • Recommend a Rackspace Cloud Server API Language Binding?

    - by Alex R
    Rackspace publishes only a hard-to-use HTTP and JSON/XML-based "API" (they call it an API, but it's really a non-standard web service without a WSDL). There are dozens of open-source language bindings to choose from. I have tried three of them so far, and they're all horrible (incomplete, buggy, and/or undocumented). Can anybody recommend a language binding that is reasonably complete, well documented, and bug-free? I can use Perl, Python, PHP, or Java. My ultimate objective is to create a script/program that will provision a server, launch a process inside it, wait for the process to finish, copy the results to the local server, and destroy the remote server. What's the best choice for that? Thanks
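
    One maintained option worth evaluating is Apache Libcloud, a Python library that wraps Rackspace (among many other providers) behind a single API. At the raw HTTP level, the flow such bindings wrap looks roughly like this sketch of the legacy v1.0 auth handshake:

        # Trade credentials for a token and a management URL:
        curl -s -D - -o /dev/null \
          -H "X-Auth-User: $RS_USER" \
          -H "X-Auth-Key: $RS_KEY" \
          https://auth.api.rackspacecloud.com/v1.0
        # The response headers carry X-Auth-Token and X-Server-Management-Url;
        # servers are then created by POSTing JSON to $MANAGEMENT_URL/servers
        # (and destroyed with DELETE), passing the token as X-Auth-Token.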

  • Justification of Amazon EC2 Performance

    - by Adroidist
    I have a .jar file that implements a server which receives an image (at most 500 KB) in bytes over TCP and writes it to a file. It then runs a Sobel filter on the image and sends the result over a TCP socket to the client side. I ran it on my laptop and it was very fast, but when I put it on an Amazon EC2 m1.large instance, I found it is very slow - around 10 times slower. It might be inefficiency in the code's algorithm, but in fact my code does nothing but receive the image (like any byte file), run the Sobel algorithm, and send it. I have the following questions:
      1. Is this normal performance for an Amazon EC2 server? I have read the following links: link1 and link2
      2. Even if the code is not that efficient, the server is handling a very low load (just one client); does the "inefficient" code justify such performance?
      3. My laptop is only dual core... Why would the Amazon EC2 server have worse performance than my laptop? How is this explained?
    Excuse me for my ignorance.

  • How to Architect a system on AWS for scaling (with a MySQL back-end)

    - by Edan Maor
    I'm trying to understand how to architect an Amazon Web Services application. As I understand it, the whole point of using something like AWS is to make eventual scaling easier, so I'm trying to understand how to do that. I have an instance running off of EBS (an EBS-based instance, not a regular instance). My application (a Django app) uses MySQL as a back-end. So the question is: where am I supposed to install MySQL? Do I install it on the same instance? In that case, as far as I can tell, I can't simply create more server instances from that image. Or am I supposed to spin up another server as a dedicated DB server and run off of that? Thanks for any help!

  • Problems when loop over a series of ssh-ed commands

    - by Jack Medley
    I have a series of server machines on which I want to run the same command. Each command takes hours, and (even though I am running the commands using nohup and setting them to run in the background) I have to wait for each to finish before the next starts. Here is roughly how I have set it up. On the host machine:

        for i in {1..9}; do ssh RemoteMachine${i} ./RunJobs.sh; done

    where RunJobs.sh on each remote machine is:

        source ~/.bash_profile
        cd AriadneMatching
        for file in FileDirectory/Input_*; do
          nohup ./Executable ${file} &
        done
        exit

    Does anyone know of a way so that I don't have to wait for each job to finish before the next starts? Or, alternatively, a better way of doing this? I have a feeling what I am doing is fairly sub-optimal. Cheers, Jack
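
    The usual culprit here: ssh does not return until the remote side's stdout/stderr close, and a bare "nohup ... &" in a non-interactive ssh session keeps them open. One common fix (a sketch) is to redirect the remote jobs' output and background the ssh calls themselves:

        for i in {1..9}; do
          ssh RemoteMachine${i} 'nohup ./RunJobs.sh > /dev/null 2>&1 &' &
        done
        wait    # optional: pause only if the host script needs all nine to have launched

    Redirecting to a per-machine log file instead of /dev/null keeps the output while still letting each session close immediately.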

  • How to use LVM on Rackspace Cloud

    - by batrick
    Dear all, I am trying to set up a simple but effective solution to back up my Rackspace cloud servers. These servers each run Subversion, Trac, and some database-backed custom PHP applications. My idea is to set up LVM and mount a volume under, say, /srv. In this volume, I keep the data from all applications. Instead of caring about how to back up each app in a different way (svn hotcopy, trac-admin hotcopy, a huge mess for MySQL), I simply take an LVM snapshot and back this one up to Cloud Files using the excellent cloudcity script (http://github.com/jspringman/cloudcity/blob/master/cloudcity). The advantage of this solution is that it is quick and easy, and LVM allows decent backups. As more apps are added, the backup script should not need to change much. The downside, and the main point of my question here, is that I am not sure how to get LVM working on the Rackspace cloud, because there is only one root volume and no service like Amazon's EBS. I was thinking it may be possible to create a large empty file and use this as a "physical volume". Has anybody done anything like this before? Or do you know why it can never work? It would be great to hear from you. Thanks, batrick
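
    The file-backed approach does work in principle: a large file can back a loop device, which LVM then treats as a physical volume. A minimal sketch (as root; sizes and names are placeholders, performance is worse than a real block device, and the loop mapping must be re-established after every reboot):

        dd if=/dev/zero of=/lvm-backing.img bs=1M count=20480   # 20 GB backing file
        losetup /dev/loop0 /lvm-backing.img
        pvcreate /dev/loop0
        vgcreate vg_data /dev/loop0
        lvcreate -L 15G -n lv_srv vg_data    # leave free space in the VG for snapshots
        mkfs.ext4 /dev/vg_data/lv_srv
        mount /dev/vg_data/lv_srv /srv

        # Each backup run can then snapshot, upload, and drop the snapshot:
        lvcreate -s -L 2G -n srv_snap /dev/vg_data/lv_srv
        # ... back up the snapshot with cloudcity ...
        lvremove -f /dev/vg_data/srv_snap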
